url
stringlengths
58
61
repository_url
stringclasses
1 value
labels_url
stringlengths
72
75
comments_url
stringlengths
67
70
events_url
stringlengths
65
68
html_url
stringlengths
46
51
id
int64
599M
2.12B
node_id
stringlengths
18
32
number
int64
1
6.65k
title
stringlengths
1
290
user
dict
labels
listlengths
0
4
state
stringclasses
2 values
locked
bool
1 class
assignee
dict
assignees
listlengths
0
4
milestone
dict
comments
int64
0
70
created_at
unknown
updated_at
unknown
closed_at
unknown
author_association
stringclasses
3 values
active_lock_reason
float64
draft
float64
0
1
pull_request
dict
body
stringlengths
0
228k
reactions
dict
timeline_url
stringlengths
67
70
performed_via_github_app
float64
state_reason
stringclasses
3 values
is_pull_request
bool
2 classes
https://api.github.com/repos/huggingface/datasets/issues/4997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4997/comments
https://api.github.com/repos/huggingface/datasets/issues/4997/events
https://github.com/huggingface/datasets/pull/4997
1,379,430,711
PR_kwDODunzps4_RrBU
4,997
Add support for parsing JSON files in array form
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
1
"2022-09-20T13:31:26Z"
"2022-09-20T15:42:40Z"
"2022-09-20T15:40:06Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4997.diff", "html_url": "https://github.com/huggingface/datasets/pull/4997", "merged_at": "2022-09-20T15:40:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/4997.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4997" }
Support parsing JSON files in the array form (top-level object is an array). For simplicity, `json.load` is used for decoding. This means the entire file is loaded into memory. If requested, we can optimize this by introducing a param similar to `lines` in [`pandas.read_json`](https://pandas.pydata.org/docs/reference/api/pandas.read_json.html), which, if set to `True`, would allow us to read in chunks. Fixes https://github.com/huggingface/datasets/issues/4963
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4997/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4997/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4996/comments
https://api.github.com/repos/huggingface/datasets/issues/4996/events
https://github.com/huggingface/datasets/issues/4996
1,379,345,161
I_kwDODunzps5SNyMJ
4,996
Dataset Viewer issue for Jean-Baptiste/wikiner_fr
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
closed
false
null
[]
null
2
"2022-09-20T12:32:07Z"
"2022-09-27T12:35:44Z"
"2022-09-27T12:35:44Z"
CONTRIBUTOR
null
null
null
### Link https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr ### Description ``` Error code: StreamingRowsError Exception: FileNotFoundError Message: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json' Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/responses/first_rows.py", line 337, in get_first_rows_response rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token) File "/src/services/worker/src/worker/utils.py", line 123, in decorator return func(*args, **kwargs) File "/src/services/worker/src/worker/responses/first_rows.py", line 77, in get_rows rows_plus_one = list(itertools.islice(ds, rows_max_number + 1)) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 718, in __iter__ for key, example in self._iter(): File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 708, in _iter yield from ex_iterable File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 112, in __iter__ yield from self.generate_examples_fn(**self.kwargs) File "/tmp/modules-cache/datasets_modules/datasets/Jean-Baptiste--wikiner_fr/683a580ba6ec769d508f7dfc603a651667b0ed3817b1ae5bfd45f97cc024923f/wikiner_fr.py", line 165, in _generate_examples dataset = Dataset.load_from_disk(filepath) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1210, in load_from_disk with open(Path(dataset_path, config.DATASET_STATE_JSON_FILENAME).as_posix(), encoding="utf-8") as state_file: FileNotFoundError: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json' ``` Is it an error with the dataset script, or the data itself, @huggingface/datasets? https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/tree/main ### Owner No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4996/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4996/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4995
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4995/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4995/comments
https://api.github.com/repos/huggingface/datasets/issues/4995/events
https://github.com/huggingface/datasets/issues/4995
1,379,108,482
I_kwDODunzps5SM4aC
4,995
Get a specific Exception when the dataset has no data
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
0
"2022-09-20T09:31:59Z"
"2022-09-21T12:21:25Z"
"2022-09-21T12:21:25Z"
CONTRIBUTOR
null
null
null
In the dataset viewer on the Hub (https://huggingface.co/datasets/glue/viewer), we would like (https://github.com/huggingface/moon-landing/issues/3882) to show a specific message when the repository lacks any data files. In that case, instead of showing a complex traceback, we want to show a call to action to help the user upload data. To do that, it would be very helpful to know for sure that the repository is missing any (supported) data files. It could be done by raising a custom exception, for example, `NoDataError`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4995/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4995/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4994/comments
https://api.github.com/repos/huggingface/datasets/issues/4994/events
https://github.com/huggingface/datasets/issues/4994
1,379,084,015
I_kwDODunzps5SMybv
4,994
delete the hardcoded license list in `datasets`
{ "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/julien-c", "id": 326577, "login": "julien-c", "node_id": "MDQ6VXNlcjMyNjU3Nw==", "organizations_url": "https://api.github.com/users/julien-c/orgs", "received_events_url": "https://api.github.com/users/julien-c/received_events", "repos_url": "https://api.github.com/users/julien-c/repos", "site_admin": false, "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "type": "User", "url": "https://api.github.com/users/julien-c" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
0
"2022-09-20T09:14:41Z"
"2022-09-22T11:45:47Z"
"2022-09-22T11:45:47Z"
MEMBER
null
null
null
> Feel free to delete the license list in `datasets` [...] > > Also FYI in #4926 I also removed all the validation steps anyway (language, license, types etc.) _Originally posted by @lhoestq in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238401662_ > [...], in my opinion we can just delete this file from `datasets`, the validation is happening hub-side anyways now? _Originally posted by @julien-c in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238390659_
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4994/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4994/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4993/comments
https://api.github.com/repos/huggingface/datasets/issues/4993/events
https://github.com/huggingface/datasets/pull/4993
1,379,044,435
PR_kwDODunzps4_QYas
4,993
fix: avoid casting tuples after Dataset.map
{ "avatar_url": "https://avatars.githubusercontent.com/u/5697926?v=4", "events_url": "https://api.github.com/users/szmoro/events{/privacy}", "followers_url": "https://api.github.com/users/szmoro/followers", "following_url": "https://api.github.com/users/szmoro/following{/other_user}", "gists_url": "https://api.github.com/users/szmoro/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/szmoro", "id": 5697926, "login": "szmoro", "node_id": "MDQ6VXNlcjU2OTc5MjY=", "organizations_url": "https://api.github.com/users/szmoro/orgs", "received_events_url": "https://api.github.com/users/szmoro/received_events", "repos_url": "https://api.github.com/users/szmoro/repos", "site_admin": false, "starred_url": "https://api.github.com/users/szmoro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/szmoro/subscriptions", "type": "User", "url": "https://api.github.com/users/szmoro" }
[]
closed
false
null
[]
null
1
"2022-09-20T08:45:16Z"
"2022-09-20T16:11:27Z"
"2022-09-20T13:08:29Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4993.diff", "html_url": "https://github.com/huggingface/datasets/pull/4993", "merged_at": "2022-09-20T13:08:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/4993.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4993" }
This PR updates features.py to avoid casting tuples to lists when reading the results of Dataset.map as suggested by @lhoestq [here](https://github.com/huggingface/datasets/issues/4676#issuecomment-1187371367) in https://github.com/huggingface/datasets/issues/4676.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4993/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4993/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4992/comments
https://api.github.com/repos/huggingface/datasets/issues/4992/events
https://github.com/huggingface/datasets/pull/4992
1,379,031,842
PR_kwDODunzps4_QVw4
4,992
Support streaming iwslt2017 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-09-20T08:35:41Z"
"2022-09-20T09:27:55Z"
"2022-09-20T09:15:24Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4992.diff", "html_url": "https://github.com/huggingface/datasets/pull/4992", "merged_at": "2022-09-20T09:15:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/4992.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4992" }
Support streaming iwslt2017 dataset. Once this PR is merged: - [x] Remove old ".tgz" data files from the Hub.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4992/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4992/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4991
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4991/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4991/comments
https://api.github.com/repos/huggingface/datasets/issues/4991/events
https://github.com/huggingface/datasets/pull/4991
1,378,898,752
PR_kwDODunzps4_P5hI
4,991
Fix missing tags in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-09-20T06:42:07Z"
"2022-09-22T12:25:32Z"
"2022-09-20T07:37:30Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4991.diff", "html_url": "https://github.com/huggingface/datasets/pull/4991", "merged_at": "2022-09-20T07:37:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/4991.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4991" }
Fix missing tags in dataset cards: - aeslc - empathetic_dialogues - event2Mind - gap - iwslt2017 - newsgroup - qa4mre - scicite This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896 - #4908 - #4921 - #4931 - #4979
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4991/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4991/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4990/comments
https://api.github.com/repos/huggingface/datasets/issues/4990/events
https://github.com/huggingface/datasets/issues/4990
1,378,120,806
I_kwDODunzps5SJHRm
4,990
"no-token" is passed to `huggingface_hub` when token is `None`
{ "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Wauplin", "id": 11801849, "login": "Wauplin", "node_id": "MDQ6VXNlcjExODAxODQ5", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "repos_url": "https://api.github.com/users/Wauplin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "type": "User", "url": "https://api.github.com/users/Wauplin" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
6
"2022-09-19T15:14:40Z"
"2022-09-30T09:16:00Z"
"2022-09-30T09:16:00Z"
CONTRIBUTOR
null
null
null
## Describe the bug In the 2 lines listed below, a token is passed to `huggingface_hub` to get information from a dataset. If no token is provided, a "no-token" string is passed. What is the purpose of it ? If no real, I would prefer if the `None` value could be sent directly to be handle by `huggingface_hub`. I feel that here it is working because we assume the token will never be validated. https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L753 https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L1121 ## Expected results Pass `token=None` to `huggingface_hub`. ## Actual results `token="no-token"` is passed. ## Environment info `huggingface_hub v0.10.0dev`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4990/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4990/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4989
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4989/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4989/comments
https://api.github.com/repos/huggingface/datasets/issues/4989/events
https://github.com/huggingface/datasets/issues/4989
1,376,832,233
I_kwDODunzps5SEMrp
4,989
Running add_column() seems to corrupt existing sequence-type column info
{ "avatar_url": "https://avatars.githubusercontent.com/u/93728165?v=4", "events_url": "https://api.github.com/users/derek-rocheleau/events{/privacy}", "followers_url": "https://api.github.com/users/derek-rocheleau/followers", "following_url": "https://api.github.com/users/derek-rocheleau/following{/other_user}", "gists_url": "https://api.github.com/users/derek-rocheleau/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/derek-rocheleau", "id": 93728165, "login": "derek-rocheleau", "node_id": "U_kgDOBZYtpQ", "organizations_url": "https://api.github.com/users/derek-rocheleau/orgs", "received_events_url": "https://api.github.com/users/derek-rocheleau/received_events", "repos_url": "https://api.github.com/users/derek-rocheleau/repos", "site_admin": false, "starred_url": "https://api.github.com/users/derek-rocheleau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/derek-rocheleau/subscriptions", "type": "User", "url": "https://api.github.com/users/derek-rocheleau" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
1
"2022-09-17T17:42:05Z"
"2022-09-19T12:54:54Z"
"2022-09-19T12:54:54Z"
NONE
null
null
null
I have a dataset that contains a column ("foo") that is a sequence type of length 4. So when I run .to_pandas() on it, the resulting dataframe correctly contains 4 columns - foo_0, foo_1, foo_2, foo_3. So the 1st row of the dataframe might look like: ds = load_dataset(...) df = ds.to_pandas() df: foo_0 | foo_1 | foo_2 | foo_3 0.0 | 1.0 | 2.0 | 3.0 If I run .add_column("new_col", data) on the dataset, and then .to_pandas() on the resulting new dataset, the resulting dataframe contains only 2 columns - foo, new_col. The values in column foo are lists of length 4, the 4 elements that should have been split into separate columns. Dataframe 1st row would be: ds = load_dataset(...) new_ds = ds.add_column("new_col", data) df = new_ds.to_pandas() df: foo | new_col [0.0, 1.0, 2.0, 3.0] | new_val I've explored the 2 datasets in a debugger and haven't noticed any changes to any attributes related to the foo column, but I can't determine why the dataframes are so different.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4989/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4989/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4988/comments
https://api.github.com/repos/huggingface/datasets/issues/4988/events
https://github.com/huggingface/datasets/issues/4988
1,376,096,584
I_kwDODunzps5SBZFI
4,988
Add `IterableDataset.from_generator` to the API
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4", "events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}", "followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers", "following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}", "gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hamid-vakilzadeh", "id": 56002455, "login": "hamid-vakilzadeh", "node_id": "MDQ6VXNlcjU2MDAyNDU1", "organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs", "received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events", "repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions", "type": "User", "url": "https://api.github.com/users/hamid-vakilzadeh" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4", "events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}", "followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers", "following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}", "gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hamid-vakilzadeh", "id": 56002455, "login": "hamid-vakilzadeh", "node_id": "MDQ6VXNlcjU2MDAyNDU1", "organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs", "received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events", "repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions", "type": "User", "url": "https://api.github.com/users/hamid-vakilzadeh" } ]
null
3
"2022-09-16T15:19:41Z"
"2022-10-05T12:10:49Z"
"2022-10-05T12:10:49Z"
CONTRIBUTOR
null
null
null
We've just added `Dataset.from_generator` to the API. It would also be cool to add `IterableDataset.from_generator` to support creating an iterable dataset from a generator. cc @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4988/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4988/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4987
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4987/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4987/comments
https://api.github.com/repos/huggingface/datasets/issues/4987/events
https://github.com/huggingface/datasets/pull/4987
1,376,006,477
PR_kwDODunzps4_GlIu
4,987
Embed image/audio data in dl_and_prepare parquet
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
1
"2022-09-16T14:09:27Z"
"2022-09-16T16:24:47Z"
"2022-09-16T16:22:35Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4987.diff", "html_url": "https://github.com/huggingface/datasets/pull/4987", "merged_at": "2022-09-16T16:22:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/4987.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4987" }
Embed the bytes of the image or audio files in the Parquet files directly, instead of having a "path" that points to a local file. Indeed Parquet files are often used to share data or to be used by workers that may not have access to the local files.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4987/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4987/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4986
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4986/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4986/comments
https://api.github.com/repos/huggingface/datasets/issues/4986/events
https://github.com/huggingface/datasets/pull/4986
1,375,895,035
PR_kwDODunzps4_GNSd
4,986
[doc] Fix broken snippet that had too many quotes
{ "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tomaarsen", "id": 37621491, "login": "tomaarsen", "node_id": "MDQ6VXNlcjM3NjIxNDkx", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "repos_url": "https://api.github.com/users/tomaarsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "type": "User", "url": "https://api.github.com/users/tomaarsen" }
[]
closed
false
null
[]
null
2
"2022-09-16T12:41:07Z"
"2022-09-16T22:12:21Z"
"2022-09-16T17:32:14Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4986.diff", "html_url": "https://github.com/huggingface/datasets/pull/4986", "merged_at": "2022-09-16T17:32:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/4986.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4986" }
Hello! ### Pull request overview * Fix broken snippet in https://huggingface.co/docs/datasets/main/en/process that has too many quotes ### Details The snippet in question can be found here: https://huggingface.co/docs/datasets/main/en/process#map This screenshot shows the issue, there is a quote too many, causing the snippet to be colored incorrectly: ![image](https://user-images.githubusercontent.com/37621491/190640627-f7587362-0e44-4464-a5d1-a0b98df6986f.png) The change speaks for itself. Thank you for the detailed documentation, by the way. - Tom Aarsen
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4986/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4986/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4985/comments
https://api.github.com/repos/huggingface/datasets/issues/4985/events
https://github.com/huggingface/datasets/pull/4985
1,375,807,768
PR_kwDODunzps4_F6kU
4,985
Prefer split patterns from directories over split patterns from filenames
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
closed
false
null
[]
null
4
"2022-09-16T11:20:40Z"
"2022-11-02T11:54:28Z"
"2022-09-29T08:07:49Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4985.diff", "html_url": "https://github.com/huggingface/datasets/pull/4985", "merged_at": "2022-09-29T08:07:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/4985.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4985" }
related to https://github.com/huggingface/datasets/issues/4895
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4985/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4985/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4984/comments
https://api.github.com/repos/huggingface/datasets/issues/4984/events
https://github.com/huggingface/datasets/pull/4984
1,375,690,330
PR_kwDODunzps4_FhTm
4,984
docs: ✏️ add links to the Datasets API
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
closed
false
null
[]
null
2
"2022-09-16T09:34:12Z"
"2022-09-16T13:10:14Z"
"2022-09-16T13:07:33Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4984.diff", "html_url": "https://github.com/huggingface/datasets/pull/4984", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4984.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4984" }
I added some links to the Datasets API in the docs. See https://github.com/huggingface/datasets-server/pull/566 for a companion PR in the datasets-server. The idea is to improve the discovery of the API through the docs. I'm a bit shy about pasting a lot of links to the API in the docs, so it's minimal for now. I'm interested in ideas to integrate the API better in these docs without being too much. cc @lhoestq @julien-c @albertvillanova @stevhliu.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4984/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4984/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4983/comments
https://api.github.com/repos/huggingface/datasets/issues/4983/events
https://github.com/huggingface/datasets/issues/4983
1,375,667,654
I_kwDODunzps5R_wXG
4,983
How to convert torch.utils.data.Dataset to huggingface dataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/77595952?v=4", "events_url": "https://api.github.com/users/DEROOCE/events{/privacy}", "followers_url": "https://api.github.com/users/DEROOCE/followers", "following_url": "https://api.github.com/users/DEROOCE/following{/other_user}", "gists_url": "https://api.github.com/users/DEROOCE/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DEROOCE", "id": 77595952, "login": "DEROOCE", "node_id": "MDQ6VXNlcjc3NTk1OTUy", "organizations_url": "https://api.github.com/users/DEROOCE/orgs", "received_events_url": "https://api.github.com/users/DEROOCE/received_events", "repos_url": "https://api.github.com/users/DEROOCE/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DEROOCE/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DEROOCE/subscriptions", "type": "User", "url": "https://api.github.com/users/DEROOCE" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
15
"2022-09-16T09:15:10Z"
"2023-12-14T20:54:15Z"
"2022-09-20T11:23:43Z"
NONE
null
null
null
I look through the huggingface dataset docs, and it seems that there is no offical support function to convert `torch.utils.data.Dataset` to huggingface dataset. However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below: ```python from datasets import Dataset data = [[1, 2],[3, 4]] ds = Dataset.from_dict({"data": data}) ds = ds.with_format("torch") ds[0] ds[:2] ``` So is there something I miss, or there IS no function to convert `torch.utils.data.Dataset` to huggingface dataset. If so, is there any way to do this convert? Thanks.
{ "+1": 6, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/4983/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4983/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4982
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4982/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4982/comments
https://api.github.com/repos/huggingface/datasets/issues/4982/events
https://github.com/huggingface/datasets/issues/4982
1,375,604,693
I_kwDODunzps5R_g_V
4,982
Create dataset_infos.json with VALIDATION and TEST splits
{ "avatar_url": "https://avatars.githubusercontent.com/u/26695348?v=4", "events_url": "https://api.github.com/users/skalinin/events{/privacy}", "followers_url": "https://api.github.com/users/skalinin/followers", "following_url": "https://api.github.com/users/skalinin/following{/other_user}", "gists_url": "https://api.github.com/users/skalinin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/skalinin", "id": 26695348, "login": "skalinin", "node_id": "MDQ6VXNlcjI2Njk1MzQ4", "organizations_url": "https://api.github.com/users/skalinin/orgs", "received_events_url": "https://api.github.com/users/skalinin/received_events", "repos_url": "https://api.github.com/users/skalinin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/skalinin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skalinin/subscriptions", "type": "User", "url": "https://api.github.com/users/skalinin" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
3
"2022-09-16T08:21:19Z"
"2022-09-28T07:59:39Z"
"2022-09-28T07:59:39Z"
NONE
null
null
null
The problem is described in that [issue](https://github.com/huggingface/datasets/issues/4895#issuecomment-1247975569). > When I try to create data_infos.json using datasets-cli test Peter.py --save_infos --all_configs I get an error: > ValueError: Unknown split "test". Should be one of ['train']. > > The data_infos.json is created perfectly fine when I use only one split - datasets.Split.TRAIN > > You can find the code here: https://huggingface.co/datasets/sberbank-ai/Peter/tree/add_splits (add_splits branch) I tried to clear the cache folder, than I got an another error. I run: ``` git clone https://huggingface.co/datasets/sberbank-ai/Peter cd Peter git checkout add_splits # switch to a add_splits branch rm dataset_infos.json # remove local dataset_infos.json rm -r ~/.cache/huggingface # remove cached dataset_infos.json datasets-cli test Peter.py --save_infos --all_configs # trying to create new dataset_infos.json ``` The error message: ``` Using custom data configuration default Testing builder 'default' (1/1) Downloading and preparing dataset peter/default to /Users/kalinin/.cache/huggingface/datasets/peter/default/0.0.0/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d... Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 5160.63it/s] Extracting data files: 0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last): File "/usr/local/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.9/site-packages/datasets/commands/datasets_cli.py", line 39, in main service.run() File "/usr/local/lib/python3.9/site-packages/datasets/commands/test.py", line 137, in run builder.download_and_prepare( File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 1227, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 771, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/kalinin/.cache/huggingface/modules/datasets_modules/datasets/Peter/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d/Peter.py", line 23, in _split_generators data_files = dl_manager.download_and_extract(_URLS) File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 431, in download_and_extract return self.extract(self.download(url_or_urls)) File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 403, in extract extracted_paths = map_nested( File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 393, in map_nested mapped = [ File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 394, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 330, in _single_map_nested return function(data_struct) File "/usr/local/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 213, in cached_path output_path = ExtractManager(cache_dir=download_config.cache_dir).extract( File "/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 46, in extract self.extractor.extract(input_path, output_path, extractor_format) File "/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 263, in extract with FileLock(lock_path): File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 399, in __init__ max_filename_length = os.statvfs(os.path.dirname(lock_file)).f_namemax FileNotFoundError: [Errno 2] No such file or directory: '' Exception ignored in: <function BaseFileLock.__del__ at 0x11caeec10> Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 328, in __del__ self.release(force=True) File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 303, in release with self._thread_lock: AttributeError: 'UnixFileLock' object has no attribute '_thread_lock' Extracting data files: 0%| | 0/4 [00:00<?, ?it/s] ``` Can you help me please? ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-12.5.1-x86_64-i386-64bit - Python version: 3.9.5 - PyArrow version: 9.0.0 - Pandas version: 1.2.4
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4982/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4982/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4981
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4981/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4981/comments
https://api.github.com/repos/huggingface/datasets/issues/4981/events
https://github.com/huggingface/datasets/issues/4981
1,375,086,773
I_kwDODunzps5R9ii1
4,981
Can't create a dataset with `float16` features
{ "avatar_url": "https://avatars.githubusercontent.com/u/15098095?v=4", "events_url": "https://api.github.com/users/dconathan/events{/privacy}", "followers_url": "https://api.github.com/users/dconathan/followers", "following_url": "https://api.github.com/users/dconathan/following{/other_user}", "gists_url": "https://api.github.com/users/dconathan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dconathan", "id": 15098095, "login": "dconathan", "node_id": "MDQ6VXNlcjE1MDk4MDk1", "organizations_url": "https://api.github.com/users/dconathan/orgs", "received_events_url": "https://api.github.com/users/dconathan/received_events", "repos_url": "https://api.github.com/users/dconathan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dconathan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dconathan/subscriptions", "type": "User", "url": "https://api.github.com/users/dconathan" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
7
"2022-09-15T21:03:24Z"
"2023-03-22T21:40:09Z"
null
CONTRIBUTOR
null
null
null
## Describe the bug I can't create a dataset with `float16` features. I understand from the traceback that this is a `pyarrow` error, but I don't see anywhere in the `datasets` documentation about how to successfully do this. Is it actually supported? I've tried older versions of `pyarrow` as well with the same exact error. The bug seems to arise from `datasets` casting the values to `double` and then `pyarrow` doesn't know how to convert those back to `float16`... does that sound right? Is there a way to bypass this since it's not necessary in the `numpy` and `torch` cases? Thanks! ## Steps to reproduce the bug All of the following raise the following error with the same exact (as far as I can tell) traceback: ```python ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float ``` ```python from datasets import Dataset, Features, Value Dataset.from_dict({"x": [0.0, 1.0, 2.0]}, features=Features(x=Value("float16"))) import numpy as np Dataset.from_dict({"x": np.arange(3, dtype=np.float16)}, features=Features(x=Value("float16"))) import torch Dataset.from_dict({"x": torch.arange(3).to(torch.float16)}, features=Features(x=Value("float16"))) ``` ## Expected results A dataset with `float16` features is successfully created. ## Actual results ```python --------------------------------------------------------------------------- ArrowNotImplementedError Traceback (most recent call last) Cell In [14], line 1 ----> 1 Dataset.from_dict({"x": [1.0, 2.0, 3.0]}, features=Features(x=Value("float16"))) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py:870, in Dataset.from_dict(cls, mapping, features, info, split) 865 mapping = features.encode_batch(mapping) 866 mapping = { 867 col: OptimizedTypedSequence(data, type=features[col] if features is not None else None, col=col) 868 for col, data in mapping.items() 869 } --> 870 pa_table = InMemoryTable.from_pydict(mapping=mapping) 871 if info.features is None: 872 info.features = Features({col: ts.get_inferred_type() for col, ts in mapping.items()}) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:750, in InMemoryTable.from_pydict(cls, *args, **kwargs) 734 @classmethod 735 def from_pydict(cls, *args, **kwargs): 736 """ 737 Construct a Table from Arrow arrays or columns 738 (...) 748 :class:`datasets.table.Table`: 749 """ --> 750 return cls(pa.Table.from_pydict(*args, **kwargs)) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:3648, in pyarrow.lib.Table.from_pydict() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:5174, in pyarrow.lib._from_pydict() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:343, in pyarrow.lib.asarray() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:231, in pyarrow.lib.array() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py:197, in TypedSequence.__arrow_array__(self, type) 192 # otherwise we can finally use the user's type 193 elif type is not None: 194 # We use cast_array_to_feature to support casting to custom types like Audio and Image 195 # Also, when trying type "string", we don't want to convert integers or floats to "string". 196 # We only do it if trying_type is False - since this is what the user asks for. --> 197 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) 198 return out 199 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs) 1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1682 else: -> 1683 return func(array, *args, **kwargs) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1853, in cast_array_to_feature(array, feature, allow_number_to_str) 1851 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str) 1852 elif not isinstance(feature, (Sequence, dict, list, tuple)): -> 1853 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) 1854 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs) 1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1682 else: -> 1683 return func(array, *args, **kwargs) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1762, in array_cast(array, pa_type, allow_number_to_str) 1760 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type): 1761 raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}") -> 1762 return array.cast(pa_type) 1763 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}") File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:919, in pyarrow.lib.Array.cast() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/compute.py:389, in cast(arr, target_type, safe, options) 387 else: 388 options = CastOptions.safe(target_type) --> 389 return call_function("cast", [arr], options) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:560, in pyarrow._compute.call_function() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:355, in pyarrow._compute.Function.call() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:121, in pyarrow.lib.check_status() ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float ``` ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4981/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4981/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4980/comments
https://api.github.com/repos/huggingface/datasets/issues/4980/events
https://github.com/huggingface/datasets/issues/4980
1,374,868,083
I_kwDODunzps5R8tJz
4,980
Make `pyarrow` optional
{ "avatar_url": "https://avatars.githubusercontent.com/u/240344?v=4", "events_url": "https://api.github.com/users/KOLANICH/events{/privacy}", "followers_url": "https://api.github.com/users/KOLANICH/followers", "following_url": "https://api.github.com/users/KOLANICH/following{/other_user}", "gists_url": "https://api.github.com/users/KOLANICH/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/KOLANICH", "id": 240344, "login": "KOLANICH", "node_id": "MDQ6VXNlcjI0MDM0NA==", "organizations_url": "https://api.github.com/users/KOLANICH/orgs", "received_events_url": "https://api.github.com/users/KOLANICH/received_events", "repos_url": "https://api.github.com/users/KOLANICH/repos", "site_admin": false, "starred_url": "https://api.github.com/users/KOLANICH/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KOLANICH/subscriptions", "type": "User", "url": "https://api.github.com/users/KOLANICH" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
3
"2022-09-15T17:38:03Z"
"2022-09-16T17:23:47Z"
"2022-09-16T17:23:47Z"
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** Is `pyarrow` really needed for every dataset? **Describe the solution you'd like** It is made optional. **Describe alternatives you've considered** Likely, no.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4980/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4980/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4979
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4979/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4979/comments
https://api.github.com/repos/huggingface/datasets/issues/4979/events
https://github.com/huggingface/datasets/pull/4979
1,374,820,758
PR_kwDODunzps4_CouM
4,979
Fix missing tags in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-09-15T16:51:03Z"
"2022-09-22T12:37:55Z"
"2022-09-15T17:12:09Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4979.diff", "html_url": "https://github.com/huggingface/datasets/pull/4979", "merged_at": "2022-09-15T17:12:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/4979.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4979" }
Fix missing tags in dataset cards: - amazon_us_reviews - art - discofuse - indic_glue - ubuntu_dialogs_corpus This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896 - #4908 - #4921 - #4931
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4979/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4979/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4978
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4978/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4978/comments
https://api.github.com/repos/huggingface/datasets/issues/4978/events
https://github.com/huggingface/datasets/pull/4978
1,374,271,504
PR_kwDODunzps4_Axnh
4,978
Update IndicGLUE download links
{ "avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4", "events_url": "https://api.github.com/users/sumanthd17/events{/privacy}", "followers_url": "https://api.github.com/users/sumanthd17/followers", "following_url": "https://api.github.com/users/sumanthd17/following{/other_user}", "gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sumanthd17", "id": 28291870, "login": "sumanthd17", "node_id": "MDQ6VXNlcjI4MjkxODcw", "organizations_url": "https://api.github.com/users/sumanthd17/orgs", "received_events_url": "https://api.github.com/users/sumanthd17/received_events", "repos_url": "https://api.github.com/users/sumanthd17/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions", "type": "User", "url": "https://api.github.com/users/sumanthd17" }
[]
closed
false
null
[]
null
1
"2022-09-15T10:05:57Z"
"2022-09-15T22:00:20Z"
"2022-09-15T21:57:34Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4978.diff", "html_url": "https://github.com/huggingface/datasets/pull/4978", "merged_at": "2022-09-15T21:57:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/4978.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4978" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4978/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4978/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4977
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4977/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4977/comments
https://api.github.com/repos/huggingface/datasets/issues/4977/events
https://github.com/huggingface/datasets/issues/4977
1,372,962,157
I_kwDODunzps5R1b1t
4,977
Providing dataset size
{ "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sashavor", "id": 14205986, "login": "sashavor", "node_id": "MDQ6VXNlcjE0MjA1OTg2", "organizations_url": "https://api.github.com/users/sashavor/orgs", "received_events_url": "https://api.github.com/users/sashavor/received_events", "repos_url": "https://api.github.com/users/sashavor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "type": "User", "url": "https://api.github.com/users/sashavor" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
3
"2022-09-14T13:09:27Z"
"2022-09-15T16:03:58Z"
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** Especially for big datasets like [LAION](https://huggingface.co/datasets/laion/laion2B-en/), it's hard to know exactly the downloaded size (because there are many files and you don't have their exact size when downloaded). **Describe the solution you'd like** Auto-populating the downloaded dataset size on the dataset page would be really useful, including that of each split (when there are some). **Describe alternatives you've considered** People should be adding this to dataset cards, but I don't think that is systematically the case :slightly_smiling_face: **Additional context** Mentioned to @lhoestq
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4977/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4977/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4976
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4976/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4976/comments
https://api.github.com/repos/huggingface/datasets/issues/4976/events
https://github.com/huggingface/datasets/issues/4976
1,372,322,382
I_kwDODunzps5Ry_pO
4,976
Hope to adapt Python3.9 as soon as possible
{ "avatar_url": "https://avatars.githubusercontent.com/u/74012141?v=4", "events_url": "https://api.github.com/users/RedHeartSecretMan/events{/privacy}", "followers_url": "https://api.github.com/users/RedHeartSecretMan/followers", "following_url": "https://api.github.com/users/RedHeartSecretMan/following{/other_user}", "gists_url": "https://api.github.com/users/RedHeartSecretMan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RedHeartSecretMan", "id": 74012141, "login": "RedHeartSecretMan", "node_id": "MDQ6VXNlcjc0MDEyMTQx", "organizations_url": "https://api.github.com/users/RedHeartSecretMan/orgs", "received_events_url": "https://api.github.com/users/RedHeartSecretMan/received_events", "repos_url": "https://api.github.com/users/RedHeartSecretMan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RedHeartSecretMan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RedHeartSecretMan/subscriptions", "type": "User", "url": "https://api.github.com/users/RedHeartSecretMan" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
3
"2022-09-14T04:42:22Z"
"2022-09-26T16:32:35Z"
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context about the feature request here.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4976/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4976/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4975
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4975/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4975/comments
https://api.github.com/repos/huggingface/datasets/issues/4975/events
https://github.com/huggingface/datasets/pull/4975
1,371,703,691
PR_kwDODunzps4-4NXX
4,975
Add `fn_kwargs` param to `IterableDataset.map`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
4
"2022-09-13T16:19:05Z"
"2023-05-05T16:53:43Z"
"2022-09-13T16:45:34Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4975.diff", "html_url": "https://github.com/huggingface/datasets/pull/4975", "merged_at": "2022-09-13T16:45:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/4975.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4975" }
Add the `fn_kwargs` parameter to `IterableDataset.map`. ("Resolves" https://discuss.huggingface.co/t/how-to-use-large-image-text-datasets-in-hugging-face-hub-without-downloading-for-free/22780/3)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4975/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4975/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4974
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4974/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4974/comments
https://api.github.com/repos/huggingface/datasets/issues/4974/events
https://github.com/huggingface/datasets/pull/4974
1,371,682,020
PR_kwDODunzps4-4Iri
4,974
[GH->HF] Part 2: Remove all dataset scripts from github
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
6
"2022-09-13T16:01:12Z"
"2022-10-03T17:09:39Z"
"2022-10-03T17:07:32Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4974.diff", "html_url": "https://github.com/huggingface/datasets/pull/4974", "merged_at": "2022-10-03T17:07:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/4974.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4974" }
Now that all the datasets live on the Hub we can remove the /datasets directory that contains all the dataset scripts of this repository - [x] Needs https://github.com/huggingface/datasets/pull/4973 to be merged first - [x] and PR to be enabled on the Hub for non-namespaced datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4974/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4974/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4973
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4973/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4973/comments
https://api.github.com/repos/huggingface/datasets/issues/4973/events
https://github.com/huggingface/datasets/pull/4973
1,371,600,074
PR_kwDODunzps4-33JW
4,973
[GH->HF] Load datasets from the Hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
"2022-09-13T15:01:41Z"
"2023-09-24T10:06:02Z"
"2022-09-15T15:24:26Z"
MEMBER
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/4973.diff", "html_url": "https://github.com/huggingface/datasets/pull/4973", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4973.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4973" }
Currently datasets with no namespace (e.g. squad, glue) are loaded from github. In this PR I changed this logic to use the Hugging Face Hub instead. This is the first step in removing all the dataset scripts in this repository related to discussions in https://github.com/huggingface/datasets/pull/4059 (I should have continued from this PR actually)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4973/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4973/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4972
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4972/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4972/comments
https://api.github.com/repos/huggingface/datasets/issues/4972/events
https://github.com/huggingface/datasets/pull/4972
1,371,443,306
PR_kwDODunzps4-3VVF
4,972
Fix map batched with torch output
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
1
"2022-09-13T13:16:34Z"
"2022-09-20T09:42:02Z"
"2022-09-20T09:39:33Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4972.diff", "html_url": "https://github.com/huggingface/datasets/pull/4972", "merged_at": "2022-09-20T09:39:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/4972.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4972" }
Reported in https://discuss.huggingface.co/t/typeerror-when-applying-map-after-set-format-type-torch/23067/2 Currently it fails if one uses batched `map` and the map function returns a torch tensor. I fixed it for torch, tf, jax and pandas series.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4972/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4972/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4971
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4971/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4971/comments
https://api.github.com/repos/huggingface/datasets/issues/4971/events
https://github.com/huggingface/datasets/pull/4971
1,370,319,516
PR_kwDODunzps4-zk3g
4,971
Preserve non-`input_colums` in `Dataset.map` if `input_columns` are specified
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
1
"2022-09-12T18:08:24Z"
"2022-09-13T13:51:08Z"
"2022-09-13T13:48:45Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4971.diff", "html_url": "https://github.com/huggingface/datasets/pull/4971", "merged_at": "2022-09-13T13:48:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/4971.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4971" }
Currently, if the `input_columns` list in `Dataset.map` is specified, the columns not in that list are dropped after the `map` transform. This makes the behavior inconsistent with `IterableDataset.map`. (It seems this issue was introduced by mistake in https://github.com/huggingface/datasets/pull/2246) Fix https://github.com/huggingface/datasets/issues/4858
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4971/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4971/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4970
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4970/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4970/comments
https://api.github.com/repos/huggingface/datasets/issues/4970/events
https://github.com/huggingface/datasets/pull/4970
1,369,433,074
PR_kwDODunzps4-wkY2
4,970
Support streaming nli_tr dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-09-12T07:48:45Z"
"2022-09-12T08:45:04Z"
"2022-09-12T08:43:08Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4970.diff", "html_url": "https://github.com/huggingface/datasets/pull/4970", "merged_at": "2022-09-12T08:43:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/4970.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4970" }
Support streaming nli_tr dataset. This PR removes legacy `codecs.open` and replaces it with `open` that supports passing encoding. Fix #3186.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4970/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4970/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4969
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4969/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4969/comments
https://api.github.com/repos/huggingface/datasets/issues/4969/events
https://github.com/huggingface/datasets/pull/4969
1,369,334,740
PR_kwDODunzps4-wPOk
4,969
Fix data URL and metadata of vivos dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-09-12T06:12:34Z"
"2022-09-12T07:16:15Z"
"2022-09-12T07:14:19Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4969.diff", "html_url": "https://github.com/huggingface/datasets/pull/4969", "merged_at": "2022-09-12T07:14:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/4969.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4969" }
After contacting the authors of the VIVOS dataset to report that their data server is down, we have received a reply from Hieu-Thi Luong that their data is now hosted on Zenodo: https://doi.org/10.5281/zenodo.7068130 This PR updates their data URL and some metadata (homepage, citation and license). Fix #4936.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4969/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4969/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4968
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4968/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4968/comments
https://api.github.com/repos/huggingface/datasets/issues/4968/events
https://github.com/huggingface/datasets/pull/4968
1,369,312,877
PR_kwDODunzps4-wKkw
4,968
Support streaming compguesswhat dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-09-12T05:42:24Z"
"2022-09-12T08:00:06Z"
"2022-09-12T07:58:06Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4968.diff", "html_url": "https://github.com/huggingface/datasets/pull/4968", "merged_at": "2022-09-12T07:58:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/4968.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4968" }
Support streaming `compguesswhat` dataset. Fix #3191.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4968/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4968/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4967
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4967/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4967/comments
https://api.github.com/repos/huggingface/datasets/issues/4967/events
https://github.com/huggingface/datasets/pull/4967
1,369,092,452
PR_kwDODunzps4-vbS-
4,967
Strip "/" in local dataset path to avoid empty dataset name error
{ "avatar_url": "https://avatars.githubusercontent.com/u/40543?v=4", "events_url": "https://api.github.com/users/apohllo/events{/privacy}", "followers_url": "https://api.github.com/users/apohllo/followers", "following_url": "https://api.github.com/users/apohllo/following{/other_user}", "gists_url": "https://api.github.com/users/apohllo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/apohllo", "id": 40543, "login": "apohllo", "node_id": "MDQ6VXNlcjQwNTQz", "organizations_url": "https://api.github.com/users/apohllo/orgs", "received_events_url": "https://api.github.com/users/apohllo/received_events", "repos_url": "https://api.github.com/users/apohllo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/apohllo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apohllo/subscriptions", "type": "User", "url": "https://api.github.com/users/apohllo" }
[]
closed
false
null
[]
null
2
"2022-09-11T23:09:16Z"
"2022-09-29T10:46:21Z"
"2022-09-12T15:30:38Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4967.diff", "html_url": "https://github.com/huggingface/datasets/pull/4967", "merged_at": "2022-09-12T15:30:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/4967.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4967" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4967/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4967/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4965
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4965/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4965/comments
https://api.github.com/repos/huggingface/datasets/issues/4965/events
https://github.com/huggingface/datasets/issues/4965
1,368,661,002
I_kwDODunzps5RlBwK
4,965
[Apple M1] MemoryError: Cannot allocate write+execute memory for ffi.callback()
{ "avatar_url": "https://avatars.githubusercontent.com/u/35718590?v=4", "events_url": "https://api.github.com/users/hoangtnm/events{/privacy}", "followers_url": "https://api.github.com/users/hoangtnm/followers", "following_url": "https://api.github.com/users/hoangtnm/following{/other_user}", "gists_url": "https://api.github.com/users/hoangtnm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hoangtnm", "id": 35718590, "login": "hoangtnm", "node_id": "MDQ6VXNlcjM1NzE4NTkw", "organizations_url": "https://api.github.com/users/hoangtnm/orgs", "received_events_url": "https://api.github.com/users/hoangtnm/received_events", "repos_url": "https://api.github.com/users/hoangtnm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hoangtnm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hoangtnm/subscriptions", "type": "User", "url": "https://api.github.com/users/hoangtnm" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
5
"2022-09-10T15:55:49Z"
"2024-01-12T14:37:32Z"
"2023-07-21T14:45:50Z"
NONE
null
null
null
## Describe the bug I'm trying to run `cast_column("audio", Audio())` on Apple M1 Pro, but it seems that it doesn't work. ## Steps to reproduce the bug ```python import datasets dataset = load_dataset("csv", data_files="./train.csv")["train"] dataset = dataset.map(lambda x: {"audio": str(DATA_DIR / "audio" / x["audio"])}) dataset = dataset.cast_column("audio", Audio()) dataset[0] ``` ## Expected results ``` {'audio': {'bytes': None, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav'}, 'english_transcription': 'I would like to set up a joint account with my partner', 'intent_class': 11, 'lang_id': 4, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'transcription': 'I would like to set up a joint account with my partner'} ``` ## Actual results ````--------------------------------------------------------------------------- MemoryError Traceback (most recent call last) Input In [6], in <cell line: 1>() ----> 1 dataset[0] File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2165, in Dataset.__getitem__(self, key) 2163 def __getitem__(self, key): # noqa: F811 2164 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 2165 return self._getitem( 2166 key, 2167 ) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2150, in Dataset._getitem(self, key, decoded, **kwargs) 2148 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs) 2149 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) -> 2150 formatted_output = format_table( 2151 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 2152 ) 2153 return formatted_output File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns) 530 python_formatter = PythonFormatter(features=None) 531 if format_columns is None: --> 532 return formatter(pa_table, query_type=query_type) 533 elif query_type == "column": 534 if key in format_columns: File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type) 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 280 if query_type == "row": --> 281 return self.format_row(pa_table) 282 elif query_type == "column": 283 return self.format_column(pa_table) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:312, in PythonFormatter.format_row(self, pa_table) 310 row = self.python_arrow_extractor().extract_row(pa_table) 311 if self.decoded: --> 312 row = self.python_features_decoder.decode_row(row) 313 return row File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:221, in PythonFeaturesDecoder.decode_row(self, row) 220 def decode_row(self, row: dict) -> dict: --> 221 return self.features.decode_example(row) if self.features else row File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1647, in Features.decode_example(self, example, token_per_repo_id) 1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1635 """Decode example with custom feature decoding. 1636 1637 Args: (...) 1644 :obj:`dict[str, Any]` 1645 """ -> 1647 return { 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1649 if self._column_requires_decoding[column_name] 1650 else value 1651 for column_name, (feature, value) in zip_dict( 1652 {key: value for key, value in self.items() if key in example}, example 1653 ) 1654 } File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1648, in <dictcomp>(.0) 1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1635 """Decode example with custom feature decoding. 1636 1637 Args: (...) 1644 :obj:`dict[str, Any]` 1645 """ 1647 return { -> 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1649 if self._column_requires_decoding[column_name] 1650 else value 1651 for column_name, (feature, value) in zip_dict( 1652 {key: value for key, value in self.items() if key in example}, example 1653 ) 1654 } File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1260, in decode_nested_example(schema, obj, token_per_repo_id) 1257 # Object with special decoding: 1258 elif isinstance(schema, (Audio, Image)): 1259 # we pass the token to read and decode files from private repositories in streaming mode -> 1260 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None 1261 return obj File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:156, in Audio.decode_example(self, value, token_per_repo_id) 154 array, sampling_rate = self._decode_non_mp3_file_like(file) 155 else: --> 156 array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id) 157 return {"path": path, "array": array, "sampling_rate": sampling_rate} File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:257, in Audio._decode_non_mp3_path_like(self, path, format, token_per_repo_id) 254 use_auth_token = None 256 with xopen(path, "rb", use_auth_token=use_auth_token) as f: --> 257 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono) 258 return array, sampling_rate File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/util/decorators.py:88, in deprecate_positional_args.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args, **kwargs) 86 extra_args = len(args) - len(all_args) 87 if extra_args <= 0: ---> 88 return f(*args, **kwargs) 90 # extra_args > 0 91 args_msg = [ 92 "{}={}".format(name, arg) 93 for name, arg in zip(kwonly_args[:extra_args], args[-extra_args:]) 94 ] File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:164, in load(path, sr, mono, offset, duration, dtype, res_type) 161 else: 162 # Otherwise try soundfile first, and then fall back if necessary 163 try: --> 164 y, sr_native = __soundfile_load(path, offset, duration, dtype) 166 except RuntimeError as exc: 167 # If soundfile failed, try audioread instead 168 if isinstance(path, (str, pathlib.PurePath)): File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:195, in __soundfile_load(path, offset, duration, dtype) 192 context = path 193 else: 194 # Otherwise, create the soundfile object --> 195 context = sf.SoundFile(path) 197 with context as sf_desc: 198 sr_native = sf_desc.samplerate File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd) 626 self._mode = mode 627 self._info = _create_info_struct(file, mode, samplerate, channels, 628 format, subtype, endian) --> 629 self._file = self._open(file, mode_int, closefd) 630 if set(mode).issuperset('r+') and self.seekable(): 631 # Move write position to 0 (like in Python file objects) 632 self.seek(0) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1179, in SoundFile._open(self, file, mode_int, closefd) 1177 file_ptr = _snd.sf_open_fd(file, mode_int, self._info, closefd) 1178 elif _has_virtual_io_attrs(file, mode_int): -> 1179 file_ptr = _snd.sf_open_virtual(self._init_virtual_io(file), 1180 mode_int, self._info, _ffi.NULL) 1181 else: 1182 raise TypeError("Invalid file: {0!r}".format(self.name)) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1197, in SoundFile._init_virtual_io(self, file) 1194 def _init_virtual_io(self, file): 1195 """Initialize callback functions for sf_open_virtual().""" 1196 @_ffi.callback("sf_vio_get_filelen") -> 1197 def vio_get_filelen(user_data): 1198 curr = file.tell() 1199 file.seek(0, SEEK_END) MemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this. For more information, see https://cffi.readthedocs.io/en/latest/using.html#callbacks ``` ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4965/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4965/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4964
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4964/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4964/comments
https://api.github.com/repos/huggingface/datasets/issues/4964/events
https://github.com/huggingface/datasets/issues/4964
1,368,617,322
I_kwDODunzps5Rk3Fq
4,964
Column of arrays (2D+) are using unreasonably high memory
{ "avatar_url": "https://avatars.githubusercontent.com/u/30353?v=4", "events_url": "https://api.github.com/users/vigsterkr/events{/privacy}", "followers_url": "https://api.github.com/users/vigsterkr/followers", "following_url": "https://api.github.com/users/vigsterkr/following{/other_user}", "gists_url": "https://api.github.com/users/vigsterkr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vigsterkr", "id": 30353, "login": "vigsterkr", "node_id": "MDQ6VXNlcjMwMzUz", "organizations_url": "https://api.github.com/users/vigsterkr/orgs", "received_events_url": "https://api.github.com/users/vigsterkr/received_events", "repos_url": "https://api.github.com/users/vigsterkr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vigsterkr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vigsterkr/subscriptions", "type": "User", "url": "https://api.github.com/users/vigsterkr" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
10
"2022-09-10T13:07:22Z"
"2022-09-22T18:29:22Z"
null
CONTRIBUTOR
null
null
null
## Describe the bug When trying to store `Array2D, Array3D, etc` as column values in a dataset, accessing that column (or creating depending on how you create it, see code below) will cause more than 10 fold of memory usage. ## Steps to reproduce the bug ```python from datasets import Dataset, Features, Array2D, Array3D import numpy as np column_name = "a" array_shape = (64, 64, 3) data = np.random.random((10000,) + array_shape) dataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype="float64")})) ``` the code above will use about 10Gb of RAM while constructing the `dataset` object. The code below will use roughly the same amount of memory (and time) when trying to actually access the data itself of that column. ```python from datasets import Dataset import numpy as np column_name = "a" array_shape = (64, 64, 3) data = np.random.random((10000,) + array_shape) dataset = Dataset.from_dict({column_name: data}) dataset[column_name] ``` ## Expected results Some memory overhead, but not like as it is now and certainly not an overhead of such runtime that is currently happening. ## Actual results Enormous memory- and runtime overhead. ## Environment info - `datasets` version: 2.3.2 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4964/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4964/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4963
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4963/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4963/comments
https://api.github.com/repos/huggingface/datasets/issues/4963/events
https://github.com/huggingface/datasets/issues/4963
1,368,201,188
I_kwDODunzps5RjRfk
4,963
Dataset without script does not support regular JSON data file
{ "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/julien-c", "id": 326577, "login": "julien-c", "node_id": "MDQ6VXNlcjMyNjU3Nw==", "organizations_url": "https://api.github.com/users/julien-c/orgs", "received_events_url": "https://api.github.com/users/julien-c/received_events", "repos_url": "https://api.github.com/users/julien-c/repos", "site_admin": false, "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "type": "User", "url": "https://api.github.com/users/julien-c" }
[]
closed
false
null
[]
null
1
"2022-09-09T18:45:33Z"
"2022-09-20T15:40:07Z"
"2022-09-20T15:40:07Z"
MEMBER
null
null
null
### Link https://huggingface.co/datasets/julien-c/label-studio-my-dogs ### Description <img width="1115" alt="image" src="https://user-images.githubusercontent.com/326577/189422048-7e9c390f-bea7-4521-a232-43f049ccbd1f.png"> ### Owner Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4963/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4963/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4962
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4962/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4962/comments
https://api.github.com/repos/huggingface/datasets/issues/4962/events
https://github.com/huggingface/datasets/pull/4962
1,368,155,365
PR_kwDODunzps4-sh-o
4,962
Update setup.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4", "events_url": "https://api.github.com/users/DCNemesis/events{/privacy}", "followers_url": "https://api.github.com/users/DCNemesis/followers", "following_url": "https://api.github.com/users/DCNemesis/following{/other_user}", "gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DCNemesis", "id": 3616964, "login": "DCNemesis", "node_id": "MDQ6VXNlcjM2MTY5NjQ=", "organizations_url": "https://api.github.com/users/DCNemesis/orgs", "received_events_url": "https://api.github.com/users/DCNemesis/received_events", "repos_url": "https://api.github.com/users/DCNemesis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions", "type": "User", "url": "https://api.github.com/users/DCNemesis" }
[]
closed
false
null
[]
null
2
"2022-09-09T17:57:56Z"
"2022-09-12T14:33:04Z"
"2022-09-12T14:33:04Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4962.diff", "html_url": "https://github.com/huggingface/datasets/pull/4962", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4962.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4962" }
exclude broken version of fsspec. See the [related issue](https://github.com/huggingface/datasets/issues/4961)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4962/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4962/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4961/comments
https://api.github.com/repos/huggingface/datasets/issues/4961/events
https://github.com/huggingface/datasets/issues/4961
1,368,124,033
I_kwDODunzps5Ri-qB
4,961
fsspec 2022.8.2 breaks xopen in streaming mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4", "events_url": "https://api.github.com/users/DCNemesis/events{/privacy}", "followers_url": "https://api.github.com/users/DCNemesis/followers", "following_url": "https://api.github.com/users/DCNemesis/following{/other_user}", "gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DCNemesis", "id": 3616964, "login": "DCNemesis", "node_id": "MDQ6VXNlcjM2MTY5NjQ=", "organizations_url": "https://api.github.com/users/DCNemesis/orgs", "received_events_url": "https://api.github.com/users/DCNemesis/received_events", "repos_url": "https://api.github.com/users/DCNemesis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions", "type": "User", "url": "https://api.github.com/users/DCNemesis" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
6
"2022-09-09T17:26:55Z"
"2022-09-12T17:45:50Z"
"2022-09-12T14:32:05Z"
NONE
null
null
null
## Describe the bug When fsspec 2022.8.2 is installed in your environment, xopen will prematurely close files, making streaming mode inoperable. ## Steps to reproduce the bug ```python import datasets data = datasets.load_dataset('MLCommons/ml_spoken_words', 'id_wav', split='train', streaming=True) ``` ## Expected results Dataset should load as iterator. ## Actual results ``` [/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1737 # Return iterable dataset in case of streaming 1738 if streaming: -> 1739 return builder_instance.as_streaming_dataset(split=split) 1740 1741 # Some datasets are already processed on the HF google storage [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path) 1023 ) 1024 self._check_manual_download(dl_manager) -> 1025 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)} 1026 # By default, return all splits 1027 if split is None: [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _split_generators(self, dl_manager) 182 name=datasets.Split.TRAIN, 183 gen_kwargs={ --> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages], 185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in 186 self.config.languages] if not dl_manager.is_streaming else None, [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in <listcomp>(.0) 182 name=datasets.Split.TRAIN, 183 gen_kwargs={ --> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages], 185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in 186 self.config.languages] if not dl_manager.is_streaming else None, [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives(dl_manager, lang, format, split) 267 # for streaming case 268 def _download_audio_archives(dl_manager, lang, format, split): --> 269 archives_paths = _download_audio_archives_paths(dl_manager, lang, format, split) 270 return [dl_manager.iter_archive(archive_path) for archive_path in archives_paths] [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives_paths(dl_manager, lang, format, split) 251 n_files_path = dl_manager.download(n_files_url) 252 --> 253 with open(n_files_path, "r", encoding="utf-8") as file: 254 n_files = int(file.read().strip()) # the file contains a number of archives 255 ValueError: I/O operation on closed file. ``` ## Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4961/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4961/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4960
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4960/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4960/comments
https://api.github.com/repos/huggingface/datasets/issues/4960/events
https://github.com/huggingface/datasets/issues/4960
1,368,035,159
I_kwDODunzps5Rio9X
4,960
BioASQ AttributeError: 'BuilderConfig' object has no attribute 'schema'
{ "avatar_url": "https://avatars.githubusercontent.com/u/8426290?v=4", "events_url": "https://api.github.com/users/DSLituiev/events{/privacy}", "followers_url": "https://api.github.com/users/DSLituiev/followers", "following_url": "https://api.github.com/users/DSLituiev/following{/other_user}", "gists_url": "https://api.github.com/users/DSLituiev/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DSLituiev", "id": 8426290, "login": "DSLituiev", "node_id": "MDQ6VXNlcjg0MjYyOTA=", "organizations_url": "https://api.github.com/users/DSLituiev/orgs", "received_events_url": "https://api.github.com/users/DSLituiev/received_events", "repos_url": "https://api.github.com/users/DSLituiev/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DSLituiev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DSLituiev/subscriptions", "type": "User", "url": "https://api.github.com/users/DSLituiev" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
open
false
null
[]
null
2
"2022-09-09T16:06:43Z"
"2022-09-13T08:51:03Z"
null
NONE
null
null
null
## Describe the bug I am trying to load a dataset from drive and running into an error. ## Steps to reproduce the bug ```python data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b" bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir) ``` ## Actual results `AttributeError: 'BuilderConfig' object has no attribute 'schema'` <details> ``` Using custom data configuration default-a1ca3e05be5abf2f --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Input In [8], in <cell line: 2>() 1 data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b" ----> 2 bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir) File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1723, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1720 ignore_verifications = ignore_verifications or save_infos 1722 # Create a dataset builder -> 1723 builder_instance = load_dataset_builder( 1724 path=path, 1725 name=name, 1726 data_dir=data_dir, 1727 data_files=data_files, 1728 cache_dir=cache_dir, 1729 features=features, 1730 download_config=download_config, 1731 download_mode=download_mode, 1732 revision=revision, 1733 use_auth_token=use_auth_token, 1734 **config_kwargs, 1735 ) 1737 # Return iterable dataset in case of streaming 1738 if streaming: File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1526, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1523 raise ValueError(error_msg) 1525 # Instantiate the dataset builder -> 1526 builder_instance: DatasetBuilder = builder_cls( 1527 cache_dir=cache_dir, 1528 config_name=config_name, 1529 data_dir=data_dir, 1530 data_files=data_files, 1531 hash=hash, 1532 features=features, 1533 use_auth_token=use_auth_token, 1534 **builder_kwargs, 1535 **config_kwargs, 1536 ) 1538 return builder_instance File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:1154, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs) 1153 def __init__(self, *args, writer_batch_size=None, **kwargs): -> 1154 super().__init__(*args, **kwargs) 1155 # Batch size used by the ArrowWriter 1156 # It defines the number of samples that are kept in memory before writing them 1157 # and also the length of the arrow chunks 1158 # None means that the ArrowWriter will use its default value 1159 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:307, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs) 305 if info is None: 306 info = self.get_exported_dataset_info() --> 307 info.update(self._info()) 308 info.builder_name = self.name 309 info.config_name = self.config.name File ~/.cache/huggingface/modules/datasets_modules/datasets/aps--bioasq_task_b/3d54b1213f7e8001eef755af92877f9efa44161ee83c2a70d5d649defa95759e/bioasq_task_b.py:477, in BioasqTaskBDataset._info(self) 474 def _info(self): 475 476 # BioASQ Task B source schema --> 477 if self.config.schema == "source": 478 features = datasets.Features( 479 { 480 "id": datasets.Value("string"), (...) 504 } 505 ) 506 # simplified schema for QA tasks AttributeError: 'BuilderConfig' object has no attribute 'schema' ``` </details> ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.10.4 - PyArrow version: 9.0.0 - Pandas version: 1.4.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4960/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4960/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4959
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4959/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4959/comments
https://api.github.com/repos/huggingface/datasets/issues/4959/events
https://github.com/huggingface/datasets/pull/4959
1,367,924,429
PR_kwDODunzps4-rx6l
4,959
Fix data URLs of compguesswhat dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-09-09T14:36:10Z"
"2022-09-09T16:01:34Z"
"2022-09-09T15:59:04Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4959.diff", "html_url": "https://github.com/huggingface/datasets/pull/4959", "merged_at": "2022-09-09T15:59:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/4959.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4959" }
After we informed the `compguesswhat` dataset authors about an error with their data URLs, they have updated them: - https://github.com/CompGuessWhat/compguesswhat.github.io/issues/1 This PR updates their data URLs in our loading script. Related to: - #3191
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4959/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4959/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4958
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4958/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4958/comments
https://api.github.com/repos/huggingface/datasets/issues/4958/events
https://github.com/huggingface/datasets/issues/4958
1,367,695,376
I_kwDODunzps5RhWAQ
4,958
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.4.0/datasets/jsonl/jsonl.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/66322047?v=4", "events_url": "https://api.github.com/users/hasakikiki/events{/privacy}", "followers_url": "https://api.github.com/users/hasakikiki/followers", "following_url": "https://api.github.com/users/hasakikiki/following{/other_user}", "gists_url": "https://api.github.com/users/hasakikiki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hasakikiki", "id": 66322047, "login": "hasakikiki", "node_id": "MDQ6VXNlcjY2MzIyMDQ3", "organizations_url": "https://api.github.com/users/hasakikiki/orgs", "received_events_url": "https://api.github.com/users/hasakikiki/received_events", "repos_url": "https://api.github.com/users/hasakikiki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hasakikiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hasakikiki/subscriptions", "type": "User", "url": "https://api.github.com/users/hasakikiki" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
1
"2022-09-09T11:29:55Z"
"2022-09-09T11:38:44Z"
"2022-09-09T11:38:44Z"
NONE
null
null
null
Hi, When I use load_dataset from local jsonl files, below error happens, and I type the link into the browser prompting me `404: Not Found`. I download the other `.py` files using the same method and it works. It seems that the server is missing the appropriate file, or it is a problem with the code version. ``` ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x2b08342004c0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))"))) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4958/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4958/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4957
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4957/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4957/comments
https://api.github.com/repos/huggingface/datasets/issues/4957/events
https://github.com/huggingface/datasets/pull/4957
1,366,532,849
PR_kwDODunzps4-nGIk
4,957
Add `Dataset.from_generator`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
3
"2022-09-08T15:08:25Z"
"2022-09-16T14:46:35Z"
"2022-09-16T14:44:18Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4957.diff", "html_url": "https://github.com/huggingface/datasets/pull/4957", "merged_at": "2022-09-16T14:44:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/4957.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4957" }
Add `Dataset.from_generator` to the API to allow creating datasets from data larger than RAM. The implementation relies on a packaged module not exposed in `load_dataset` to tie this method with `datasets`' caching mechanism. Closes https://github.com/huggingface/datasets/issues/4417
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4957/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4957/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4956
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4956/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4956/comments
https://api.github.com/repos/huggingface/datasets/issues/4956/events
https://github.com/huggingface/datasets/pull/4956
1,366,475,160
PR_kwDODunzps4-m5NU
4,956
Fix TF tests for 2.10
{ "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Rocketknight1", "id": 12866554, "login": "Rocketknight1", "node_id": "MDQ6VXNlcjEyODY2NTU0", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "type": "User", "url": "https://api.github.com/users/Rocketknight1" }
[]
closed
false
null
[]
null
1
"2022-09-08T14:39:10Z"
"2022-09-08T15:16:51Z"
"2022-09-08T15:14:44Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4956.diff", "html_url": "https://github.com/huggingface/datasets/pull/4956", "merged_at": "2022-09-08T15:14:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/4956.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4956" }
Fixes #4953
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4956/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4956/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4955
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4955/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4955/comments
https://api.github.com/repos/huggingface/datasets/issues/4955/events
https://github.com/huggingface/datasets/issues/4955
1,366,382,314
I_kwDODunzps5RcVbq
4,955
Raise a more precise error when the URL is unreachable in streaming mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
0
"2022-09-08T13:52:37Z"
"2022-09-08T13:53:36Z"
null
CONTRIBUTOR
null
null
null
See for example: - https://github.com/huggingface/datasets/issues/3191 - https://github.com/huggingface/datasets/issues/3186 It would help provide clearer information on the Hub and help the dataset maintainer solve the issue by themselves quicker. Currently: - https://huggingface.co/datasets/compguesswhat <img width="1029" alt="Capture d’écran 2022-09-08 à 15 51 37" src="https://user-images.githubusercontent.com/1676121/189139946-6deffb91-f21b-4281-8825-a98026c69740.png"> - https://huggingface.co/datasets/nli_tr <img width="1032" alt="Capture d’écran 2022-09-08 à 15 51 44" src="https://user-images.githubusercontent.com/1676121/189139963-d26490ed-ad23-48ea-9cfc-1ab9c4d08d0c.png"> cc @albertvillanova
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4955/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4955/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4954/comments
https://api.github.com/repos/huggingface/datasets/issues/4954/events
https://github.com/huggingface/datasets/pull/4954
1,366,369,682
PR_kwDODunzps4-mhl5
4,954
Pin TensorFlow temporarily
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-09-08T13:46:15Z"
"2022-09-08T14:12:33Z"
"2022-09-08T14:10:03Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4954.diff", "html_url": "https://github.com/huggingface/datasets/pull/4954", "merged_at": "2022-09-08T14:10:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/4954.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4954" }
Temporarily fix TensorFlow until a permanent solution is found. Related to: - #4953
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4954/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4954/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4953
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4953/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4953/comments
https://api.github.com/repos/huggingface/datasets/issues/4953/events
https://github.com/huggingface/datasets/issues/4953
1,366,356,514
I_kwDODunzps5RcPIi
4,953
CI test of TensorFlow is failing
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
0
"2022-09-08T13:39:29Z"
"2022-09-08T15:14:45Z"
"2022-09-08T15:14:45Z"
MEMBER
null
null
null
## Describe the bug The following CI test fails: https://github.com/huggingface/datasets/runs/8246722693?check_suite_focus=true ``` FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - AssertionError: ``` Details: ``` _________________________ TempSeedTest.test_tensorflow _________________________ [gw0] linux -- Python 3.7.13 /opt/hostedtoolcache/Python/3.7.13/x64/bin/python self = <tests.test_py_utils.TempSeedTest testMethod=test_tensorflow> @require_tf def test_tensorflow(self): import tensorflow as tf from tensorflow.keras import layers def gen_random_output(): model = layers.Dense(2) x = tf.random.uniform((1, 3)) return model(x).numpy() with temp_seed(42, set_tensorflow=True): out1 = gen_random_output() with temp_seed(42, set_tensorflow=True): out2 = gen_random_output() out3 = gen_random_output() > np.testing.assert_equal(out1, out2) E AssertionError: E Arrays are not equal E E Mismatched elements: 2 / 2 (100%) E Max absolute difference: 0.84619296 E Max relative difference: 16.083529 E x: array([[-0.793581, 0.333286]], dtype=float32) E y: array([[0.052612, 0.539708]], dtype=float32) tests/test_py_utils.py:149: AssertionError ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4953/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4953/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4952
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4952/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4952/comments
https://api.github.com/repos/huggingface/datasets/issues/4952/events
https://github.com/huggingface/datasets/pull/4952
1,366,354,604
PR_kwDODunzps4-meM0
4,952
Add test-datasets CI job
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
"2022-09-08T13:38:30Z"
"2023-09-24T10:05:57Z"
"2022-09-16T13:25:48Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4952.diff", "html_url": "https://github.com/huggingface/datasets/pull/4952", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4952.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4952" }
To avoid having too many conflicts in the datasets and metrics dependencies I split the CI into test and test-catalog test does the test of the core of the `datasets` lib, while test-catalog tests the datasets scripts and metrics scripts This also makes `pip install -e .[dev]` much smaller for developers WDYT @albertvillanova ?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4952/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4952/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4951/comments
https://api.github.com/repos/huggingface/datasets/issues/4951/events
https://github.com/huggingface/datasets/pull/4951
1,365,954,814
PR_kwDODunzps4-lDqd
4,951
Fix license information in qasc dataset card
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-09-08T10:04:39Z"
"2022-09-08T14:54:47Z"
"2022-09-08T14:52:05Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4951.diff", "html_url": "https://github.com/huggingface/datasets/pull/4951", "merged_at": "2022-09-08T14:52:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/4951.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4951" }
This PR adds the license information to `qasc` dataset, once reported via GitHub by Tushar Khot, the dataset is licensed under CC BY 4.0: - https://github.com/allenai/qasc/issues/5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4951/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4951/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4950/comments
https://api.github.com/repos/huggingface/datasets/issues/4950/events
https://github.com/huggingface/datasets/pull/4950
1,365,458,633
PR_kwDODunzps4-jWZ1
4,950
Update Enwik8 broken link and information
{ "avatar_url": "https://avatars.githubusercontent.com/u/54819091?v=4", "events_url": "https://api.github.com/users/mtanghu/events{/privacy}", "followers_url": "https://api.github.com/users/mtanghu/followers", "following_url": "https://api.github.com/users/mtanghu/following{/other_user}", "gists_url": "https://api.github.com/users/mtanghu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mtanghu", "id": 54819091, "login": "mtanghu", "node_id": "MDQ6VXNlcjU0ODE5MDkx", "organizations_url": "https://api.github.com/users/mtanghu/orgs", "received_events_url": "https://api.github.com/users/mtanghu/received_events", "repos_url": "https://api.github.com/users/mtanghu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mtanghu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mtanghu/subscriptions", "type": "User", "url": "https://api.github.com/users/mtanghu" }
[]
closed
false
null
[]
null
1
"2022-09-08T03:15:00Z"
"2022-09-24T22:14:35Z"
"2022-09-08T14:51:00Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4950.diff", "html_url": "https://github.com/huggingface/datasets/pull/4950", "merged_at": "2022-09-08T14:51:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/4950.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4950" }
The current enwik8 dataset link give a 502 bad gateway error which can be view on https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview, it will show the error). This corrects the links, and json metadata as well as adds a little bit more information about enwik8.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4950/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4950/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4949
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4949/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4949/comments
https://api.github.com/repos/huggingface/datasets/issues/4949/events
https://github.com/huggingface/datasets/pull/4949
1,365,251,916
PR_kwDODunzps4-iqzI
4,949
Update enwik8 fixing the broken link
{ "avatar_url": "https://avatars.githubusercontent.com/u/54819091?v=4", "events_url": "https://api.github.com/users/mtanghu/events{/privacy}", "followers_url": "https://api.github.com/users/mtanghu/followers", "following_url": "https://api.github.com/users/mtanghu/following{/other_user}", "gists_url": "https://api.github.com/users/mtanghu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mtanghu", "id": 54819091, "login": "mtanghu", "node_id": "MDQ6VXNlcjU0ODE5MDkx", "organizations_url": "https://api.github.com/users/mtanghu/orgs", "received_events_url": "https://api.github.com/users/mtanghu/received_events", "repos_url": "https://api.github.com/users/mtanghu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mtanghu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mtanghu/subscriptions", "type": "User", "url": "https://api.github.com/users/mtanghu" }
[]
closed
false
null
[]
null
1
"2022-09-07T22:17:14Z"
"2022-09-08T03:14:04Z"
"2022-09-08T03:14:04Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4949.diff", "html_url": "https://github.com/huggingface/datasets/pull/4949", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4949.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4949" }
The current enwik8 dataset link give a 502 bad gateway error which can be view on https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview, it will show the error). This corrects the links, and json metadata as well as adds a little bit more information about enwik8.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4949/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4949/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4948
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4948/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4948/comments
https://api.github.com/repos/huggingface/datasets/issues/4948/events
https://github.com/huggingface/datasets/pull/4948
1,364,973,778
PR_kwDODunzps4-hwsl
4,948
Fix minor typo in error message for missing imports
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
1
"2022-09-07T17:20:51Z"
"2022-09-08T14:59:31Z"
"2022-09-08T14:57:15Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4948.diff", "html_url": "https://github.com/huggingface/datasets/pull/4948", "merged_at": "2022-09-08T14:57:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/4948.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4948" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4948/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4948/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4947/comments
https://api.github.com/repos/huggingface/datasets/issues/4947/events
https://github.com/huggingface/datasets/pull/4947
1,364,967,957
PR_kwDODunzps4-hvbq
4,947
Try to fix the Windows CI after TF update 2.10
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
1
"2022-09-07T17:14:49Z"
"2023-09-24T10:05:38Z"
"2022-09-08T09:13:10Z"
MEMBER
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/4947.diff", "html_url": "https://github.com/huggingface/datasets/pull/4947", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4947.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4947" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4947/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4947/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4946
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4946/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4946/comments
https://api.github.com/repos/huggingface/datasets/issues/4946/events
https://github.com/huggingface/datasets/pull/4946
1,364,692,069
PR_kwDODunzps4-g0Hz
4,946
Introduce regex check when pushing as well
{ "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LysandreJik", "id": 30755778, "login": "LysandreJik", "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "repos_url": "https://api.github.com/users/LysandreJik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "type": "User", "url": "https://api.github.com/users/LysandreJik" }
[]
closed
false
null
[]
null
2
"2022-09-07T13:45:58Z"
"2022-09-13T10:19:01Z"
"2022-09-13T10:16:34Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4946.diff", "html_url": "https://github.com/huggingface/datasets/pull/4946", "merged_at": "2022-09-13T10:16:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/4946.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4946" }
Closes https://github.com/huggingface/datasets/issues/4945 by adding a regex check when pushing to hub. Let me know if this is helpful and if it's the fix you would have in mind for the issue and I'm happy to contribute tests.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4946/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4946/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4945
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4945/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4945/comments
https://api.github.com/repos/huggingface/datasets/issues/4945/events
https://github.com/huggingface/datasets/issues/4945
1,364,691,096
I_kwDODunzps5RV4iY
4,945
Push to hub can push splits that do not respect the regex
{ "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LysandreJik", "id": 30755778, "login": "LysandreJik", "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "repos_url": "https://api.github.com/users/LysandreJik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "type": "User", "url": "https://api.github.com/users/LysandreJik" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
0
"2022-09-07T13:45:17Z"
"2022-09-13T10:16:35Z"
"2022-09-13T10:16:35Z"
MEMBER
null
null
null
## Describe the bug The `push_to_hub` method can push splits that do not respect the regex check that is used for downloads. Therefore, splits may be pushed but never re-used, which can be painful if the split was done after runtime preprocessing. ## Steps to reproduce the bug ```python >>> from datasets import Dataset, DatasetDict, load_dataset >>> d = Dataset.from_dict({'x': [1,2,3], 'y': [1,2,3]}) >>> di = DatasetDict() >>> di['identifier-with-column'] = d >>> di.push_to_hub('open-source-metrics/test') Pushing split identifier-with-column to the Hub. Pushing dataset shards to the dataset hub: 100%|██████████| 1/1 [00:04<00:00, 4.40s/it] ``` Loading it afterwards: ```python >>> load_dataset('open-source-metrics/test') Downloading: 100%|██████████| 610/610 [00:00<00:00, 432kB/s] Using custom data configuration open-source-metrics--test-28b63ec7cde80488 Downloading and preparing dataset None/None (download: 950 bytes, generated: 48 bytes, post-processed: Unknown size, total: 998 bytes) to /home/lysandre/.cache/huggingface/datasets/open-source-metrics___parquet/open-source-metrics--test-28b63ec7cde80488/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec... Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 100%|██████████| 950/950 [00:00<00:00, 1.01MB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.48s/it] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 2291.97it/s] Traceback (most recent call last): File "/home/lysandre/.pyenv/versions/3.10.6/lib/python3.10/code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 1, in <module> File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/load.py", line 1746, in load_dataset builder_instance.download_and_prepare( File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 771, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 48, in _split_generators splits.append(datasets.SplitGenerator(name=split_name, gen_kwargs={"files": files})) File "<string>", line 5, in __init__ File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 599, in __post_init__ NamedSplit(self.name) # check that it's a valid split name File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 346, in __init__ raise ValueError(f"Split name should match '{_split_re}' but got '{split_name}'.") ValueError: Split name should match '^\w+(\.\w+)*$' but got 'identifier-with-column'. ``` ## Expected results I would expect `push_to_hub` to stop me in my tracks if trying to upload a split that will not be working afterwards. ## Actual results See above ## Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.15.64-1-lts-x86_64-with-glibc2.36 - Python version: 3.10.6 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4945/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4945/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4944
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4944/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4944/comments
https://api.github.com/repos/huggingface/datasets/issues/4944/events
https://github.com/huggingface/datasets/issues/4944
1,364,313,569
I_kwDODunzps5RUcXh
4,944
larger dataset, larger GPU memory in the training phase? Is that correct?
{ "avatar_url": "https://avatars.githubusercontent.com/u/38886373?v=4", "events_url": "https://api.github.com/users/debby1103/events{/privacy}", "followers_url": "https://api.github.com/users/debby1103/followers", "following_url": "https://api.github.com/users/debby1103/following{/other_user}", "gists_url": "https://api.github.com/users/debby1103/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/debby1103", "id": 38886373, "login": "debby1103", "node_id": "MDQ6VXNlcjM4ODg2Mzcz", "organizations_url": "https://api.github.com/users/debby1103/orgs", "received_events_url": "https://api.github.com/users/debby1103/received_events", "repos_url": "https://api.github.com/users/debby1103/repos", "site_admin": false, "starred_url": "https://api.github.com/users/debby1103/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/debby1103/subscriptions", "type": "User", "url": "https://api.github.com/users/debby1103" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
2
"2022-09-07T08:46:30Z"
"2022-09-07T12:34:58Z"
"2022-09-07T12:34:58Z"
NONE
null
null
null
from datasets import set_caching_enabled set_caching_enabled(False) for ds_name in ["squad","newsqa","nqopen","narrativeqa"]: train_ds = load_from_disk("../../../dall/downstream/processedproqa/{}-train.hf".format(ds_name)) break train_ds = concatenate_datasets([train_ds,train_ds,train_ds,train_ds]) #operation 1 trainer = QuestionAnsweringTrainer( #huggingface trainer model=model, args=training_args, train_dataset=train_ds, eval_dataset= None, eval_examples=None, answer_column_name=answer_column, dataset_name="squad", tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics if training_args.predict_with_generate else None, ) with operation 1, the GPU memory increases from 16G to 23G
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4944/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4944/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4943
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4943/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4943/comments
https://api.github.com/repos/huggingface/datasets/issues/4943/events
https://github.com/huggingface/datasets/pull/4943
1,363,967,650
PR_kwDODunzps4-eZd_
4,943
Add splits to MBPP dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/2788526?v=4", "events_url": "https://api.github.com/users/cwarny/events{/privacy}", "followers_url": "https://api.github.com/users/cwarny/followers", "following_url": "https://api.github.com/users/cwarny/following{/other_user}", "gists_url": "https://api.github.com/users/cwarny/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cwarny", "id": 2788526, "login": "cwarny", "node_id": "MDQ6VXNlcjI3ODg1MjY=", "organizations_url": "https://api.github.com/users/cwarny/orgs", "received_events_url": "https://api.github.com/users/cwarny/received_events", "repos_url": "https://api.github.com/users/cwarny/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cwarny/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cwarny/subscriptions", "type": "User", "url": "https://api.github.com/users/cwarny" }
[]
closed
false
null
[]
null
4
"2022-09-07T01:18:31Z"
"2022-09-13T12:29:19Z"
"2022-09-13T12:27:21Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4943.diff", "html_url": "https://github.com/huggingface/datasets/pull/4943", "merged_at": "2022-09-13T12:27:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/4943.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4943" }
This PR addresses https://github.com/huggingface/datasets/issues/4795
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4943/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4943/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4942/comments
https://api.github.com/repos/huggingface/datasets/issues/4942/events
https://github.com/huggingface/datasets/issues/4942
1,363,869,421
I_kwDODunzps5RSv7t
4,942
Trec Dataset has incorrect labels
{ "avatar_url": "https://avatars.githubusercontent.com/u/6539145?v=4", "events_url": "https://api.github.com/users/wmpauli/events{/privacy}", "followers_url": "https://api.github.com/users/wmpauli/followers", "following_url": "https://api.github.com/users/wmpauli/following{/other_user}", "gists_url": "https://api.github.com/users/wmpauli/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wmpauli", "id": 6539145, "login": "wmpauli", "node_id": "MDQ6VXNlcjY1MzkxNDU=", "organizations_url": "https://api.github.com/users/wmpauli/orgs", "received_events_url": "https://api.github.com/users/wmpauli/received_events", "repos_url": "https://api.github.com/users/wmpauli/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wmpauli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wmpauli/subscriptions", "type": "User", "url": "https://api.github.com/users/wmpauli" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
1
"2022-09-06T22:13:40Z"
"2022-09-08T11:12:03Z"
"2022-09-08T11:12:03Z"
NONE
null
null
null
## Describe the bug Both coarse and fine labels seem to be out of line. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = "trec" raw_datasets = load_dataset(dataset) df = pd.DataFrame(raw_datasets["test"]) df.head() ``` ## Expected results text (string) | coarse_label (class label) | fine_label (class label) -- | -- | -- How far is it from Denver to Aspen ? | 5 (NUM) | 40 (NUM:dist) What county is Modesto , California in ? | 4 (LOC) | 32 (LOC:city) Who was Galileo ? | 3 (HUM) | 31 (HUM:desc) What is an atom ? | 2 (DESC) | 24 (DESC:def) When did Hawaii become a state ? | 5 (NUM) | 39 (NUM:date) ## Actual results index | label-coarse |label-fine | text -- |-- | -- | -- 0 | 4 | 40 | How far is it from Denver to Aspen ? 1 | 5 | 21 | What county is Modesto , California in ? 2 | 3 | 12 | Who was Galileo ? 3 | 0 | 7 | What is an atom ? 4 | 4 | 8 | When did Hawaii become a state ? ## Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.4.0-1086-azure-x86_64-with-glibc2.27 - Python version: 3.9.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4942/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4942/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4941/comments
https://api.github.com/repos/huggingface/datasets/issues/4941/events
https://github.com/huggingface/datasets/pull/4941
1,363,622,861
PR_kwDODunzps4-dQ9F
4,941
Add Papers with Code ID to scifact dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-09-06T17:46:37Z"
"2022-09-06T18:28:17Z"
"2022-09-06T18:26:01Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4941.diff", "html_url": "https://github.com/huggingface/datasets/pull/4941", "merged_at": "2022-09-06T18:26:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/4941.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4941" }
This PR: - adds Papers with Code ID - forces sync between GitHub and Hub, which previously failed due to Hub validation error of the license tag: https://github.com/huggingface/datasets/runs/8200223631?check_suite_focus=true
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4941/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4941/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4940
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4940/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4940/comments
https://api.github.com/repos/huggingface/datasets/issues/4940/events
https://github.com/huggingface/datasets/pull/4940
1,363,513,058
PR_kwDODunzps4-c6WY
4,940
Fix multilinguality tag and missing sections in xquad_r dataset card
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-09-06T16:05:35Z"
"2022-09-12T10:11:07Z"
"2022-09-12T10:08:48Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4940.diff", "html_url": "https://github.com/huggingface/datasets/pull/4940", "merged_at": "2022-09-12T10:08:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/4940.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4940" }
This PR fixes issue reported on the Hub: - Label as multilingual: https://huggingface.co/datasets/xquad_r/discussions/1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4940/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4940/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4939
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4939/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4939/comments
https://api.github.com/repos/huggingface/datasets/issues/4939/events
https://github.com/huggingface/datasets/pull/4939
1,363,468,679
PR_kwDODunzps4-cw4A
4,939
Fix NonMatchingChecksumError in adv_glue dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-09-06T15:31:16Z"
"2022-09-06T17:42:10Z"
"2022-09-06T17:39:16Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4939.diff", "html_url": "https://github.com/huggingface/datasets/pull/4939", "merged_at": "2022-09-06T17:39:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/4939.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4939" }
Fix issue reported on the Hub: https://huggingface.co/datasets/adv_glue/discussions/1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4939/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4939/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4938
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4938/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4938/comments
https://api.github.com/repos/huggingface/datasets/issues/4938/events
https://github.com/huggingface/datasets/pull/4938
1,363,429,228
PR_kwDODunzps4-coaB
4,938
Remove main branch rename notice
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
1
"2022-09-06T15:03:05Z"
"2022-09-06T16:46:11Z"
"2022-09-06T16:43:53Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4938.diff", "html_url": "https://github.com/huggingface/datasets/pull/4938", "merged_at": "2022-09-06T16:43:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/4938.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4938" }
We added a notice in README.md to show that we renamed the master branch to main, but we can remove it now (it's been 2 months) I also unpinned the github issue about the branch renaming
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4938/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4938/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4937/comments
https://api.github.com/repos/huggingface/datasets/issues/4937/events
https://github.com/huggingface/datasets/pull/4937
1,363,426,946
PR_kwDODunzps4-cn6W
4,937
Remove deprecated identical_ok
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
1
"2022-09-06T15:01:24Z"
"2022-09-06T22:24:09Z"
"2022-09-06T22:21:57Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4937.diff", "html_url": "https://github.com/huggingface/datasets/pull/4937", "merged_at": "2022-09-06T22:21:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/4937.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4937" }
`huggingface-hub` says that the `identical_ok` argument of `HfApi.upload_file` is now deprecated, and will be removed soon. It even has no effect at the moment when it's passed: ```python Args: ... identical_ok (`bool`, *optional*, defaults to `True`): Deprecated: will be removed in 0.11.0. Changing this value has no effect. ... ``` There was only one occurence of `identical_ok=False` but it's maybe not worth adding a check ti verify if the files were the same. cc @mariosasko
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4937/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4937/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4936
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4936/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4936/comments
https://api.github.com/repos/huggingface/datasets/issues/4936/events
https://github.com/huggingface/datasets/issues/4936
1,363,274,907
I_kwDODunzps5RQeyb
4,936
vivos (Vietnamese speech corpus) dataset not accessible
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
3
"2022-09-06T13:17:55Z"
"2022-09-21T06:06:02Z"
"2022-09-12T07:14:20Z"
CONTRIBUTOR
null
null
null
## Describe the bug VIVOS data is not accessible anymore, neither of these links work (at least from France): * https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (data) * https://ailab.hcmus.edu.vn/vivos (dataset page) Therefore `load_dataset` doesn't work. ## Steps to reproduce the bug ```python ds = load_dataset("vivos") ``` ## Expected results dataset loaded ## Actual results ``` ConnectionError: Couldn't reach https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='ailab.hcmus.edu.vn', port=443): Max retries exceeded with url: /assets/vivos.tar.gz (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f9d8a27d190>: Failed to establish a new connection: [Errno -5] No address associated with hostname'))"))) ``` Will try to contact the authors, as we wanted to use Vivos as an example in documentation on how to create scripts for audio datasets (https://github.com/huggingface/datasets/pull/4872), because it's small and straightforward and uses tar archives.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4936/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4936/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4935
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4935/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4935/comments
https://api.github.com/repos/huggingface/datasets/issues/4935/events
https://github.com/huggingface/datasets/issues/4935
1,363,226,736
I_kwDODunzps5RQTBw
4,935
Dataset Viewer issue for ubuntu_dialogs_corpus
{ "avatar_url": "https://avatars.githubusercontent.com/u/87330568?v=4", "events_url": "https://api.github.com/users/CibinQuadance/events{/privacy}", "followers_url": "https://api.github.com/users/CibinQuadance/followers", "following_url": "https://api.github.com/users/CibinQuadance/following{/other_user}", "gists_url": "https://api.github.com/users/CibinQuadance/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/CibinQuadance", "id": 87330568, "login": "CibinQuadance", "node_id": "MDQ6VXNlcjg3MzMwNTY4", "organizations_url": "https://api.github.com/users/CibinQuadance/orgs", "received_events_url": "https://api.github.com/users/CibinQuadance/received_events", "repos_url": "https://api.github.com/users/CibinQuadance/repos", "site_admin": false, "starred_url": "https://api.github.com/users/CibinQuadance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CibinQuadance/subscriptions", "type": "User", "url": "https://api.github.com/users/CibinQuadance" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
1
"2022-09-06T12:41:50Z"
"2022-09-06T12:51:25Z"
"2022-09-06T12:51:25Z"
NONE
null
null
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4935/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4935/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4934/comments
https://api.github.com/repos/huggingface/datasets/issues/4934/events
https://github.com/huggingface/datasets/issues/4934
1,363,034,253
I_kwDODunzps5RPkCN
4,934
Dataset Viewer issue for indonesian-nlp/librivox-indonesia
{ "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cahya-wirawan", "id": 7669893, "login": "cahya-wirawan", "node_id": "MDQ6VXNlcjc2Njk4OTM=", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "type": "User", "url": "https://api.github.com/users/cahya-wirawan" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
6
"2022-09-06T10:03:23Z"
"2022-09-06T12:46:40Z"
"2022-09-06T12:46:40Z"
CONTRIBUTOR
null
null
null
### Link https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia ### Description I created a new speech dataset https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia, but the dataset preview doesn't work with following error message: ``` Server error Status code: 400 Exception: TypeError Message: unsupported operand type(s) for +: 'NoneType' and 'str' ``` Please help, I am not sure what the problem here is. Thanks a lot. ### Owner Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4934/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4934/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4933/comments
https://api.github.com/repos/huggingface/datasets/issues/4933/events
https://github.com/huggingface/datasets/issues/4933
1,363,013,023
I_kwDODunzps5RPe2f
4,933
Dataset/DatasetDict.filter() cannot have `batched=True` due to `mask` (numpy array?) being non-iterable.
{ "avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4", "events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}", "followers_url": "https://api.github.com/users/tianjianjiang/followers", "following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}", "gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tianjianjiang", "id": 4812544, "login": "tianjianjiang", "node_id": "MDQ6VXNlcjQ4MTI1NDQ=", "organizations_url": "https://api.github.com/users/tianjianjiang/orgs", "received_events_url": "https://api.github.com/users/tianjianjiang/received_events", "repos_url": "https://api.github.com/users/tianjianjiang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions", "type": "User", "url": "https://api.github.com/users/tianjianjiang" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
2
"2022-09-06T09:47:48Z"
"2022-09-06T11:44:27Z"
"2022-09-06T11:44:27Z"
CONTRIBUTOR
null
null
null
## Describe the bug `Dataset/DatasetDict.filter()` cannot have `batched=True` due to `mask` (numpy array?) being non-iterable. ## Steps to reproduce the bug (In a python 3.7.12 env, I've tried 2.4.0 and 2.3.2 with both `pyarraw==9.0.0` and `pyarrow==8.0.0`.) ```python from datasets import load_dataset ds_mc4_ja = load_dataset("mc4", "ja") # This will take 6+ hours... perhaps test it with a toy dataset instead? ds_mc4_ja_2020 = ds_mc4_ja.filter( lambda example: example["timestamp"][:4] == "2020", batched=True, ) ``` ## Expected results No error ## Actual results ```python --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker result = (True, func(*args, **kwds)) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 524, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py", line 480, in wrapper out = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2779, in _map_single offset=offset, File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2347, in decorated result = f(decorated_item, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 4946, in get_indices_from_mask_function indices_array = [i for i, to_keep in zip(indices, mask) if to_keep] TypeError: zip argument #2 must support iteration """ The above exception was the direct cause of the following exception: TypeError Traceback (most recent call last) /tmp/ipykernel_51348/2345782281.py in <module> 7 batched=True, 8 # batch_size=10_000, ----> 9 num_proc=111, 10 ) 11 # ds_mc4_ja_clean_2020 = ds_mc4_ja.filter( /opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc) 878 desc=desc, 879 ) --> 880 for k, dataset in self.items() 881 } 882 ) /opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0) 878 desc=desc, 879 ) --> 880 for k, dataset in self.items() 881 } 882 ) /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 522 } 523 # apply actual function --> 524 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 525 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 526 # re-apply format to the output /opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 478 # Call actual function 479 --> 480 out = func(self, *args, **kwargs) 481 482 # Update fingerprint of in-place transforms + update in-place history of transforms /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2920 new_fingerprint=new_fingerprint, 2921 input_columns=input_columns, -> 2922 desc=desc, 2923 ) 2924 new_dataset = copy.deepcopy(self) /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2498 2499 for index, async_result in results.items(): -> 2500 transformed_shards[index] = async_result.get() 2501 2502 assert ( /opt/conda/lib/python3.7/site-packages/multiprocess/pool.py in get(self, timeout) 655 return self._value 656 else: --> 657 raise self._value 658 659 def _set(self, i, obj): TypeError: zip argument #2 must support iteration ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-debian-10.12 - Python version: 3.7.12 - PyArrow version: 9.0.0 - Pandas version: 1.3.5 (I've tried 2.4.0 and 2.3.2 with both `pyarraw==9.0.0` and `pyarrow==8.0.0`.)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4933/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4933/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4932/comments
https://api.github.com/repos/huggingface/datasets/issues/4932/events
https://github.com/huggingface/datasets/issues/4932
1,362,522,423
I_kwDODunzps5RNnE3
4,932
Dataset Viewer issue for bigscience-biomedical/biosses
{ "avatar_url": "https://avatars.githubusercontent.com/u/663051?v=4", "events_url": "https://api.github.com/users/galtay/events{/privacy}", "followers_url": "https://api.github.com/users/galtay/followers", "following_url": "https://api.github.com/users/galtay/following{/other_user}", "gists_url": "https://api.github.com/users/galtay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/galtay", "id": 663051, "login": "galtay", "node_id": "MDQ6VXNlcjY2MzA1MQ==", "organizations_url": "https://api.github.com/users/galtay/orgs", "received_events_url": "https://api.github.com/users/galtay/received_events", "repos_url": "https://api.github.com/users/galtay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/galtay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/galtay/subscriptions", "type": "User", "url": "https://api.github.com/users/galtay" }
[]
closed
false
null
[]
null
4
"2022-09-05T22:40:32Z"
"2022-09-06T14:24:56Z"
"2022-09-06T14:24:56Z"
NONE
null
null
null
### Link https://huggingface.co/datasets/bigscience-biomedical/biosses ### Description I've just been working on adding the dataset loader script to this dataset and working with the relative imports. I'm not sure how to interpret the error below (show where the dataset preview used to be) . ``` Status code: 400 Exception: ModuleNotFoundError Message: No module named 'datasets_modules.datasets.bigscience-biomedical--biosses.ddbd5893bf6c2f4db06f407665eaeac619520ba41f69d94ead28f7cc5b674056.bigbiohub' ``` ### Owner Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4932/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4932/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4931
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4931/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4931/comments
https://api.github.com/repos/huggingface/datasets/issues/4931/events
https://github.com/huggingface/datasets/pull/4931
1,362,298,764
PR_kwDODunzps4-Y3L6
4,931
Fix missing tags in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-09-05T17:03:04Z"
"2022-09-22T12:40:15Z"
"2022-09-06T05:39:29Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4931.diff", "html_url": "https://github.com/huggingface/datasets/pull/4931", "merged_at": "2022-09-06T05:39:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/4931.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4931" }
Fix missing tags in dataset cards: - coqa - hyperpartisan_news_detection - opinosis - scientific_papers - scifact - search_qa - wiki_qa - wiki_split - wikisql This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896 - #4908 - #4921
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4931/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4931/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4930/comments
https://api.github.com/repos/huggingface/datasets/issues/4930/events
https://github.com/huggingface/datasets/pull/4930
1,362,193,587
PR_kwDODunzps4-Yflc
4,930
Add cc-by-nc-2.0 to list of licenses
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
5
"2022-09-05T15:37:32Z"
"2022-09-06T16:43:32Z"
"2022-09-05T17:01:04Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4930.diff", "html_url": "https://github.com/huggingface/datasets/pull/4930", "merged_at": "2022-09-05T17:01:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/4930.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4930" }
This PR adds the `cc-by-nc-2.0` to the list of licenses because it is required by `scifact` dataset: https://github.com/allenai/scifact/blob/master/LICENSE.md
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4930/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4930/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4929/comments
https://api.github.com/repos/huggingface/datasets/issues/4929/events
https://github.com/huggingface/datasets/pull/4929
1,361,508,366
PR_kwDODunzps4-WK2w
4,929
Fixes a typo in loading documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/7144772?v=4", "events_url": "https://api.github.com/users/sighingnow/events{/privacy}", "followers_url": "https://api.github.com/users/sighingnow/followers", "following_url": "https://api.github.com/users/sighingnow/following{/other_user}", "gists_url": "https://api.github.com/users/sighingnow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sighingnow", "id": 7144772, "login": "sighingnow", "node_id": "MDQ6VXNlcjcxNDQ3NzI=", "organizations_url": "https://api.github.com/users/sighingnow/orgs", "received_events_url": "https://api.github.com/users/sighingnow/received_events", "repos_url": "https://api.github.com/users/sighingnow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sighingnow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sighingnow/subscriptions", "type": "User", "url": "https://api.github.com/users/sighingnow" }
[]
closed
false
null
[]
null
0
"2022-09-05T07:18:54Z"
"2022-09-06T02:11:03Z"
"2022-09-05T13:06:38Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4929.diff", "html_url": "https://github.com/huggingface/datasets/pull/4929", "merged_at": "2022-09-05T13:06:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/4929.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4929" }
As show in the [documentation page](https://huggingface.co/docs/datasets/loading) here the `"tr"in` should be `"train`. ![image](https://user-images.githubusercontent.com/7144772/188390445-e1f04d54-e3e3-4762-8686-63ecbe4087e5.png)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4929/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4929/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4928
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4928/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4928/comments
https://api.github.com/repos/huggingface/datasets/issues/4928/events
https://github.com/huggingface/datasets/pull/4928
1,360,941,172
PR_kwDODunzps4-Ubi4
4,928
Add ability to read-write to SQL databases.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360" }
[]
closed
false
null
[]
null
14
"2022-09-03T19:09:08Z"
"2022-10-03T16:34:36Z"
"2022-10-03T16:32:28Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4928.diff", "html_url": "https://github.com/huggingface/datasets/pull/4928", "merged_at": "2022-10-03T16:32:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/4928.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4928" }
Fixes #3094 Add ability to read/write to SQLite files and also read from any SQL database supported by SQLAlchemy. I didn't add SQLAlchemy as a dependence as it is fairly big and it remains optional. I also recorded a Loom to showcase the feature. https://www.loom.com/share/f0e602c2de8a46f58bca4b43333d541f
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 2, "heart": 4, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 8, "url": "https://api.github.com/repos/huggingface/datasets/issues/4928/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4928/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4927/comments
https://api.github.com/repos/huggingface/datasets/issues/4927/events
https://github.com/huggingface/datasets/pull/4927
1,360,428,139
PR_kwDODunzps4-S0we
4,927
fix BLEU metric card
{ "avatar_url": "https://avatars.githubusercontent.com/u/40452030?v=4", "events_url": "https://api.github.com/users/antoniolanza1996/events{/privacy}", "followers_url": "https://api.github.com/users/antoniolanza1996/followers", "following_url": "https://api.github.com/users/antoniolanza1996/following{/other_user}", "gists_url": "https://api.github.com/users/antoniolanza1996/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/antoniolanza1996", "id": 40452030, "login": "antoniolanza1996", "node_id": "MDQ6VXNlcjQwNDUyMDMw", "organizations_url": "https://api.github.com/users/antoniolanza1996/orgs", "received_events_url": "https://api.github.com/users/antoniolanza1996/received_events", "repos_url": "https://api.github.com/users/antoniolanza1996/repos", "site_admin": false, "starred_url": "https://api.github.com/users/antoniolanza1996/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antoniolanza1996/subscriptions", "type": "User", "url": "https://api.github.com/users/antoniolanza1996" }
[]
closed
false
null
[]
null
0
"2022-09-02T17:00:56Z"
"2022-09-09T16:28:15Z"
"2022-09-09T16:28:15Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4927.diff", "html_url": "https://github.com/huggingface/datasets/pull/4927", "merged_at": "2022-09-09T16:28:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/4927.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4927" }
I've fixed some typos in BLEU metric card.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4927/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4927/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4926
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4926/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4926/comments
https://api.github.com/repos/huggingface/datasets/issues/4926/events
https://github.com/huggingface/datasets/pull/4926
1,360,384,484
PR_kwDODunzps4-Srm1
4,926
Dataset infos in yaml
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
6
"2022-09-02T16:10:05Z"
"2022-10-03T09:13:07Z"
"2022-10-03T09:11:12Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4926.diff", "html_url": "https://github.com/huggingface/datasets/pull/4926", "merged_at": "2022-10-03T09:11:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/4926.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4926" }
To simplify the addition of new datasets, we'd like to have the dataset infos in the YAML and deprecate the dataset_infos.json file. YAML is readable and easy to edit, and the YAML metadata of the readme already contain dataset metadata so we would have everything in one place. To be more specific, I moved these fields from DatasetInfo to the YAML: - config_name (if there are several configs) - download_size - dataset_size - features - splits Here is what I ended up with for `squad`: ```yaml dataset_info: features: - name: id dtype: string - name: title dtype: string - name: context dtype: string - name: question dtype: string - name: answers sequence: - name: text dtype: string - name: answer_start dtype: int32 splits: - name: train num_bytes: 79346360 num_examples: 87599 - name: validation num_bytes: 10473040 num_examples: 10570 config_name: plain_text download_size: 35142551 dataset_size: 89819400 ``` and it can be a list if there are several configs I already did the change for `conll2000` and `crime_and_punish` as an example. ## Implementation details ### Load/Read This is done via `DatasetInfosDict.write_to_directory/from_directory` I had to implement a custom the YAML export logic for `SplitDict`, `Version` and `Features`. The first two are trivial, but the logic for `Features` is more complicated, because I added a simplification step (or the YAML would be too long and less readable): it's just a formatting step to remove unnecessary nesting of YAML data. ### Other changes I had to update the DatasetModule factories to also download the README.md alongside the dataset scripts/data files, and not just the dataset_infos.json ## YAML validation I removed the old validation code that was in metadata.py, now we can just use the Hub YAML validation ## Datasets-cli The `datasets-cli test --save_infos` command now creates a README.md file with the dataset_infos in it, instead of a datasets_infos.json file ## Backward compatibility `dataset_infos.json` files are still supported and loaded if they exist to have full backward compatibility. Though I removed the unnecessary keys when the value is the default (like all the `id: null` from the Value feature types) to make them easier to read. ## TODO - [x] add comments - [x] tests - [x] document the new YAML fields - [x] try to reload the new dataset_infos.json file content with an old version of `datasets` ## EDITS - removed "config_name" when there's only one config - removed "version" for now (?), because it's not useful in general - renamed the yaml field "dataset_info" instead of "dataset_infos", since it only has one by default (and because "infos" is not english) Fix https://github.com/huggingface/datasets/issues/4876
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4926/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4926/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4925
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4925/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4925/comments
https://api.github.com/repos/huggingface/datasets/issues/4925/events
https://github.com/huggingface/datasets/pull/4925
1,360,007,616
PR_kwDODunzps4-RbP5
4,925
Add note about loading image / audio files to docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
closed
false
null
[]
null
9
"2022-09-02T10:31:58Z"
"2022-09-26T12:21:30Z"
"2022-09-23T13:59:07Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4925.diff", "html_url": "https://github.com/huggingface/datasets/pull/4925", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4925.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4925" }
This PR adds a small note about how to load image / audio datasets that have multiple splits in their dataset structure. Related forum thread: https://discuss.huggingface.co/t/loading-train-and-test-splits-with-audiofolder/22447 cc @NielsRogge
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4925/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4925/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4924/comments
https://api.github.com/repos/huggingface/datasets/issues/4924/events
https://github.com/huggingface/datasets/issues/4924
1,358,611,513
I_kwDODunzps5Q-sQ5
4,924
Concatenate_datasets loads everything into RAM
{ "avatar_url": "https://avatars.githubusercontent.com/u/39416047?v=4", "events_url": "https://api.github.com/users/louisdeneve/events{/privacy}", "followers_url": "https://api.github.com/users/louisdeneve/followers", "following_url": "https://api.github.com/users/louisdeneve/following{/other_user}", "gists_url": "https://api.github.com/users/louisdeneve/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/louisdeneve", "id": 39416047, "login": "louisdeneve", "node_id": "MDQ6VXNlcjM5NDE2MDQ3", "organizations_url": "https://api.github.com/users/louisdeneve/orgs", "received_events_url": "https://api.github.com/users/louisdeneve/received_events", "repos_url": "https://api.github.com/users/louisdeneve/repos", "site_admin": false, "starred_url": "https://api.github.com/users/louisdeneve/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/louisdeneve/subscriptions", "type": "User", "url": "https://api.github.com/users/louisdeneve" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
0
"2022-09-01T10:25:17Z"
"2022-09-01T11:50:54Z"
"2022-09-01T11:50:54Z"
NONE
null
null
null
## Describe the bug When loading the datasets seperately and saving them on disk, I want to concatenate them. But `concatenate_datasets` is filling up my RAM and the process gets killed. Is there a way to prevent this from happening or is this intended behaviour? Thanks in advance ## Steps to reproduce the bug ```python gcs = gcsfs.GCSFileSystem(project='project') datasets = [load_from_disk(f'path/to/slice/of/data/{i}', fs=gcs, keep_in_memory=False) for i in range(10)] dataset = concatenate_datasets(datasets) ``` ## Expected results A concatenated dataset which is stored on my disk. ## Actual results Concatenated dataset gets loaded into RAM and overflows it which gets the process killed. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 8.0.1 - Pandas version: 1.4.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4924/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4924/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4923/comments
https://api.github.com/repos/huggingface/datasets/issues/4923/events
https://github.com/huggingface/datasets/pull/4923
1,357,735,287
PR_kwDODunzps4-Jv7C
4,923
decode mp3 with librosa if torchaudio is > 0.12 as a temporary workaround
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
closed
false
null
[]
null
5
"2022-08-31T18:57:59Z"
"2022-11-02T11:54:33Z"
"2022-09-20T13:12:52Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4923.diff", "html_url": "https://github.com/huggingface/datasets/pull/4923", "merged_at": "2022-09-20T13:12:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/4923.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4923" }
`torchaudio>0.12` fails with decoding mp3 files if `ffmpeg<4`. currently we ask users to downgrade torchaudio, but sometimes it's not possible as torchaudio version is binded to torch version. as a temporary workaround we can decode mp3 with librosa (though it 60 times slower, at least it works) another option would be to ask users to install the required version of `ffmpeg`, but is non-trivial on colab: it's not in apt packages in ubuntu 18 and `conda` is not preinstalled (with `conda` it would be easily installable) - [x] decode with torchaudio anyway if the version of ffmpeg is correct? it's 60 times faster - [x] tests - [x] DO NOT FORGET to get back all the tests see https://github.com/huggingface/datasets/issues/4776 and https://github.com/huggingface/datasets/issues/3663#issuecomment-1225797165 (there is a Colab notebook to reproduce the error)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4923/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4923/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4922/comments
https://api.github.com/repos/huggingface/datasets/issues/4922/events
https://github.com/huggingface/datasets/issues/4922
1,357,684,018
I_kwDODunzps5Q7J0y
4,922
I/O error on Google Colab in streaming mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/5595043?v=4", "events_url": "https://api.github.com/users/jotterbach/events{/privacy}", "followers_url": "https://api.github.com/users/jotterbach/followers", "following_url": "https://api.github.com/users/jotterbach/following{/other_user}", "gists_url": "https://api.github.com/users/jotterbach/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jotterbach", "id": 5595043, "login": "jotterbach", "node_id": "MDQ6VXNlcjU1OTUwNDM=", "organizations_url": "https://api.github.com/users/jotterbach/orgs", "received_events_url": "https://api.github.com/users/jotterbach/received_events", "repos_url": "https://api.github.com/users/jotterbach/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jotterbach/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jotterbach/subscriptions", "type": "User", "url": "https://api.github.com/users/jotterbach" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
0
"2022-08-31T18:08:26Z"
"2022-08-31T18:15:48Z"
"2022-08-31T18:15:48Z"
NONE
null
null
null
## Describe the bug When trying to load a streaming dataset in Google Colab the loading fails with an I/O error ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION) list(hf_ds.take(5)) ``` ## Expected results It should load five data points ## Actual results ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) [<ipython-input-13-7b5b8b1e7e58>](https://localhost:8080/#) in <module> 2 from datasets import load_dataset 3 hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION) ----> 4 list(hf_ds.take(5)) 6 frames [/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self) 716 717 def __iter__(self): --> 718 for key, example in self._iter(): 719 if self.features: 720 # `IterableDataset` automatically fills missing columns with None. [/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in _iter(self) 706 else: 707 ex_iterable = self._ex_iterable --> 708 yield from ex_iterable 709 710 def _iter_shard(self, shard_idx: int): [/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self) 582 583 def __iter__(self): --> 584 yield from islice(self.ex_iterable, self.n) 585 586 def shuffle_data_sources(self, generator: np.random.Generator) -> "TakeExamplesIterable": [/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self) 110 111 def __iter__(self): --> 112 yield from self.generate_examples_fn(**self.kwargs) 113 114 def shuffle_data_sources(self, generator: np.random.Generator) -> "ExamplesIterable": [~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in _generate_examples(self, split_subsets, extraction_map, with_translation) 845 raise ValueError("Invalid number of files: %d" % len(files)) 846 --> 847 for sub_key, ex in sub_generator(*sub_generator_args): 848 if not all(ex.values()): 849 continue [~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in _parse_parallel_sentences(f1, f2, filename1, filename2) 923 l2_sentences, l2 = parse_file(f2_i, filename2) 924 --> 925 for line_id, (s1, s2) in enumerate(zip(l1_sentences, l2_sentences)): 926 key = f"{f_id}/{line_id}" 927 yield key, {l1: s1, l2: s2} [~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in gen() 895 896 def gen(): --> 897 with open(path, encoding="utf-8") as f: 898 for line in f: 899 seg_match = re.match(seg_re, line) ValueError: I/O operation on closed file. ``` ## Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.4.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 9.0.0. (the same error happened with PyArrow version 6.0.0) - Pandas version: 1.3.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4922/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4922/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/4921
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4921/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4921/comments
https://api.github.com/repos/huggingface/datasets/issues/4921/events
https://github.com/huggingface/datasets/pull/4921
1,357,609,003
PR_kwDODunzps4-JVFV
4,921
Fix missing tags in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-08-31T16:52:27Z"
"2022-09-22T14:34:11Z"
"2022-09-01T05:04:53Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4921.diff", "html_url": "https://github.com/huggingface/datasets/pull/4921", "merged_at": "2022-09-01T05:04:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/4921.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4921" }
Fix missing tags in dataset cards: - eraser_multi_rc - hotpot_qa - metooma - movie_rationales - qanta - quora - quoref - race - ted_hrlr - ted_talks_iwslt This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896 - #4908
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4921/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4921/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4920/comments
https://api.github.com/repos/huggingface/datasets/issues/4920/events
https://github.com/huggingface/datasets/issues/4920
1,357,564,589
I_kwDODunzps5Q6sqt
4,920
Unable to load local tsv files through load_dataset method
{ "avatar_url": "https://avatars.githubusercontent.com/u/44038517?v=4", "events_url": "https://api.github.com/users/DataNoob0723/events{/privacy}", "followers_url": "https://api.github.com/users/DataNoob0723/followers", "following_url": "https://api.github.com/users/DataNoob0723/following{/other_user}", "gists_url": "https://api.github.com/users/DataNoob0723/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DataNoob0723", "id": 44038517, "login": "DataNoob0723", "node_id": "MDQ6VXNlcjQ0MDM4NTE3", "organizations_url": "https://api.github.com/users/DataNoob0723/orgs", "received_events_url": "https://api.github.com/users/DataNoob0723/received_events", "repos_url": "https://api.github.com/users/DataNoob0723/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DataNoob0723/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DataNoob0723/subscriptions", "type": "User", "url": "https://api.github.com/users/DataNoob0723" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
1
"2022-08-31T16:13:39Z"
"2022-09-01T05:31:30Z"
"2022-09-01T05:31:30Z"
NONE
null
null
null
## Describe the bug Unable to load local tsv files through load_dataset method. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug data_files = { 'train': 'train.tsv', 'test': 'test.tsv' } raw_datasets = load_dataset('tsv', data_files=data_files) ## Expected results I am pretty sure the data files exist in the current directory. The above code should load them as Datasets, but threw exceptions. ## Actual results --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) [<ipython-input-9-24207899c1af>](https://localhost:8080/#) in <module> ----> 1 raw_datasets = load_dataset('tsv', data_files='train.tsv') 2 frames [/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1244 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1245 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" -> 1246 ) from None 1247 raise e1 from None 1248 else: FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/main/datasets/tsv/tsv.py ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4920/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4920/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4919/comments
https://api.github.com/repos/huggingface/datasets/issues/4919/events
https://github.com/huggingface/datasets/pull/4919
1,357,441,599
PR_kwDODunzps4-IxDZ
4,919
feat: improve error message on Keys mismatch. closes #4917
{ "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PaulLerner", "id": 25532159, "login": "PaulLerner", "node_id": "MDQ6VXNlcjI1NTMyMTU5", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "repos_url": "https://api.github.com/users/PaulLerner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "type": "User", "url": "https://api.github.com/users/PaulLerner" }
[]
closed
false
null
[]
null
2
"2022-08-31T14:41:36Z"
"2022-09-05T08:46:01Z"
"2022-09-05T08:43:33Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4919.diff", "html_url": "https://github.com/huggingface/datasets/pull/4919", "merged_at": "2022-09-05T08:43:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/4919.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4919" }
Hi @lhoestq what do you think? Let me give you a code sample: ```py >>> import datasets >>> foo = datasets.Dataset.from_dict({'foo':[0,1], 'bar':[2,3]}) >>> foo.save_to_disk('foo') # edit foo/dataset_info.json e.g. rename the 'foo' feature to 'baz' >>> datasets.load_from_disk('foo') --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-2-4863e606b330> in <module> ----> 1 datasets.load_from_disk('foo') ~/code/datasets/src/datasets/load.py in load_from_disk(dataset_path, fs, keep_in_memory) 1851 raise FileNotFoundError(f"Directory {dataset_path} not found") 1852 if fs.isfile(Path(dest_dataset_path, config.DATASET_INFO_FILENAME).as_posix()): -> 1853 return Dataset.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory) 1854 elif fs.isfile(Path(dest_dataset_path, config.DATASETDICT_JSON_FILENAME).as_posix()): 1855 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory) ~/code/datasets/src/datasets/arrow_dataset.py in load_from_disk(dataset_path, fs, keep_in_memory) 1230 info=dataset_info, 1231 split=split, -> 1232 fingerprint=state["_fingerprint"], 1233 ) 1234 ~/code/datasets/src/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint) 687 self.info.features = inferred_features 688 else: # make sure the nested columns are in the right order --> 689 self.info.features = self.info.features.reorder_fields_as(inferred_features) 690 691 # Infer fingerprint if None ~/code/datasets/src/datasets/features/features.py in reorder_fields_as(self, other) 1771 return source 1772 -> 1773 return Features(recursive_reorder(self, other)) 1774 1775 def flatten(self, max_depth=16) -> "Features": ~/code/datasets/src/datasets/features/features.py in recursive_reorder(source, target, stack) 1760 f"{source.keys()-target.keys()} are missing from dataset.arrow " 1761 f"and {target.keys()-source.keys()} are missing from dataset_info.json"+stack_position) -> 1762 raise ValueError(message) 1763 return {key: recursive_reorder(source[key], target[key], stack + f".{key}") for key in target} 1764 elif isinstance(source, list): ValueError: Keys mismatch: between {'baz': Value(dtype='int64', id=None), 'bar': Value(dtype='int64', id=None)} (dataset_info.json) and {'foo': Value(dtype='int64', id=None), 'bar': Value(dtype='int64', id=None)} (inferred from dataset.arrow). {'baz'} are missing from dataset.arrow and {'foo'} are missing from dataset_info.json ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4919/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4919/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4918/comments
https://api.github.com/repos/huggingface/datasets/issues/4918/events
https://github.com/huggingface/datasets/issues/4918
1,357,242,757
I_kwDODunzps5Q5eGF
4,918
Dataset Viewer issue for pysentimiento/spanish-targeted-sentiment-headlines
{ "avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4", "events_url": "https://api.github.com/users/finiteautomata/events{/privacy}", "followers_url": "https://api.github.com/users/finiteautomata/followers", "following_url": "https://api.github.com/users/finiteautomata/following{/other_user}", "gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/finiteautomata", "id": 167943, "login": "finiteautomata", "node_id": "MDQ6VXNlcjE2Nzk0Mw==", "organizations_url": "https://api.github.com/users/finiteautomata/orgs", "received_events_url": "https://api.github.com/users/finiteautomata/received_events", "repos_url": "https://api.github.com/users/finiteautomata/repos", "site_admin": false, "starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions", "type": "User", "url": "https://api.github.com/users/finiteautomata" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
2
"2022-08-31T12:09:07Z"
"2022-09-05T21:36:34Z"
"2022-09-05T16:32:44Z"
NONE
null
null
null
### Link https://huggingface.co/datasets/pysentimiento/spanish-targeted-sentiment-headlines ### Description After moving the dataset from my user (`finiteautomata`) to the `pysentimiento` organization, the dataset viewer says that it doesn't exist. ### Owner _No response_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4918/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4918/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4917/comments
https://api.github.com/repos/huggingface/datasets/issues/4917/events
https://github.com/huggingface/datasets/issues/4917
1,357,193,841
I_kwDODunzps5Q5SJx
4,917
Keys mismatch: make error message more informative
{ "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PaulLerner", "id": 25532159, "login": "PaulLerner", "node_id": "MDQ6VXNlcjI1NTMyMTU5", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "repos_url": "https://api.github.com/users/PaulLerner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "type": "User", "url": "https://api.github.com/users/PaulLerner" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
null
[]
null
4
"2022-08-31T11:24:34Z"
"2022-09-05T08:43:38Z"
"2022-09-05T08:43:38Z"
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** When loading a dataset from disk with a defect in its `dataset_info.json` describing its features (I don’t know when/why/how this happens but it deserves its own issue), you will get an error message like: `ValueError: Keys mismatch: between {'bar': Value(dtype='int64', id=None)} and {'foo': Value(dtype='int64', id=None)}` Which is fine when you have only a few features like in the example but it gets very hard to read when you have a lot of features in your dataset. **Describe the solution you'd like** The error message should give the difference between the features (what keys are in A but missing in B and vice-versa). It should also tell which keys are inferred from `dataset.arrow` and which come from `dataset_info.json`. Willing to help :)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4917/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4917/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4916/comments
https://api.github.com/repos/huggingface/datasets/issues/4916/events
https://github.com/huggingface/datasets/issues/4916
1,357,076,940
I_kwDODunzps5Q41nM
4,916
Apache Beam unable to write the downloaded wikipedia dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/71849081?v=4", "events_url": "https://api.github.com/users/Shilpac20/events{/privacy}", "followers_url": "https://api.github.com/users/Shilpac20/followers", "following_url": "https://api.github.com/users/Shilpac20/following{/other_user}", "gists_url": "https://api.github.com/users/Shilpac20/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Shilpac20", "id": 71849081, "login": "Shilpac20", "node_id": "MDQ6VXNlcjcxODQ5MDgx", "organizations_url": "https://api.github.com/users/Shilpac20/orgs", "received_events_url": "https://api.github.com/users/Shilpac20/received_events", "repos_url": "https://api.github.com/users/Shilpac20/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Shilpac20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shilpac20/subscriptions", "type": "User", "url": "https://api.github.com/users/Shilpac20" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
1
"2022-08-31T09:39:25Z"
"2022-08-31T10:53:19Z"
"2022-08-31T10:53:19Z"
NONE
null
null
null
## Describe the bug Hi, I am currently trying to download wikipedia dataset using load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner'). However, I end up in getting filenotfound error. I get this error for any language I try to download. It downloads the file but while saving it in hugging face cache it fails to write. This happens for any available date of any language in wikipedia dump. I had raised another issue earlier #4915 but probably was not that clear and the solution provider misunderstood my problem. Hence raising one more issue. Any help is appreciated. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner') ``` ## Expected results to load the dataset ## Actual results I am pasting the error trace here: Downloading builder script: 35.9kB [00:00, ?B/s] Downloading metadata: 30.4kB [00:00, 1.94MB/s] Using custom data configuration 20220401.aa-date=20220401,language=aa Downloading and preparing dataset wikipedia/20220401.aa to C:\Users\Shilpa.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559... Downloading data: 100%|████████████████████████████████████████████████████████████| 11.1k/11.1k [00:00<00:00, 712kB/s] Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.82s/it] Extracting data files: 100%|█████████████████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s] Downloading data: 100%|███████████████████████████████████████████████████████████| 35.6k/35.6k [00:00<00:00, 84.3kB/s] Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.93s/it] Traceback (most recent call last): File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process self.writer = self.sink.open_writer(init_result, str(uuid.uuid4())) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer return FileBasedSinkWriter(self, writer_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in init self.temp_handle = self.sink.open(temp_shard_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open self._file_handle = super().open(temp_path) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open temp_path, self.mime_type, self.compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create return filesystem.create(path, mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create return self._path_open(path, 'wb', mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open raw_file = io.open(path, mode) FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "G:/abc/temp.py", line 32, in beam_runner='DirectRunner') File "G:\Python3.7\lib\site-packages\datasets\load.py", line 1751, in load_dataset use_auth_token=use_auth_token, File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 705, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 1394, in _download_and_prepare pipeline_results = pipeline.run() File "G:\Python3.7\lib\site-packages\apache_beam\pipeline.py", line 574, in run return self.runner.run_pipeline(self, self._options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\direct\direct_runner.py", line 131, in run_pipeline return runner.run_pipeline(pipeline, options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 201, in run_pipeline options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 212, in run_via_runner_api return self.run_stages(stage_context, stages) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 443, in run_stages runner_execution_context, bundle_context_manager, bundle_input) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 776, in _execute_bundle bundle_manager)) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1000, in _run_bundle data_input, data_output, input_timers, expected_timer_output) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1309, in process_bundle result_future = self._worker_handler.control_conn.push(process_bundle_req) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\worker_handlers.py", line 380, in push response = self.worker.do_instruction(request) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 598, in do_instruction getattr(request, request_type), request.instruction_id) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 635, in process_bundle bundle_processor.process_bundle(instruction_id)) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 1004, in process_bundle element.data) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 227, in process_encoded self.output(decoded_value) File "apache_beam\runners\worker\operations.py", line 526, in apache_beam.runners.worker.operations.Operation.output File "apache_beam\runners\worker\operations.py", line 528, in apache_beam.runners.worker.operations.Operation.output File "apache_beam\runners\worker\operations.py", line 237, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 324, in apache_beam.runners.worker.operations.GeneralPurposeConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 905, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1507, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process self.writer = self.sink.open_writer(init_result, str(uuid.uuid4())) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer return FileBasedSinkWriter(self, writer_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in init self.temp_handle = self.sink.open(temp_shard_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open self._file_handle = super().open(temp_path) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open temp_path, self.mime_type, self.compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create return filesystem.create(path, mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create return self._path_open(path, 'wb', mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open raw_file = io.open(path, mode) RuntimeError: FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles'] ## Environment info Python: 3.7.6 Windows 10 Pro datasets :2.4.0 apache_beam: 2.41.0 mwparserfromhell: 0.6.4
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4916/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4916/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4915/comments
https://api.github.com/repos/huggingface/datasets/issues/4915/events
https://github.com/huggingface/datasets/issues/4915
1,356,009,042
I_kwDODunzps5Q0w5S
4,915
FileNotFoundError while downloading wikipedia dataset for any language
{ "avatar_url": "https://avatars.githubusercontent.com/u/71849081?v=4", "events_url": "https://api.github.com/users/Shilpac20/events{/privacy}", "followers_url": "https://api.github.com/users/Shilpac20/followers", "following_url": "https://api.github.com/users/Shilpac20/following{/other_user}", "gists_url": "https://api.github.com/users/Shilpac20/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Shilpac20", "id": 71849081, "login": "Shilpac20", "node_id": "MDQ6VXNlcjcxODQ5MDgx", "organizations_url": "https://api.github.com/users/Shilpac20/orgs", "received_events_url": "https://api.github.com/users/Shilpac20/received_events", "repos_url": "https://api.github.com/users/Shilpac20/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Shilpac20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shilpac20/subscriptions", "type": "User", "url": "https://api.github.com/users/Shilpac20" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
5
"2022-08-30T16:15:46Z"
"2022-12-04T22:20:33Z"
null
NONE
null
null
null
## Describe the bug Hi, I am currently trying to download wikipedia dataset using load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner'). However, I end up in getting filenotfound error. I get this error for any language I try to download. Environment: ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner') ``` ## Expected results to load the dataset ## Actual results I am pasting the error trace here: Downloading builder script: 35.9kB [00:00, ?B/s] Downloading metadata: 30.4kB [00:00, 1.94MB/s] Using custom data configuration 20220401.aa-date=20220401,language=aa Downloading and preparing dataset wikipedia/20220401.aa to C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559... Downloading data: 100%|████████████████████████████████████████████████████████████| 11.1k/11.1k [00:00<00:00, 712kB/s] Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.82s/it] Extracting data files: 100%|█████████████████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s] Downloading data: 100%|███████████████████████████████████████████████████████████| 35.6k/35.6k [00:00<00:00, 84.3kB/s] Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.93s/it] Traceback (most recent call last): File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process self.writer = self.sink.open_writer(init_result, str(uuid.uuid4())) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer return FileBasedSinkWriter(self, writer_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in __init__ self.temp_handle = self.sink.open(temp_shard_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open self._file_handle = super().open(temp_path) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open temp_path, self.mime_type, self.compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create return filesystem.create(path, mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create return self._path_open(path, 'wb', mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open raw_file = io.open(path, mode) FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Shilpa\\.cache\\huggingface\\datasets\\wikipedia\\20220401.aa-date=20220401,language=aa\\2.0.0\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "G:/abc/temp.py", line 32, in <module> beam_runner='DirectRunner') File "G:\Python3.7\lib\site-packages\datasets\load.py", line 1751, in load_dataset use_auth_token=use_auth_token, File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 705, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 1394, in _download_and_prepare pipeline_results = pipeline.run() File "G:\Python3.7\lib\site-packages\apache_beam\pipeline.py", line 574, in run return self.runner.run_pipeline(self, self._options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\direct\direct_runner.py", line 131, in run_pipeline return runner.run_pipeline(pipeline, options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 201, in run_pipeline options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 212, in run_via_runner_api return self.run_stages(stage_context, stages) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 443, in run_stages runner_execution_context, bundle_context_manager, bundle_input) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 776, in _execute_bundle bundle_manager)) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1000, in _run_bundle data_input, data_output, input_timers, expected_timer_output) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1309, in process_bundle result_future = self._worker_handler.control_conn.push(process_bundle_req) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\worker_handlers.py", line 380, in push response = self.worker.do_instruction(request) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 598, in do_instruction getattr(request, request_type), request.instruction_id) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 635, in process_bundle bundle_processor.process_bundle(instruction_id)) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 1004, in process_bundle element.data) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 227, in process_encoded self.output(decoded_value) File "apache_beam\runners\worker\operations.py", line 526, in apache_beam.runners.worker.operations.Operation.output File "apache_beam\runners\worker\operations.py", line 528, in apache_beam.runners.worker.operations.Operation.output File "apache_beam\runners\worker\operations.py", line 237, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 324, in apache_beam.runners.worker.operations.GeneralPurposeConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 905, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1507, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process self.writer = self.sink.open_writer(init_result, str(uuid.uuid4())) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer return FileBasedSinkWriter(self, writer_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in __init__ self.temp_handle = self.sink.open(temp_shard_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open self._file_handle = super().open(temp_path) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open temp_path, self.mime_type, self.compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create return filesystem.create(path, mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create return self._path_open(path, 'wb', mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open raw_file = io.open(path, mode) RuntimeError: FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Shilpa\\.cache\\huggingface\\datasets\\wikipedia\\20220401.aa-date=20220401,language=aa\\2.0.0\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles'] ## Environment info Python: 3.7.6 Windows 10 Pro datasets :2.4.0 apache_beam: 2.41.0 mwparserfromhell: 0.6.4
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4915/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4915/timeline
null
reopened
false
https://api.github.com/repos/huggingface/datasets/issues/4914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4914/comments
https://api.github.com/repos/huggingface/datasets/issues/4914/events
https://github.com/huggingface/datasets/pull/4914
1,355,482,624
PR_kwDODunzps4-CFyN
4,914
Support streaming swda dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-08-30T09:46:28Z"
"2022-08-30T11:16:33Z"
"2022-08-30T11:14:16Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4914.diff", "html_url": "https://github.com/huggingface/datasets/pull/4914", "merged_at": "2022-08-30T11:14:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/4914.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4914" }
Support streaming swda dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4914/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4914/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4913/comments
https://api.github.com/repos/huggingface/datasets/issues/4913/events
https://github.com/huggingface/datasets/pull/4913
1,355,232,007
PR_kwDODunzps4-BP00
4,913
Add license and citation information to cosmos_qa dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-08-30T06:23:19Z"
"2022-08-30T09:49:31Z"
"2022-08-30T09:47:35Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4913.diff", "html_url": "https://github.com/huggingface/datasets/pull/4913", "merged_at": "2022-08-30T09:47:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/4913.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4913" }
This PR adds the license information to `cosmos_qa` dataset, once reported via email by Yejin Choi, the dataset is licensed under CC BY 4.0. This PR also updates the citation information.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4913/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4913/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4912/comments
https://api.github.com/repos/huggingface/datasets/issues/4912/events
https://github.com/huggingface/datasets/issues/4912
1,355,078,864
I_kwDODunzps5QxNzQ
4,912
datasets map() handles all data at a stroke and takes long time
{ "avatar_url": "https://avatars.githubusercontent.com/u/40711748?v=4", "events_url": "https://api.github.com/users/BruceStayHungry/events{/privacy}", "followers_url": "https://api.github.com/users/BruceStayHungry/followers", "following_url": "https://api.github.com/users/BruceStayHungry/following{/other_user}", "gists_url": "https://api.github.com/users/BruceStayHungry/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BruceStayHungry", "id": 40711748, "login": "BruceStayHungry", "node_id": "MDQ6VXNlcjQwNzExNzQ4", "organizations_url": "https://api.github.com/users/BruceStayHungry/orgs", "received_events_url": "https://api.github.com/users/BruceStayHungry/received_events", "repos_url": "https://api.github.com/users/BruceStayHungry/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BruceStayHungry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceStayHungry/subscriptions", "type": "User", "url": "https://api.github.com/users/BruceStayHungry" }
[]
closed
false
null
[]
null
7
"2022-08-30T02:25:56Z"
"2023-04-06T09:43:58Z"
"2022-09-06T09:23:35Z"
NONE
null
null
null
**1. Background** Huggingface datasets package advises using `map()` to process data in batches. In the example code on pretraining masked language model, they use `map()` to tokenize all data at a stroke before the train loop. The corresponding code: ``` with accelerator.main_process_first(): tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not args.overwrite_cache, desc="Running tokenizer on every text in dataset" ) ``` **2. The problem** Thus, when I try the same pertaining code with a much larger corpus, it takes quite a long time to tokenize. Also, we can choose to tokenize data in `data-collator`. In this way, the program only tokenizes one batch in the next training step and avoids getting stuck in tokenization. **3. My question** As described above, my questions are: * **Which is better? Process in `map()` or in `data-collator`** * **Why huggingface advises `map()` function?** There should be some advantages to using `map()` Thanks for your answers!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4912/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4912/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4911/comments
https://api.github.com/repos/huggingface/datasets/issues/4911/events
https://github.com/huggingface/datasets/issues/4911
1,354,426,978
I_kwDODunzps5Quupi
4,911
[Tests] Ensure `datasets` supports renamed repositories
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues", "id": 3761482852, "name": "good second issue", "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue" } ]
open
false
null
[]
null
1
"2022-08-29T14:46:14Z"
"2022-08-29T15:31:03Z"
null
MEMBER
null
null
null
On https://hf.co/datasets you can rename a dataset (or sometimes move it to another user/org). The website handles redirections correctly and AFAIK `datasets` does as well. However it would be nice to have an integration test to make sure we don't break support for renamed datasets. To implement this we can use the /api/repos/move endpoint on hub-ci to rename/move a repo (it is documented at https://huggingface.co/docs/hub/api)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4911/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4911/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4910/comments
https://api.github.com/repos/huggingface/datasets/issues/4910/events
https://github.com/huggingface/datasets/issues/4910
1,354,374,328
I_kwDODunzps5Quhy4
4,910
Identical keywords in build_kwargs and config_kwargs lead to TypeError in load_dataset_builder()
{ "avatar_url": "https://avatars.githubusercontent.com/u/57184353?v=4", "events_url": "https://api.github.com/users/bablf/events{/privacy}", "followers_url": "https://api.github.com/users/bablf/followers", "following_url": "https://api.github.com/users/bablf/following{/other_user}", "gists_url": "https://api.github.com/users/bablf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bablf", "id": 57184353, "login": "bablf", "node_id": "MDQ6VXNlcjU3MTg0MzUz", "organizations_url": "https://api.github.com/users/bablf/orgs", "received_events_url": "https://api.github.com/users/bablf/received_events", "repos_url": "https://api.github.com/users/bablf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bablf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bablf/subscriptions", "type": "User", "url": "https://api.github.com/users/bablf" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/21123710?v=4", "events_url": "https://api.github.com/users/thepurpleowl/events{/privacy}", "followers_url": "https://api.github.com/users/thepurpleowl/followers", "following_url": "https://api.github.com/users/thepurpleowl/following{/other_user}", "gists_url": "https://api.github.com/users/thepurpleowl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thepurpleowl", "id": 21123710, "login": "thepurpleowl", "node_id": "MDQ6VXNlcjIxMTIzNzEw", "organizations_url": "https://api.github.com/users/thepurpleowl/orgs", "received_events_url": "https://api.github.com/users/thepurpleowl/received_events", "repos_url": "https://api.github.com/users/thepurpleowl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thepurpleowl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thepurpleowl/subscriptions", "type": "User", "url": "https://api.github.com/users/thepurpleowl" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/21123710?v=4", "events_url": "https://api.github.com/users/thepurpleowl/events{/privacy}", "followers_url": "https://api.github.com/users/thepurpleowl/followers", "following_url": "https://api.github.com/users/thepurpleowl/following{/other_user}", "gists_url": "https://api.github.com/users/thepurpleowl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thepurpleowl", "id": 21123710, "login": "thepurpleowl", "node_id": "MDQ6VXNlcjIxMTIzNzEw", "organizations_url": "https://api.github.com/users/thepurpleowl/orgs", "received_events_url": "https://api.github.com/users/thepurpleowl/received_events", "repos_url": "https://api.github.com/users/thepurpleowl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thepurpleowl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thepurpleowl/subscriptions", "type": "User", "url": "https://api.github.com/users/thepurpleowl" } ]
null
7
"2022-08-29T14:11:48Z"
"2022-09-13T11:58:46Z"
null
NONE
null
null
null
## Describe the bug In `load_dataset_builder()`, `build_kwargs` and `config_kwargs` can contain the same keywords leading to a TypeError("type object got multiple values for keyword argument "xyz"). I ran into this problem with the keyword: `base_path`. It might happen with other kwargs as well. I think a quickfix would be ```python builder_cls = import_main_class(dataset_module.module_path) builder_kwargs = dataset_module.builder_kwargs data_files = builder_kwargs.pop("data_files", data_files) config_name = builder_kwargs.pop("config_name", name) hash = builder_kwargs.pop("hash") base_path = builder_kwargs.pop("base_path") ``` and then pass base_path into `builder_cls`. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("rotten_tomatoes", base_path="./sample_data") ``` ## Expected results The docs state: `**config_kwargs` — Keyword arguments to be passed to the [BuilderConfig](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.BuilderConfig) and used in the [DatasetBuilder](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.DatasetBuilder). So I would expect to be able to pass the base_path into `load_dataset()`. ## Actual results TypeError("type object got multiple values for keyword argument "base_path"). ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: macOS-12.5-arm64-arm-64bit - Python version: 3.8.9 - PyArrow version: 9.0.0
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4910/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4910/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4909
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4909/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4909/comments
https://api.github.com/repos/huggingface/datasets/issues/4909/events
https://github.com/huggingface/datasets/pull/4909
1,353,997,788
PR_kwDODunzps499Fhe
4,909
Update GLUE evaluation metadata
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
closed
false
null
[]
null
1
"2022-08-29T09:43:44Z"
"2022-08-29T14:53:29Z"
"2022-08-29T14:51:18Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4909.diff", "html_url": "https://github.com/huggingface/datasets/pull/4909", "merged_at": "2022-08-29T14:51:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/4909.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4909" }
This PR updates the evaluation metadata for GLUE to: * Include defaults for all configs except `ax` (which only has a `test` split with no known labels) * Fix the default split from `test` to `validation` since `test` splits in GLUE have no labels (they're private) * Fix the `task_id` for some existing defaults cc @sashavor @douwekiela
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4909/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4909/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4908/comments
https://api.github.com/repos/huggingface/datasets/issues/4908/events
https://github.com/huggingface/datasets/pull/4908
1,353,995,574
PR_kwDODunzps499FDS
4,908
Fix missing tags in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-08-29T09:41:53Z"
"2022-09-22T14:35:56Z"
"2022-08-29T16:13:07Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4908.diff", "html_url": "https://github.com/huggingface/datasets/pull/4908", "merged_at": "2022-08-29T16:13:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/4908.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4908" }
Fix missing tags in dataset cards: - asnq - clue - common_gen - cosmos_qa - guardian_authorship - hindi_discourse - py_ast - x_stance This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4908/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4908/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4907/comments
https://api.github.com/repos/huggingface/datasets/issues/4907/events
https://github.com/huggingface/datasets/issues/4907
1,353,808,348
I_kwDODunzps5QsXnc
4,907
None Type error for swda datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8229163?v=4", "events_url": "https://api.github.com/users/hannan72/events{/privacy}", "followers_url": "https://api.github.com/users/hannan72/followers", "following_url": "https://api.github.com/users/hannan72/following{/other_user}", "gists_url": "https://api.github.com/users/hannan72/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hannan72", "id": 8229163, "login": "hannan72", "node_id": "MDQ6VXNlcjgyMjkxNjM=", "organizations_url": "https://api.github.com/users/hannan72/orgs", "received_events_url": "https://api.github.com/users/hannan72/received_events", "repos_url": "https://api.github.com/users/hannan72/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hannan72/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hannan72/subscriptions", "type": "User", "url": "https://api.github.com/users/hannan72" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
3
"2022-08-29T07:05:20Z"
"2022-08-30T14:43:41Z"
"2022-08-30T14:43:41Z"
NONE
null
null
null
## Describe the bug I got `'NoneType' object is not callable` error while calling the swda datasets. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("swda") ``` ## Expected results Run without error ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Python version: 3.8.10
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4907/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4907/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4906/comments
https://api.github.com/repos/huggingface/datasets/issues/4906/events
https://github.com/huggingface/datasets/issues/4906
1,353,223,925
I_kwDODunzps5QqI71
4,906
Can't import datasets AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)
{ "avatar_url": "https://avatars.githubusercontent.com/u/63536981?v=4", "events_url": "https://api.github.com/users/OPterminator/events{/privacy}", "followers_url": "https://api.github.com/users/OPterminator/followers", "following_url": "https://api.github.com/users/OPterminator/following{/other_user}", "gists_url": "https://api.github.com/users/OPterminator/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/OPterminator", "id": 63536981, "login": "OPterminator", "node_id": "MDQ6VXNlcjYzNTM2OTgx", "organizations_url": "https://api.github.com/users/OPterminator/orgs", "received_events_url": "https://api.github.com/users/OPterminator/received_events", "repos_url": "https://api.github.com/users/OPterminator/repos", "site_admin": false, "starred_url": "https://api.github.com/users/OPterminator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OPterminator/subscriptions", "type": "User", "url": "https://api.github.com/users/OPterminator" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
6
"2022-08-28T02:23:24Z"
"2023-10-27T20:08:28Z"
"2022-10-03T12:22:50Z"
NONE
null
null
null
## Describe the bug A clear and concise description of what the bug is. Not able to import datasets ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import os os.environ["WANDB_API_KEY"] = "0" ## to silence warning import numpy as np import random import sklearn import matplotlib.pyplot as plt import pandas as pd import sys import tensorflow as tf import plotly.express as px import transformers import tokenizers import nlp as nlp import utils import datasets ``` ## Expected results A clear and concise description of the expected results. import should work normal ## Actual results Specify the actual results or traceback. --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-21-b3b5b0b62103> in <module> 13 import nlp as nlp 14 import utils ---> 15 import datasets ~\anaconda3\lib\site-packages\datasets\__init__.py in <module> 44 from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled 45 from .info import DatasetInfo, MetricInfo ---> 46 from .inspect import ( 47 get_dataset_config_info, 48 get_dataset_config_names, ~\anaconda3\lib\site-packages\datasets\inspect.py in <module> 28 from .download.streaming_download_manager import StreamingDownloadManager 29 from .info import DatasetInfo ---> 30 from .load import dataset_module_factory, import_main_class, load_dataset_builder, metric_module_factory 31 from .utils.file_utils import relative_to_absolute_path 32 from .utils.logging import get_logger ~\anaconda3\lib\site-packages\datasets\load.py in <module> 53 from .iterable_dataset import IterableDataset 54 from .metric import Metric ---> 55 from .packaged_modules import ( 56 _EXTENSION_TO_MODULE, 57 _MODULE_SUPPORTS_METADATA, ~\anaconda3\lib\site-packages\datasets\packaged_modules\__init__.py in <module> 4 from typing import List 5 ----> 6 from .csv import csv 7 from .imagefolder import imagefolder 8 from .json import json ~\anaconda3\lib\site-packages\datasets\packaged_modules\csv\csv.py in <module> 13 14 ---> 15 logger = datasets.utils.logging.get_logger(__name__) 16 17 _PANDAS_READ_CSV_NO_DEFAULT_PARAMETERS = ["names", "prefix"] AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.4.0 - Platform: Windows-10-10.0.22000-SP0 - Python version: 3.8.8 - PyArrow version: 9.0.0 - Pandas version: 1.2.4
{ "+1": 5, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/4906/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4906/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4904/comments
https://api.github.com/repos/huggingface/datasets/issues/4904/events
https://github.com/huggingface/datasets/pull/4904
1,353,002,837
PR_kwDODunzps4959Ad
4,904
[LibriSpeech] Fix dev split local_extracted_archive for 'all' config
{ "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanchit-gandhi", "id": 93869735, "login": "sanchit-gandhi", "node_id": "U_kgDOBZhWpw", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "type": "User", "url": "https://api.github.com/users/sanchit-gandhi" }
[]
closed
false
null
[]
null
2
"2022-08-27T10:04:57Z"
"2022-08-30T10:06:21Z"
"2022-08-30T10:03:25Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4904.diff", "html_url": "https://github.com/huggingface/datasets/pull/4904", "merged_at": "2022-08-30T10:03:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/4904.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4904" }
We define the keys for the `_DL_URLS` of the dev split as `dev.clean` and `dev.other`: https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L60-L61 These keys get forwarded to the `dl_manager` and thus the `local_extracted_archive`. However, when calling `SplitGenerator` for the dev sets, we query the `local_extracted_archive` keys `validation.clean` and `validation.other`: https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L212 https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L219 The consequence of this is that the `local_extracted_archive` arg passed to `_generate_examples` is always `None`, as the keys `validation.clean` and `validation.other` do not exists in the `local_extracted_archive`. When defining the `audio_file` in `_generate_examples`, since `local_extracted_archive` is always `None`, we always omit the `local_extracted_archive` path from the `audio_file` path, **even** if in non-streaming mode: https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L259-L263 Thus, `audio_file` will only ever be the streaming path (`audio_file`, not `os.path.join(local_extracted_archive, audio_file)`). This PR fixes the `.get()` keys for the `local_extracted_archive` for the dev splits.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4904/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4904/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4903/comments
https://api.github.com/repos/huggingface/datasets/issues/4903/events
https://github.com/huggingface/datasets/pull/4903
1,352,539,075
PR_kwDODunzps494aud
4,903
Fix CI reporting
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-08-26T17:16:30Z"
"2022-08-26T17:49:33Z"
"2022-08-26T17:46:59Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4903.diff", "html_url": "https://github.com/huggingface/datasets/pull/4903", "merged_at": "2022-08-26T17:46:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/4903.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4903" }
Fix CI so that it reports defaults (failed and error) besides the custom (xfailed and xpassed) in the test summary. This PR fixes a regression introduced by: - #4845 This introduced the reporting of xfailed and xpassed, but wrongly removed the reporting of the defaults failed and error.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4903/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4903/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4902/comments
https://api.github.com/repos/huggingface/datasets/issues/4902/events
https://github.com/huggingface/datasets/issues/4902
1,352,469,196
I_kwDODunzps5QnQrM
4,902
Name the default config `default`
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
null
[]
null
1
"2022-08-26T16:16:22Z"
"2023-07-24T21:15:31Z"
"2023-07-24T21:15:31Z"
CONTRIBUTOR
null
null
null
Currently, if a dataset has no configuration, a default configuration is created from the dataset name. For example, for a dataset loaded from the hub repository, such as https://huggingface.co/datasets/user/dataset (repo id is `user/dataset`), the default configuration will be `user--dataset`. It might be easier to handle to set it to `default`, or another reserved word.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4902/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4902/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4901/comments
https://api.github.com/repos/huggingface/datasets/issues/4901/events
https://github.com/huggingface/datasets/pull/4901
1,352,438,915
PR_kwDODunzps494FNX
4,901
Raise ManualDownloadError from get_dataset_config_info
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-08-26T15:45:56Z"
"2022-08-30T10:42:21Z"
"2022-08-30T10:40:04Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4901.diff", "html_url": "https://github.com/huggingface/datasets/pull/4901", "merged_at": "2022-08-30T10:40:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/4901.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4901" }
This PRs raises a specific `ManualDownloadError` when `get_dataset_config_info` is called for a dataset that requires manual download. Related to: - #4898 CC: @severo
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4901/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4901/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4900/comments
https://api.github.com/repos/huggingface/datasets/issues/4900/events
https://github.com/huggingface/datasets/issues/4900
1,352,405,855
I_kwDODunzps5QnBNf
4,900
Dataset Viewer issue for asaxena1990/Dummy_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/56627657?v=4", "events_url": "https://api.github.com/users/ankurcl/events{/privacy}", "followers_url": "https://api.github.com/users/ankurcl/followers", "following_url": "https://api.github.com/users/ankurcl/following{/other_user}", "gists_url": "https://api.github.com/users/ankurcl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ankurcl", "id": 56627657, "login": "ankurcl", "node_id": "MDQ6VXNlcjU2NjI3NjU3", "organizations_url": "https://api.github.com/users/ankurcl/orgs", "received_events_url": "https://api.github.com/users/ankurcl/received_events", "repos_url": "https://api.github.com/users/ankurcl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ankurcl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankurcl/subscriptions", "type": "User", "url": "https://api.github.com/users/ankurcl" }
[]
closed
false
null
[]
null
3
"2022-08-26T15:15:44Z"
"2023-07-24T15:42:09Z"
"2023-07-24T15:42:09Z"
NONE
null
null
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4900/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4900/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4899/comments
https://api.github.com/repos/huggingface/datasets/issues/4899/events
https://github.com/huggingface/datasets/pull/4899
1,352,031,286
PR_kwDODunzps492uTO
4,899
Re-add code and und language tags
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-08-26T09:48:57Z"
"2022-08-26T10:27:18Z"
"2022-08-26T10:24:20Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4899.diff", "html_url": "https://github.com/huggingface/datasets/pull/4899", "merged_at": "2022-08-26T10:24:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/4899.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4899" }
This PR fixes the removal of 2 language tags done by: - #4882 The tags are: - "code": this is not a IANA tag but needed - "und": this is one of the special scoped tags removed by 0d53202b9abce6fd0358cb00d06fcfd904b875af - used in "mc4" and "udhr" datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4899/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4899/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4898/comments
https://api.github.com/repos/huggingface/datasets/issues/4898/events
https://github.com/huggingface/datasets/issues/4898
1,351,851,254
I_kwDODunzps5Qk5z2
4,898
Dataset Viewer issue for timit_asr
{ "avatar_url": "https://avatars.githubusercontent.com/u/91126978?v=4", "events_url": "https://api.github.com/users/InayatUllah932/events{/privacy}", "followers_url": "https://api.github.com/users/InayatUllah932/followers", "following_url": "https://api.github.com/users/InayatUllah932/following{/other_user}", "gists_url": "https://api.github.com/users/InayatUllah932/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/InayatUllah932", "id": 91126978, "login": "InayatUllah932", "node_id": "MDQ6VXNlcjkxMTI2OTc4", "organizations_url": "https://api.github.com/users/InayatUllah932/orgs", "received_events_url": "https://api.github.com/users/InayatUllah932/received_events", "repos_url": "https://api.github.com/users/InayatUllah932/repos", "site_admin": false, "starred_url": "https://api.github.com/users/InayatUllah932/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/InayatUllah932/subscriptions", "type": "User", "url": "https://api.github.com/users/InayatUllah932" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
5
"2022-08-26T07:12:05Z"
"2022-10-03T12:40:28Z"
"2022-10-03T12:40:27Z"
NONE
null
null
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4898/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4898/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4897/comments
https://api.github.com/repos/huggingface/datasets/issues/4897/events
https://github.com/huggingface/datasets/issues/4897
1,351,784,727
I_kwDODunzps5QkpkX
4,897
datasets generate large arrow file
{ "avatar_url": "https://avatars.githubusercontent.com/u/18533904?v=4", "events_url": "https://api.github.com/users/jax11235/events{/privacy}", "followers_url": "https://api.github.com/users/jax11235/followers", "following_url": "https://api.github.com/users/jax11235/following{/other_user}", "gists_url": "https://api.github.com/users/jax11235/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jax11235", "id": 18533904, "login": "jax11235", "node_id": "MDQ6VXNlcjE4NTMzOTA0", "organizations_url": "https://api.github.com/users/jax11235/orgs", "received_events_url": "https://api.github.com/users/jax11235/received_events", "repos_url": "https://api.github.com/users/jax11235/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jax11235/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jax11235/subscriptions", "type": "User", "url": "https://api.github.com/users/jax11235" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
2
"2022-08-26T05:51:16Z"
"2022-09-18T05:07:52Z"
"2022-09-18T05:07:52Z"
NONE
null
null
null
Checking the large file in disk, and found the large cache file in the cifar10 data directory: ![image](https://user-images.githubusercontent.com/18533904/186830449-ba96cdeb-0fe8-4543-994d-2abe7145933f.png) As we know, the size of cifar10 dataset is ~130MB, but the cache file has almost 30GB size, there may be some problems here.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4897/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4897/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4896/comments
https://api.github.com/repos/huggingface/datasets/issues/4896/events
https://github.com/huggingface/datasets/pull/4896
1,351,180,409
PR_kwDODunzps49z4fU
4,896
Fix missing tags in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2022-08-25T16:41:43Z"
"2022-09-22T14:37:16Z"
"2022-08-26T04:41:48Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4896.diff", "html_url": "https://github.com/huggingface/datasets/pull/4896", "merged_at": "2022-08-26T04:41:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/4896.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4896" }
Fix missing tags in dataset cards: - anli - coarse_discourse - commonsense_qa - cos_e - ilist - lc_quad - web_questions - xsum This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4896/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4896/timeline
null
null
true