url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | is_pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1939/comments | https://api.github.com/repos/huggingface/datasets/issues/1939/events | https://github.com/huggingface/datasets/issues/1939 | 815,680,510 | MDU6SXNzdWU4MTU2ODA1MTA= | 1,939 | [firewalled env] OFFLINE mode | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks for reporting and for all the details and suggestions.\r\n\r\nI'm totally in favor of having a HF_DATASETS_OFFLINE env variable to disable manually all the connection checks, remove retries etc.\r\n\r\nMoreover you may know that the use case that you are mentioning is already supported from `datasets` 1.3.0, i.e. you already can:\r\n- first load datasets and metrics from an instance with internet connection\r\n- then be able to reload datasets and metrics from another instance without connection (as long as the filesystem is shared)\r\n\r\nThis is already implemented, but currently it only works if the requests return a `ConnectionError` (or any error actually). Not sure why it would hang instead of returning an error.\r\n\r\nMaybe this is just a issue with the timeout value being not set or too high ?\r\nIs there a way I can have access to one of the instances on which there's this issue (we can discuss this offline) ?\r\n",
"I'm on master, so using all the available bells and whistles already.\r\n\r\nIf you look at the common issues - it for example tries to look up files if they appear in `_PACKAGED_DATASETS_MODULES` which it shouldn't do.\r\n\r\n--------------\r\n\r\nYes, there is a nuance to it. As I mentioned it's firewalled - that is it has a network but making any calls outside - it just hangs in:\r\n\r\n```\r\nsin_addr=inet_addr(\"xx.xx.xx.xx\")}, [28->16]) = 0\r\nclose(5) = 0\r\nsocket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) = 5\r\nconnect(5, {sa_family=AF_INET, sin_port=htons(3128), sin_addr=inet_addr(\"yy.yy.yy.yy\")}, 16^C) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)\r\n```\r\nuntil it times out.\r\n\r\nThat's why we need to be able to tell the software that there is no network to rely on even if there is one (good for testing too).\r\n\r\nSo what I'm thinking is that this is a simple matter of pre-ambling any network call wrappers with:\r\n\r\n```\r\nif HF_DATASETS_OFFLINE:\r\n assert \"Attempting to make a network call under Offline mode\"\r\n```\r\n\r\nand then fixing up if there is anything else to fix to make it work.\r\n\r\n--------------\r\n\r\nOtherwise I think the only other problem I encountered is that we need to find a way to pre-cache metrics, for some reason it's not caching it and wanting to fetch it from online.\r\n\r\nWhich is extra strange since it already has those files in the `datasets` repo itself that is on the filesystem.\r\n\r\nThe workaround I had to do is to copy `rouge/rouge.py` (with the parent folder) from the datasets repo to the current dir - and then it proceeded.",
"Ok understand better the hanging issue.\r\nI guess catching connection errors is not enough, we should also avoid all the hangings.\r\nCurrently the offline mode tests are only done by simulating an instant connection fail that returns an error, let's have another connection mock that hangs instead.\r\n\r\nI'll also take a look at why you had to do this for `rouge`.\r\n",
"FWIW, I think instant failure on the behalf of a network call is the simplest solution to correctly represent the environment and having the caller to sort it out is the next thing to do, since here it is the case of having no functional network, it's just that the software doesn't know this is the case, because there is some network. So we just need to help it to bail out instantly rather than hang waiting for it to time out. And afterwards everything else you said.",
"Update on this: \r\n\r\nI managed to create a mock environment for tests that makes the connections hang until timeout.\r\nI managed to reproduce the issue you're having in this environment.\r\n\r\nI'll update the offline test cases to also test the robustness to connection hangings, and make sure we set proper timeouts where it's needed in the code. This should cover the _automatic_ section you mentioned.",
"Fabulous! I'm glad you were able to reproduce the issues, @lhoestq!",
"I lost access to the firewalled setup, but I emulated it with:\r\n\r\n```\r\nsudo ufw enable\r\nsudo ufw default deny outgoing\r\n```\r\n(thanks @mfuntowicz)\r\n\r\nI was able to test `HF_DATASETS_OFFLINE=1` and it worked great - i.e. didn't try to reach out with it and used the cached files instead.\r\n\r\nThank you!"
] | 1,614,186,822,000 | 1,614,920,994,000 | 1,614,920,994,000 | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | This issue comes from the need to be able to run `datasets` in a firewalled env, where the software currently hangs until it times out because it's unable to complete its network calls.
I propose the following approach to solving this problem, using `run_seq2seq.py` as a sample program. There are two possible ways to go about it.
## 1. Manual
Manually prepare the data and metrics files, i.e. transfer the dataset and the metrics to the firewalled instance, and run:
```
DATASETS_OFFLINE=1 run_seq2seq.py --train_file xyz.csv --validation_file xyz.csv ...
```
`datasets` must not make any network calls, and if some logic tries to do so because something is missing, it should assert that the action requires network access and therefore cannot proceed.
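A minimal sketch of what such a guard could look like (the env variable name and the wrapper are illustrative assumptions, not the actual `datasets` internals):

```python
import os

# assumption: a single env var toggles offline behavior, as proposed above
HF_DATASETS_OFFLINE = os.environ.get("HF_DATASETS_OFFLINE", "0") == "1"

def assert_online(action: str):
    # fail fast instead of hanging until the connection times out
    if HF_DATASETS_OFFLINE:
        raise ConnectionError(
            f"Offline mode is enabled, but '{action}' requires network access"
        )
```

Any network call wrapper could then invoke `assert_online("download dataset script")` before reaching out.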
## 2. Automatic
In some clouds one can prepare the data storage ahead of time in a normally networked environment without GPUs, and then switch to the firewalled GPU instance, which can still access all the cached data. This is the ideal situation, since in this scenario we don't have to do anything manually, but simply run the same application twice:
1. on the non-firewalled instance:
```
run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ...
```
which should download and cache everything.
2. and then, immediately after, on the firewalled instance, which shares the same filesystem:
```
DATASETS_OFFLINE=1 run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ...
```
The metrics and datasets should already be cached by invocation 1, any network calls should be skipped, and if the logic is missing data it should assert rather than try to fetch anything online.
## Common Issues
1. For example, `datasets` currently tries to look up datasets online if the files contain JSON or CSV, despite the paths already being provided:
```
if dataset and path in _PACKAGED_DATASETS_MODULES:
```
2. There is an issue with metrics: e.g. I had to manually copy `rouge/rouge.py` (with its parent folder) from the `datasets` repo to the current dir - or it was hanging.
I also had to comment out the `head_hf_s3(...)` calls to make things work. So all those `try: head_hf_s3(...)` calls shouldn't be attempted with `DATASETS_OFFLINE=1`.
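A sketch of such a guard, reusing the hypothetical flag from the sketch above (`head_hf_s3`'s arguments are deliberately elided, as in the calls mentioned in this issue):

```python
# skip the optional remote availability check entirely in offline mode
if not HF_DATASETS_OFFLINE:
    try:
        head_hf_s3(...)  # arguments elided; a remote availability check
    except Exception:
        pass  # fall back to the local cache
```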
Here is the corresponding issue for `transformers`: https://github.com/huggingface/transformers/issues/10379
Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1939/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1938/comments | https://api.github.com/repos/huggingface/datasets/issues/1938/events | https://github.com/huggingface/datasets/pull/1938 | 815,647,774 | MDExOlB1bGxSZXF1ZXN0NTc5NDQyNDkw | 1,938 | Disallow ClassLabel with no names | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,614,184,677,000 | 1,614,252,449,000 | 1,614,252,449,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1938",
"html_url": "https://github.com/huggingface/datasets/pull/1938",
"diff_url": "https://github.com/huggingface/datasets/pull/1938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1938.patch",
"merged_at": 1614252449000
} | It was possible to create a ClassLabel without specifying the names or the number of classes.
This was causing silent issues, as in #1936, and breaking the conversion methods `str2int` and `int2str`.
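A short illustration of the failure mode being disallowed and the intended usage (a sketch; the first half shows the now-forbidden call):

```python
from datasets import ClassLabel

# previously this was silently accepted and produced a feature whose
# conversion methods were broken; it now raises an error instead:
# label = ClassLabel()  # no names, no num_classes

# with names provided, the conversions behave as expected:
label = ClassLabel(names=["negative", "positive"])
assert label.str2int("positive") == 1
assert label.int2str(0) == "negative"
```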
cc @justin-yan | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1938/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1937/comments | https://api.github.com/repos/huggingface/datasets/issues/1937/events | https://github.com/huggingface/datasets/issues/1937 | 815,163,943 | MDU6SXNzdWU4MTUxNjM5NDM= | 1,937 | CommonGen dataset page shows an error OSError: [Errno 28] No space left on device | {
"login": "yuchenlin",
"id": 10104354,
"node_id": "MDQ6VXNlcjEwMTA0MzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuchenlin",
"html_url": "https://github.com/yuchenlin",
"followers_url": "https://api.github.com/users/yuchenlin/followers",
"following_url": "https://api.github.com/users/yuchenlin/following{/other_user}",
"gists_url": "https://api.github.com/users/yuchenlin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuchenlin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuchenlin/subscriptions",
"organizations_url": "https://api.github.com/users/yuchenlin/orgs",
"repos_url": "https://api.github.com/users/yuchenlin/repos",
"events_url": "https://api.github.com/users/yuchenlin/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuchenlin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Facing the same issue for [Squad](https://huggingface.co/datasets/viewer/?dataset=squad) and [TriviaQA](https://huggingface.co/datasets/viewer/?dataset=trivia_qa) datasets as well.",
"We just fixed the issue, thanks for reporting !"
] | 1,614,149,253,000 | 1,614,337,806,000 | 1,614,337,806,000 | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | The page of the CommonGen dataset, https://huggingface.co/datasets/viewer/?dataset=common_gen, shows:

| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1937/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1937/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1936/comments | https://api.github.com/repos/huggingface/datasets/issues/1936/events | https://github.com/huggingface/datasets/pull/1936 | 814,726,512 | MDExOlB1bGxSZXF1ZXN0NTc4NjY3NTQ4 | 1,936 | [WIP] Adding Support for Reading Pandas Category | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks ! could you maybe add a few tests in test_arrow_dataset.py to make sure from_pandas works as expected with categorical types ?\r\n\r\nIn particular I'm pretty sure that if you now try to `cast` the dataset to the same features at its current features, it will break instead of just being a no-op.\r\nThis is because `features.type` returns an arrow int64 type for the classlabel column instead of the arrow dictionary type that you have in the arrow table. There are two issues in this case:\r\n- it will try to replace the arrow type from dictionary to int64 instead of being a no-op\r\n- it will crash because pyarrow is not able to cast a dictionary to int64 (even if it's actually possible do cast the column by hand by accessing the sub-array of the dictionary array containing the indices/integers)\r\n\r\nIt would be awesome to fix this case ! Ideally the arrow `pa_type` of classlabel ([here](https://github.com/huggingface/datasets/blob/7072e1becd69d421d863374b825e3da4c6551798/src/datasets/features.py#L558)) should be an arrow dictionary type. This should fix the issue. Then we can start working on backward compatibility.\r\n\r\nLet me know if you have questions or if I can help.\r\nIn particular if there is some glue-ing to do I can take care of that if you want ;)\r\n\r\n--------------\r\n\r\nAlso just a few information regarding the functions you mentioned\r\n\r\n`int2str` and `str2int` are used by users to transforms the labels if they want to. Here sine ClassLabel is instantiated without the class names, they would crash. I was about to make a PR to disallow the creation of an empty ClassLabel feature type.\r\nTherefore can you provide class_names= when creating the ClassLabel ?\r\n\r\n`encode_example` is mostly used with a dataset builder (e.g. squad.py) so it's not used when using .from_pandas.\r\n\r\n\r\n",
"Got it - that's super helpful, I was trying to figure out what would break!\r\n\r\nI think there are two issues we're discussing here:\r\n\r\n1. modifying the pa_type of ClassLabel: totally agree with you on that one if that's OK from a back-compat perspective. (i.e. are users of `datasets` not supposed to access or use the .pa_type attribute of ClassLabel?)\r\n2. creating a ClassLabel requires information that's not present on the pa.DictionaryType object: I think the crux of the problem is that at this line (https://github.com/huggingface/datasets/pull/1936/files#diff-54081ede051fd0a7ef65748c481cc06f90209f01bb89968747089d13a2ca052bR933) - you only have access to the `pa_type`, which is `DictionaryType[int8, string]`. I've unpacked it and looked at all of the available methods, and I don't believe that any of the actual values (\"names\") are present - those are stored on the `pyarrow.DictArray.dictionary` attribute (i.e. as data, not on the pyarrow.DataType) - so in order to actually be able to instantiate the ClassLabel with the names= parameter, we need to pass in more information to this method.\r\n\r\nWe *could* mostly accomplish this by modifying https://github.com/huggingface/datasets/pull/1936/files#diff-54081ede051fd0a7ef65748c481cc06f90209f01bb89968747089d13a2ca052bR909 to accept a pyarrow Table in addition to the type, and it's not too difficult to do, but it feels a little bit off to me:\r\n\r\n- It feels a bit off that a \"schema\" definition will change depending on what data gets added to the dataset. In particular, if someone adds rows or concatenates two datasets, the ClassLabel \"names\" will also need to change, right? I think maybe we're getting around this because a Dataset is immutable (I think?) and so any new dataset is freshly constructed, but for example - I think this check wouldn't work for `ClassLabel`s if we were to compare the `Dataset.features` instead of the underlying pyarrow type https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L2664\r\n- To that end I wonder if ClassLabel should actually just be the \"type\" akin to Category, and the \"names\" should be considered \"data\" and not part of the \"type\"? Similar to how pyarrow maintains two data objects - the array of indices and the array of string values.\r\n\r\nWith that in mind, I'm wondering if you *should* allow an empty ClassLabel (and`int2str`, etc. can be updated to have more descriptive error messages if labels aren't provided or inferred), and if the underlying data is a pa.DictionaryType, then the names can be inferred and applied at these points in the code:\r\n- https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L274\r\n- https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L686\r\n- https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L673\r\n\r\nI think perhaps the mismatch here is when the data is stored on disk as an int there should be a convenient way of saying \"this is a dictionary and here are some explicitly provided labels\", whereas when it's stored as a string, we'd ideally like to say \"this is a Category and please condense the representation and automatically infer the labels\".\r\n\r\nSorry for the long comment! Hopefully my thoughts make sense - thanks for taking the time to discuss!",
"Yes that makes sense. I completely forgot that the label names of an arrow Dictionary type were not stored in the type but in the DictionaryArray.\r\n\r\nThis is made me realize that it's actually pretty unpractical and I feel that handling this can add unnecessary complexity in the handling of dtypes.\r\nMore specifically:\r\n- it's not possible to create a DictionaryArray from a call to pyarrow.array with python objects, which is the function we use to convert python objects to pyarrow objects (or we would need to convert the python objects to pandas categorical series beforehand but it doesn't work for nested types)\r\n- casting nested types containing Dictionary types would require a lot of array manipulations since it's not compatible with pyarrow.array.cast\r\n\r\nI feel like the original feature request (support of pandas Categorical) should be addressable without adding so much complexity to the library.\r\n\r\nIf we admit that we don't want to deal with arrow Dictionary type, maybe we can simply convert the pandas categorical series to an int64 series and set the feature type to the right ClassLabel in `from_pandas`. We can have the reverse operation in `to_pandas`. This way we don't need to support the arrow DictionaryType and so we can keep simple/accessible code for conversion from python to arrow and also for type casting. Let me know what you think.\r\n\r\nIn the future depending on the usage of the ClassLabel types with pandas/pyarrow we might reconsider this but for now I believe this simple solution is enough.",
"I like that idea! Let me try working up a PR for this",
"OK! I just whipped up the `from_pandas()` portion of this PR, and it works, though I'm not *super* familiar with the available APIs so I'm not sure if there's a more \"vectorized\" way of doing all of these updates - so happy to get some feedback and iterate!\r\n\r\nApologies for multiple commits - I realized how to solve a few different problems right after I gave up and pushed with the intent to ask for help :-)\r\n\r\nI wanted to get some guidance on how to handle the reverse direction: I think there are two main areas to look at, `.to_pandas()` and also `.set_format('pandas')` and then pulling out a dataframe like so: `dataset[:]`. Is there a single place where I can handle both of these cases at once or do these need to be handled independently?",
"Thanks ! This is awesome :) \r\nCould you also add a test ? There is already `test_to_pandas` in test_arrow_dataset.py\r\nFeel free to complete this test to make sure it works for Categorical :)\r\n\r\nTo make it work with the \"pandas\" formating (when you do `set_format(\"pandas\")` and then query `dataset[0]`, `dataset[:]`, etc.), you can take a look and the `PandasFormatter` in formatting.py\r\nIt takes a pyarrow table as input of its formatting methods (one method for rows, one for columns and one for batches) and returns a pandas DataFrame (or a Series for the method for formatting a column). You can cast to Categorical in each one of the formatter methods and it should work directly when you use a pandas-formatted dataset.\r\n\r\nThis formatter can then also be used in `to_pandas` (currently it does `pa_table.to_pandas()` but `PandasFormatter().format_batch(pa_table)` can be used instead)."
] | 1,614,105,174,000 | 1,615,273,745,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1936",
"html_url": "https://github.com/huggingface/datasets/pull/1936",
"diff_url": "https://github.com/huggingface/datasets/pull/1936.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1936.patch",
"merged_at": null
} | @lhoestq - continuing our conversation from https://github.com/huggingface/datasets/issues/1906#issuecomment-784247014
The goal of this PR is to support `Dataset.from_pandas(df)` where the dataframe contains a Category.
Just the 4-line change below actually does seem to work:
```
>>> from datasets import Dataset
>>> import pandas as pd
>>> df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
>>> ds = Dataset.from_pandas(df)
>>> ds.to_pandas()
0
0 a
1 b
2 c
3 a
>>> ds.to_pandas().dtypes
0 category
dtype: object
```
`save_to_disk`, etc. all seem to work as well. The main things that are theoretically "incorrect" if we leave this as-is are:
```
>>> ds.features.type
StructType(struct<0: int64>)
```
there are a decent number of references to this property in the library, but I can't find anything that seems to actually break as a result of this being int64 vs. dictionary. I think the gist of my question is: a) do we *need* to change the dtype of ClassLabel and have get_nested_type return a pyarrow.DictionaryType instead of int64? and b) do you *want* it to change?
The biggest challenge I see to implementing this correctly is that the data will need to be passed in along with the pyarrow schema when instantiating the ClassLabel (I *think* this is unavoidable, since the type itself doesn't contain the actual label values), which could be a fairly intrusive change - e.g. `from_arrow_schema`'s interface would need to change to include optional arrow data? Once we start going down this path of modifying the public interfaces, I am admittedly feeling a little bit outside of my comfort zone.
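To make that last point concrete, here is a small standalone pyarrow sketch showing that the label values live on the DictionaryArray, not on its type (illustrative only, not `datasets` code):

```python
import pyarrow as pa

arr = pa.array(["a", "b", "c", "a"]).dictionary_encode()
print(arr.type)        # dictionary<values=string, indices=int32, ordered=0>
print(arr.indices)     # the integer codes: [0, 1, 2, 0]
print(arr.dictionary)  # the actual labels ["a", "b", "c"] -- data, not type
```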
Additionally I think `int2str`, `str2int`, and `encode_example` probably won't work - but I can't find any usages of them in the library itself. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1936/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1935/comments | https://api.github.com/repos/huggingface/datasets/issues/1935/events | https://github.com/huggingface/datasets/pull/1935 | 814,623,827 | MDExOlB1bGxSZXF1ZXN0NTc4NTgyMzk1 | 1,935 | add CoVoST2 | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@patrickvonplaten \r\nI removed the mp3 files, dummy_data is much smaller now!"
] | 1,614,097,696,000 | 1,614,190,172,000 | 1,614,189,909,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1935",
"html_url": "https://github.com/huggingface/datasets/pull/1935",
"diff_url": "https://github.com/huggingface/datasets/pull/1935.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1935.patch",
"merged_at": 1614189909000
} | This PR adds the CoVoST2 dataset for speech translation and ASR.
https://github.com/facebookresearch/covost#covost-2
The dataset requires manual download as the download page requests an email address and the URLs are temporary.
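A hedged sketch of how loading could look once the archives have been fetched manually (the config name and path below are illustrative):

```python
from datasets import load_dataset

# "en_de" stands for one of the 36 language-pair configs; data_dir points
# to the manually downloaded data (the path is a placeholder)
ds = load_dataset("covost2", "en_de", data_dir="/path/to/manual/data")
```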
The dummy data is a bit bigger because of the mp3 files and 36 configs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1935/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1935/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1934 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1934/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1934/comments | https://api.github.com/repos/huggingface/datasets/issues/1934/events | https://github.com/huggingface/datasets/issues/1934 | 814,437,190 | MDU6SXNzdWU4MTQ0MzcxOTA= | 1,934 | Add Stanford Sentiment Treebank (SST) | {
"login": "patpizio",
"id": 15801338,
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patpizio",
"html_url": "https://github.com/patpizio",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"repos_url": "https://api.github.com/users/patpizio/repos",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Dataset added in release [1.5.0](https://github.com/huggingface/datasets/releases/tag/1.5.0), I think I can close this."
] | 1,614,084,796,000 | 1,616,089,904,000 | 1,616,089,904,000 | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | I am going to add SST:
- **Name:** The Stanford Sentiment Treebank
- **Description:** The first corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language
- **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- **Data:** https://nlp.stanford.edu/sentiment/index.html
- **Motivation:** Already requested in #353, SST is a popular dataset for Sentiment Classification
What's the difference from the [_SST-2_](https://huggingface.co/datasets/viewer/?dataset=glue&config=sst2) dataset included in GLUE? Essentially, SST-2 is a version of SST where:
- the labels were mapped from real numbers in [0.0, 1.0] to a binary label: {0, 1}
- the labels of the *sub-sentences* were included only in the training set
- the labels in the test set are obfuscated
So there is a lot more information in the original SST. The tricky bit is that the data is scattered across many text files and, for one in particular, I couldn't find the original encoding ([*but I'm not the only one*](https://groups.google.com/g/word2vec-toolkit/c/QIUjLw6RqFk/m/_iEeyt428wkJ) 🎵). The only solution I found was to manually fix all the è, ë, ç and so on in a `utf-8` copy of the text file. I uploaded the result to my Dropbox and I am using that as the main repo for the dataset.
Also, the _sub-sentences_ are built at run-time from the information encoded in several text files, so generating the examples is a bit more cumbersome than usual. Luckily, the dataset is not enormous.
I plan to divide the dataset into 2 configs: one with just the whole sentences and their labels, the other with the sentences _and their sub-sentences_ with their labels. Each config will be split into train, validation and test. Hopefully this makes sense; we may discuss it in the PR I'm going to submit.
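If that layout lands, usage could look roughly like this (the config names below are hypothetical, matching the plan above rather than any released version):

```python
from datasets import load_dataset

# hypothetical config names for the two planned configurations
sentences = load_dataset("sst", "sentences")      # whole sentences only
with_subs = load_dataset("sst", "sub_sentences")  # sentences + sub-sentences
```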
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1934/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1934/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1933 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1933/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1933/comments | https://api.github.com/repos/huggingface/datasets/issues/1933/events | https://github.com/huggingface/datasets/pull/1933 | 814,335,846 | MDExOlB1bGxSZXF1ZXN0NTc4MzQwMzk3 | 1,933 | Use arrow ipc file format | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,614,076,704,000 | 1,614,076,704,000 | null | MEMBER | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1933",
"html_url": "https://github.com/huggingface/datasets/pull/1933",
"diff_url": "https://github.com/huggingface/datasets/pull/1933.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1933.patch",
"merged_at": null
} | According to the [documentation](https://arrow.apache.org/docs/format/Columnar.html?highlight=arrow1#ipc-file-format), it's identical to the streaming format except that it contains the memory offsets of each sample:
> We define a “file format” supporting random access that is built with the stream format. The file starts and ends with a magic string ARROW1 (plus padding). What follows in the file is identical to the stream format. At the end of the file, we write a footer containing a redundant copy of the schema (which is a part of the streaming format) plus memory offsets and sizes for each of the data blocks in the file. This enables random access to any record batch in the file. See File.fbs for the precise details of the file footer.
Since it stores more metadata regarding the positions of the examples in the file, it should enable better example retrieval performance. However, from the discussion in https://github.com/huggingface/datasets/issues/1803 it looks like that's unfortunately not the case. Maybe in the future this will allow speed gains.
I think it's still a good idea to start using it anyway for these reasons:
- in the future we may have speed gains
- it contains the arrow streaming format data
- it's compatible with the pyarrow Dataset implementation (it allows loading remote dataframes, for example) if we want to use it in the future
- it's also the format used by arrow feather if we want to use it in the future
- it's roughly the same size as the streaming format
- it's easy to have backward compatibility with the streaming format
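For reference, a minimal standalone pyarrow sketch of writing and reading the IPC file format (independent of the `datasets` internals touched by this PR):

```python
import pyarrow as pa

table = pa.table({"text": ["a", "b", "c"]})

# IPC *file* format: the streaming format plus a footer with block offsets,
# framed by the ARROW1 magic string
with pa.OSFile("data.arrow", "wb") as sink:
    with pa.ipc.new_file(sink, table.schema) as writer:
        writer.write_table(table)

reloaded = pa.ipc.open_file("data.arrow").read_all()
```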
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1933/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1932/comments | https://api.github.com/repos/huggingface/datasets/issues/1932/events | https://github.com/huggingface/datasets/pull/1932 | 814,326,116 | MDExOlB1bGxSZXF1ZXN0NTc4MzMyMTQy | 1,932 | Fix builder config creation with data_dir | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,614,075,962,000 | 1,614,077,128,000 | 1,614,077,127,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1932",
"html_url": "https://github.com/huggingface/datasets/pull/1932",
"diff_url": "https://github.com/huggingface/datasets/pull/1932.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1932.patch",
"merged_at": 1614077127000
} | The data_dir parameter wasn't taken into account when creating the config_id, so the resulting builder config was considered non-custom. However, a non-custom builder config must not have a name that collides with the predefined builder config names, so this resulted in a `ValueError("Cannot name a custom BuilderConfig the same as an available...")`.
I fixed that by commenting out the line that used to ignore the data_dir when creating the config.
The data_dir was previously ignored, before the introduction of the config id, because we didn't want to change the config name. Now it's fine to take it into account for the config id.
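With the fix, passing a data directory works again; a hedged usage example (the script name and path are placeholders):

```python
from datasets import load_dataset

# before this fix, this raised:
#   ValueError: Cannot name a custom BuilderConfig the same as an available ...
ds = load_dataset("my_dataset_script", data_dir="/path/to/local/data")
```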
Now creating a config with a data_dir works again @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1932/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1931/comments | https://api.github.com/repos/huggingface/datasets/issues/1931/events | https://github.com/huggingface/datasets/pull/1931 | 814,225,074 | MDExOlB1bGxSZXF1ZXN0NTc4MjQ4NTA5 | 1,931 | add m_lama (multilingual lama) dataset | {
"login": "pdufter",
"id": 13961899,
"node_id": "MDQ6VXNlcjEzOTYxODk5",
"avatar_url": "https://avatars.githubusercontent.com/u/13961899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdufter",
"html_url": "https://github.com/pdufter",
"followers_url": "https://api.github.com/users/pdufter/followers",
"following_url": "https://api.github.com/users/pdufter/following{/other_user}",
"gists_url": "https://api.github.com/users/pdufter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdufter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdufter/subscriptions",
"organizations_url": "https://api.github.com/users/pdufter/orgs",
"repos_url": "https://api.github.com/users/pdufter/repos",
"events_url": "https://api.github.com/users/pdufter/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdufter/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi, it seems I am somewhat stuck here. The failed test `ci/circleci: run_dataset_script_tests_pyarrow_1_WIN` seems to be caused by some broken connection (`ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host`). Any help on this is appreciated. \r\n\r\nEdit: Seems to be resolved now.",
"I guess the `dummy_data.zip` is too large. I can reduce the languages that are contained there, but when testing it, it obviously throws an error, as not all files can be found. I guess I can either i) change the default value regarding which languages are loaded or ii) let the `_generate_examples` silently skip any language for which it cannot find files. Both solutions are not really pretty - is there another way around this?",
"Thanks for the review and the constructive comments :) ! I tried to address them, and reduced the number of lines in the dummy data to 1 to reduce its size. "
] | 1,614,067,917,000 | 1,614,592,863,000 | 1,614,592,863,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1931",
"html_url": "https://github.com/huggingface/datasets/pull/1931",
"diff_url": "https://github.com/huggingface/datasets/pull/1931.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1931.patch",
"merged_at": 1614592863000
} | Add a multilingual (machine-translated and automatically generated) version of the LAMA benchmark. For details, see the paper: https://arxiv.org/pdf/2102.00894.pdf | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1931/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1930/comments | https://api.github.com/repos/huggingface/datasets/issues/1930/events | https://github.com/huggingface/datasets/pull/1930 | 814,055,198 | MDExOlB1bGxSZXF1ZXN0NTc4MTAwNzI0 | 1,930 | updated the wino_bias dataset | {
"login": "JieyuZhao",
"id": 22306304,
"node_id": "MDQ6VXNlcjIyMzA2MzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JieyuZhao",
"html_url": "https://github.com/JieyuZhao",
"followers_url": "https://api.github.com/users/JieyuZhao/followers",
"following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions",
"organizations_url": "https://api.github.com/users/JieyuZhao/orgs",
"repos_url": "https://api.github.com/users/JieyuZhao/repos",
"events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/JieyuZhao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @JieyuZhao ! Have you had a chance to add the different configurations ?\r\nThanks again for your help on this !",
"> Hi @JieyuZhao ! Have you had a chance to add the different configurations ?\r\n> Thanks again for your help on this !\r\n\r\nHi @lhoestq Yes, I've updated the code. Now the configuration will have dev/test splits.",
"> Cool thanks !\r\n> This looks perfect this way.\r\n> \r\n> Now we just need to update the dataset_infos.json (it contains the metadata of the dataset) and add dummy data to be able to test this script automatically.\r\n> \r\n> To update the dataset_infos.json you just need delete the current one at `./datasets/wino_biais/dataset_infos.json`, and then run this command:\r\n> \r\n> ```\r\n> datasets-cli test ./datasets/wino_biais --save_infos --all_configs --ignore_verifications\r\n> ```\r\n> \r\n> To add the dummy data there's also a tool to add them automatically.\r\n> First delete the folder at `./datasets/wino_biais/dummy` and then run\r\n> \r\n> ```\r\n> datasets-cli dummy_data ./datasets/wino_biais --auto_generate --match_text_files \"*conll\" --n_lines 15\r\n> ```\r\n> \r\n> Let me know if you have questions :)\r\n> Also don't forget to run `make style` to format the code properly.\r\n\r\nThanks for the instruction! I've updated the metadata and the dummy data and also do the formatting. Please let me know if more is needed. :)"
] | 1,614,049,660,000 | 1,617,809,096,000 | 1,617,809,096,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1930",
"html_url": "https://github.com/huggingface/datasets/pull/1930",
"diff_url": "https://github.com/huggingface/datasets/pull/1930.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1930.patch",
"merged_at": 1617809096000
} | Updated the wino_bias.py script.
- updated the data_url
- added different configurations for different data splits
- added the coreference_cluster to the data features | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1930/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1929/comments | https://api.github.com/repos/huggingface/datasets/issues/1929/events | https://github.com/huggingface/datasets/pull/1929 | 813,929,669 | MDExOlB1bGxSZXF1ZXN0NTc3OTk1MTE4 | 1,929 | Improve typing and style and fix some inconsistencies | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@lhoestq Thanks for the quick review.",
"I merged master to this branch to re-run the CI before merging :)"
] | 1,614,034,061,000 | 1,614,183,374,000 | 1,614,175,434,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1929",
"html_url": "https://github.com/huggingface/datasets/pull/1929",
"diff_url": "https://github.com/huggingface/datasets/pull/1929.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1929.patch",
"merged_at": 1614175433000
} | This PR:
* improves typing (mostly more consistent use of `typing.Optional`)
* `DatasetDict.cleanup_cache_files` now correctly returns a dict
* replaces `dict()` with the corresponding literal
* uses `dict_to_copy.copy()` instead of `dict(dict_to_copy)` for shallow copying | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1929/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1928/comments | https://api.github.com/repos/huggingface/datasets/issues/1928/events | https://github.com/huggingface/datasets/pull/1928 | 813,793,434 | MDExOlB1bGxSZXF1ZXN0NTc3ODgyMDM4 | 1,928 | Updating old cards | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,614,021,964,000 | 1,614,104,365,000 | 1,614,104,365,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1928",
"html_url": "https://github.com/huggingface/datasets/pull/1928",
"diff_url": "https://github.com/huggingface/datasets/pull/1928.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1928.patch",
"merged_at": 1614104365000
} | Updated the cards for [Allocine](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/allocine), [CNN/DailyMail](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/cnn_dailymail), and [SNLI](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/snli). For the most part, the information was just rearranged or rephrased, but the social impact statements are new. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1928/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1927/comments | https://api.github.com/repos/huggingface/datasets/issues/1927/events | https://github.com/huggingface/datasets/pull/1927 | 813,768,935 | MDExOlB1bGxSZXF1ZXN0NTc3ODYxODM5 | 1,927 | Update README.md | {
"login": "JieyuZhao",
"id": 22306304,
"node_id": "MDQ6VXNlcjIyMzA2MzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JieyuZhao",
"html_url": "https://github.com/JieyuZhao",
"followers_url": "https://api.github.com/users/JieyuZhao/followers",
"following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions",
"organizations_url": "https://api.github.com/users/JieyuZhao/orgs",
"repos_url": "https://api.github.com/users/JieyuZhao/repos",
"events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/JieyuZhao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,614,019,894,000 | 1,614,077,565,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1927",
"html_url": "https://github.com/huggingface/datasets/pull/1927",
"diff_url": "https://github.com/huggingface/datasets/pull/1927.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1927.patch",
"merged_at": null
} | Updated the info for the wino_bias dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1927/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1926/comments | https://api.github.com/repos/huggingface/datasets/issues/1926/events | https://github.com/huggingface/datasets/pull/1926 | 813,607,994 | MDExOlB1bGxSZXF1ZXN0NTc3NzI4Mjgy | 1,926 | Fix: Wiki_dpr - add missing scalar quantizer | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,614,007,925,000 | 1,614,008,994,000 | 1,614,008,993,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1926",
"html_url": "https://github.com/huggingface/datasets/pull/1926",
"diff_url": "https://github.com/huggingface/datasets/pull/1926.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1926.patch",
"merged_at": 1614008993000
} | All the prebuilt wiki_dpr indexes already use SQ8; I forgot to update the wiki_dpr script after building them. Now it's finally done.
The scalar quantizer SQ8 doesn't reduce the performance of the index as shown in retrieval experiments on RAG.
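For intuition, here is a minimal sketch of an 8-bit scalar-quantized FAISS index (illustrative only, not the actual wiki_dpr build code; the dimension, factory string, and random vectors are assumptions):
```python
import numpy as np
import faiss

d = 768                                           # DPR embeddings are 768-dimensional
xb = np.random.rand(10_000, d).astype("float32")  # stand-in for real passage embeddings

index = faiss.index_factory(d, "IVF256,SQ8")      # "SQ8" requests the 8-bit scalar quantizer
index.train(xb)                                   # the extra training pass is where build time grows
index.add(xb)

scores, ids = index.search(xb[:5], 10)            # top-10 nearest neighbors for 5 queries
```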
The quantizer reduces the size of the index a lot but increases index building time. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1926/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1925/comments | https://api.github.com/repos/huggingface/datasets/issues/1925/events | https://github.com/huggingface/datasets/pull/1925 | 813,600,902 | MDExOlB1bGxSZXF1ZXN0NTc3NzIyMzc3 | 1,925 | Fix: Wiki_dpr - fix when with_embeddings is False or index_name is "no_index" | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @lhoestq ,\r\n\r\nI am running into an issue now when trying to run RAG. Running exactly as described [here](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage) I get the error below. Wondering if it's related to this.\r\n\r\nRunning Transformers 4.3.2 with datasets installed from source from `master` branch.\r\n\r\n```bash\r\n(venv) sergey_mkrtchyan datasets (master) $ python\r\nPython 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39)\r\n[Clang 6.0 (clang-600.0.57)] on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration\r\n>>> tokenizer = RagTokenizer.from_pretrained(\"facebook/rag-token-nq\")\r\n>>> retriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\nUsing custom data configuration dummy.psgs_w100.nq.no_index-dummy=True,with_index=False\r\nReusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.no_index-dummy=True,with_index=False/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)\r\nUsing custom data configuration dummy.psgs_w100.nq.exact-50b6cda57ff32ab4\r\nReusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.exact-50b6cda57ff32ab4/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)\r\n 0%| | 0/10 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 425, in from_pretrained\r\n return cls(\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 387, in __init__\r\n self.init_retrieval()\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 458, in init_retrieval\r\n self.index.init_index()\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py\", line 284, in init_index\r\n self.dataset = load_dataset(\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/load.py\", line 750, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py\", line 734, in as_dataset\r\n datasets = utils.map_nested(\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/utils/py_utils.py\", line 195, in map_nested\r\n return function(data_struct)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py\", line 769, in _build_single_dataset\r\n post_processed = self._post_process(ds, resources_paths)\r\n File \"/Users/sergey_mkrtchyan/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb/wiki_dpr.py\", line 205, in _post_process\r\n dataset.add_faiss_index(\"embeddings\", custom_index=index)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/arrow_dataset.py\", line 2516, in add_faiss_index\r\n super().add_faiss_index(\r\n File 
\"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py\", line 416, in add_faiss_index\r\n faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=faiss_verbose)\r\n File \"/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py\", line 281, in add_vectors\r\n self.faiss_index.add(vecs)\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/__init__.py\", line 104, in replacement_add\r\n self.add_c(n, swig_ptr(x))\r\n File \"/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/swigfaiss.py\", line 3263, in add\r\n return _swigfaiss.IndexHNSW_add(self, n, x)\r\nRuntimeError: Error in virtual void faiss::IndexHNSW::add(faiss::Index::idx_t, const float *) at /Users/runner/work/faiss-wheels/faiss-wheels/faiss/faiss/IndexHNSW.cpp:356: Error: 'is_trained' failed\r\n>>>\r\n```\r\n\r\nThe error message is hinting that it could be related to this, but I might be wrong. Any ideas?\r\n\r\n\r\nEdit: Can confirm it's working fine with datasets==1.2.0\r\n\r\nDouble Edit: Did some further digging. The issue is related to this commit: 8c5220307c33f00e01c3bf7b8. I opened a separate issue #1941 for proper tracking."
] | 1,614,007,426,000 | 1,614,216,828,000 | 1,614,008,168,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1925",
"html_url": "https://github.com/huggingface/datasets/pull/1925",
"diff_url": "https://github.com/huggingface/datasets/pull/1925.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1925.patch",
"merged_at": 1614008167000
} | Fix the bugs noticed in #1915
There was a bug when `with_embeddings=False` where the configuration name was the same as if `with_embeddings=True`, which led the dataset builder to perform incorrect verifications (for example, it used to expect to download the embeddings even when `with_embeddings=False`).
Another issue was that setting `index_name="no_index"` didn't set `with_index` to False.
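For reference, both fixes target calls of this shape (a sketch mirroring the usage reported in #1915, not a test from this PR):
```python
from datasets import load_dataset

# With the fix, this selects a distinct "no embeddings / no index" config:
# no embedding files are downloaded or verified, and no index is built.
ds = load_dataset(
    "wiki_dpr",
    with_embeddings=False,
    index_name="no_index",  # now correctly implies with_index=False
)
```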
I fixed both of them and added dummy data for those configurations for testing. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1925/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1924/comments | https://api.github.com/repos/huggingface/datasets/issues/1924/events | https://github.com/huggingface/datasets/issues/1924 | 813,599,733 | MDU6SXNzdWU4MTM1OTk3MzM= | 1,924 | Anonymous Dataset Addition (i.e Anonymous PR?) | {
"login": "PierreColombo",
"id": 22492839,
"node_id": "MDQ6VXNlcjIyNDkyODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PierreColombo",
"html_url": "https://github.com/PierreColombo",
"followers_url": "https://api.github.com/users/PierreColombo/followers",
"following_url": "https://api.github.com/users/PierreColombo/following{/other_user}",
"gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions",
"organizations_url": "https://api.github.com/users/PierreColombo/orgs",
"repos_url": "https://api.github.com/users/PierreColombo/repos",
"events_url": "https://api.github.com/users/PierreColombo/events{/privacy}",
"received_events_url": "https://api.github.com/users/PierreColombo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi !\r\nI guess you can add a dataset without the fields that must be kept anonymous, and then update those when the anonymity period is over.\r\nYou can also make the PR from an anonymous org.\r\nPinging @yjernite just to make sure it's ok",
"Hello,\r\nI would prefer to do the reverse: adding a link to an anonymous paper without the people names/institution in the PR. Would it be conceivable ?\r\nCheers\r\n",
"Sure, I think it's ok on our side",
"Yup, sounds good!"
] | 1,614,007,350,000 | 1,614,104,890,000 | null | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Hello,
Thanks a lot for your library.
We plan to submit a paper on OpenReview using the Anonymous setting. Is it possible to add a new dataset without breaking the anonymity, with a link to the paper?
Cheers
@eusip | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1924/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1923/comments | https://api.github.com/repos/huggingface/datasets/issues/1923/events | https://github.com/huggingface/datasets/pull/1923 | 813,363,472 | MDExOlB1bGxSZXF1ZXN0NTc3NTI0MTU0 | 1,923 | Fix save_to_disk with relative path | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,989,639,000 | 1,613,992,964,000 | 1,613,992,963,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1923",
"html_url": "https://github.com/huggingface/datasets/pull/1923",
"diff_url": "https://github.com/huggingface/datasets/pull/1923.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1923.patch",
"merged_at": 1613992963000
} | As noticed in #1919 and #1920 the target directory was not created using `makedirs`, so saving to it raises `FileNotFoundError`. For absolute paths it works, but not for the right reason. This is because the target path was the same as the temporary path where in-memory data are written as an intermediate step.
I added the `makedirs` call using `fs.makedirs` in order to support remote filesystems.
I also fixed the issue with the target path being the temporary path.
I added a test case for relative paths in save_to_disk as well.
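In rough shape, the directory-creation part of the fix looks like the sketch below (a hedged illustration, not the actual `datasets` source; the helper name is made up):
```python
import fsspec

def prepare_save_dir(dataset_path: str, fs=None):
    # Resolve a filesystem for the target path, then create the directory
    # before anything is written; the same call works for remote filesystems.
    if fs is None:
        fs = fsspec.filesystem("file")
    fs.makedirs(dataset_path, exist_ok=True)
    return fs
```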
Thanks to @M-Salti for reporting and investigating | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1923/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1923/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1922/comments | https://api.github.com/repos/huggingface/datasets/issues/1922/events | https://github.com/huggingface/datasets/issues/1922 | 813,140,806 | MDU6SXNzdWU4MTMxNDA4MDY= | 1,922 | How to update the "wino_bias" dataset | {
"login": "JieyuZhao",
"id": 22306304,
"node_id": "MDQ6VXNlcjIyMzA2MzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JieyuZhao",
"html_url": "https://github.com/JieyuZhao",
"followers_url": "https://api.github.com/users/JieyuZhao/followers",
"following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions",
"organizations_url": "https://api.github.com/users/JieyuZhao/orgs",
"repos_url": "https://api.github.com/users/JieyuZhao/repos",
"events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/JieyuZhao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @JieyuZhao !\r\n\r\nYou can edit the dataset card of wino_bias to update the URL via a Pull Request. This would be really appreciated :)\r\n\r\nThe dataset card is the README.md file you can find at https://github.com/huggingface/datasets/tree/master/datasets/wino_bias\r\nAlso the homepage url is also mentioned in the wino_bias.py so feel free to update it there as well.\r\n\r\nYou can create a Pull Request directly from the github interface by editing the files you want and submit a PR, or from a local clone of the repository.\r\n\r\nThanks for noticing !"
] | 1,613,972,379,000 | 1,613,990,159,000 | null | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Hi all,
Thanks for the efforts to collect all the datasets! But I think there is a problem with the wino_bias dataset. The current link is not correct. How can I update that?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1922/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1921 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1921/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1921/comments | https://api.github.com/repos/huggingface/datasets/issues/1921/events | https://github.com/huggingface/datasets/pull/1921 | 812,716,042 | MDExOlB1bGxSZXF1ZXN0NTc3MDEzMDM4 | 1,921 | Standardizing datasets dtypes | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@lhoestq - apologies for the multiple PRs, my previous one (#1905) got mangled due to some merge conflicts that I had trouble resolving so I just cherry-picked my changes onto a fresh branch here."
] | 1,613,858,641,000 | 1,613,987,050,000 | 1,613,987,050,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1921",
"html_url": "https://github.com/huggingface/datasets/pull/1921",
"diff_url": "https://github.com/huggingface/datasets/pull/1921.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1921.patch",
"merged_at": 1613987050000
} | This PR follows up on discussion in #1900 to have an explicit set of basic dtypes for datasets.
This moves away from str(pyarrow.DataType) as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes.
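For context, the `Value` usage being standardized looks like this (feature names are illustrative, not from the PR):
```python
from datasets import Features, Value

features = Features(
    {
        "score": Value("float64"),  # canonical dtype name, even though pyarrow calls the type "double"
        "count": Value("int32"),
        "text": Value("string"),
    }
)
```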
I believe in practice this should be backward compatible, since anyone previously using Value() would only have been able to use dtypes that had an identically named pyarrow factory function, all of which are explicitly supported here. `float32` and `float64` act as the official datasets dtypes, which resolves the tension between `double` being the pyarrow dtype name and `float64` being the pyarrow type factory function. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1921/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1920/comments | https://api.github.com/repos/huggingface/datasets/issues/1920/events | https://github.com/huggingface/datasets/pull/1920 | 812,628,220 | MDExOlB1bGxSZXF1ZXN0NTc2OTQ5NzI2 | 1,920 | Fix save_to_disk issue | {
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"So I was curious why the issue reported at #1919 wasn't caught in [this test](https://github.com/huggingface/datasets/blob/248104c4bdb2e01c036b7578867199191fbff181/tests/test_arrow_dataset.py#L209), so I did some digging.\r\nI tried to save to a temporary directory (just like in the test), like this:\r\n```python\r\nwith tempfile.TemporaryDirectory() as requested_tempdir:\r\n squad.save_to_disk(requested_tempdir) # no error\r\n```\r\nand it executes succesfuly without problems.\r\nSo why does it work, but this doesn't?\r\n```python\r\nsquad.save_to_disk(\"./squad\") # error\r\n```\r\nIt's because `save_to_disk` also creates a temporary directory (let's call it `tempdir`), and since `tempdir` and `requested_tempdir` share the same parents, the `Path.joinpath` method [(here)](https://github.com/huggingface/datasets/blob/248104c4bdb2e01c036b7578867199191fbff181/src/datasets/arrow_dataset.py#L469) will keep `requested_tempdir` as it is and the *train* directory will be created under `requested_tempdir` and hence no errors will arise.\r\n\r\nBut in the second case (where we are saving to a local dir), the *train* directory is created under *squad* which in turn is created under `tempdir`, not under `.` (current dir).\r\n\r\nSo, all of this probably doesn't help solving the issue but it might help creating a better test, and it also makes me wonder why are we saving to a temporary dir in `save_to_disk` anyway? I mean, won't it be removed with all its contents upon execution completion? what's the point then? ",
"CLosing in favor of #1923"
] | 1,613,830,959,000 | 1,613,989,811,000 | 1,613,989,811,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1920",
"html_url": "https://github.com/huggingface/datasets/pull/1920",
"diff_url": "https://github.com/huggingface/datasets/pull/1920.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1920.patch",
"merged_at": null
} | Fixes #1919
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1920/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1919/comments | https://api.github.com/repos/huggingface/datasets/issues/1919/events | https://github.com/huggingface/datasets/issues/1919 | 812,626,872 | MDU6SXNzdWU4MTI2MjY4NzI= | 1,919 | Failure to save with save_to_disk | {
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi thanks for reporting and for proposing a fix :)\r\n\r\nI just merged a fix, feel free to try it from the master branch !",
"Closing since this has been fixed by #1923"
] | 1,613,830,690,000 | 1,614,793,227,000 | 1,614,793,227,000 | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | When I try to save a dataset locally using the `save_to_disk` method I get the error:
```bash
FileNotFoundError: [Errno 2] No such file or directory: '/content/squad/train/squad-train.arrow'
```
To replicate:
1. Install `datasets` from master
2. Run this code:
```python
from datasets import load_dataset
squad = load_dataset("squad") # or any other dataset
squad.save_to_disk("squad") # error here
```
The problem is that the method is not creating a directory with the name `dataset_path` for saving the dataset in (i.e. it's not creating the *train* and *validation* directories in this case). After creating the directory the problem resolves.
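As a stopgap, pre-creating the directories avoids the error (a sketch of the workaround just described; `squad` is the DatasetDict from the snippet above and the split names assume SQuAD):
```python
import os

for split in ("train", "validation"):
    os.makedirs(os.path.join("squad", split), exist_ok=True)
squad.save_to_disk("squad")  # now succeeds
```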
I'll open a PR soon doing that and linking this issue.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1919/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1918 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1918/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1918/comments | https://api.github.com/repos/huggingface/datasets/issues/1918/events | https://github.com/huggingface/datasets/pull/1918 | 812,541,510 | MDExOlB1bGxSZXF1ZXN0NTc2ODg2OTQ0 | 1,918 | Fix QA4MRE download URLs | {
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,806,337,000 | 1,614,000,906,000 | 1,614,000,906,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1918",
"html_url": "https://github.com/huggingface/datasets/pull/1918",
"diff_url": "https://github.com/huggingface/datasets/pull/1918.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1918.patch",
"merged_at": 1614000906000
} | The URLs in the `dataset_infos` and `README` are correct; only the ones in the download script needed updating. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1918/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1917/comments | https://api.github.com/repos/huggingface/datasets/issues/1917/events | https://github.com/huggingface/datasets/issues/1917 | 812,390,178 | MDU6SXNzdWU4MTIzOTAxNzg= | 1,917 | UnicodeDecodeError: windows 10 machine | {
"login": "yosiasz",
"id": 900951,
"node_id": "MDQ6VXNlcjkwMDk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/900951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yosiasz",
"html_url": "https://github.com/yosiasz",
"followers_url": "https://api.github.com/users/yosiasz/followers",
"following_url": "https://api.github.com/users/yosiasz/following{/other_user}",
"gists_url": "https://api.github.com/users/yosiasz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yosiasz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yosiasz/subscriptions",
"organizations_url": "https://api.github.com/users/yosiasz/orgs",
"repos_url": "https://api.github.com/users/yosiasz/repos",
"events_url": "https://api.github.com/users/yosiasz/events{/privacy}",
"received_events_url": "https://api.github.com/users/yosiasz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"upgraded to php 3.9.2 and it works!"
] | 1,613,772,785,000 | 1,613,774,471,000 | 1,613,774,428,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Windows 10
Python 3.6.8
When running
```python
import datasets
oscar_am = datasets.load_dataset("oscar", "unshuffled_deduplicated_am")
print(oscar_am["train"][0])
```
I get the following error
```
file "C:\PYTHON\3.6.8\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 58: character maps to <undefined>
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1917/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1917/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1916/comments | https://api.github.com/repos/huggingface/datasets/issues/1916/events | https://github.com/huggingface/datasets/pull/1916 | 812,291,984 | MDExOlB1bGxSZXF1ZXN0NTc2NjgwNjY5 | 1,916 | Remove unused py_utils objects | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hmmm this one broke master. I'm fixing it.\r\n\r\nMaybe because your branch was outdated ?",
"Sorry @lhoestq, I forgot to update the imports... :/",
"It's fine, the CI should have caught this tbh. Not sure why it did't fail"
] | 1,613,764,285,000 | 1,614,005,816,000 | 1,614,000,769,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1916",
"html_url": "https://github.com/huggingface/datasets/pull/1916",
"diff_url": "https://github.com/huggingface/datasets/pull/1916.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1916.patch",
"merged_at": 1614000769000
} | Remove unused/unnecessary py_utils functions/classes. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1916/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1915/comments | https://api.github.com/repos/huggingface/datasets/issues/1915/events | https://github.com/huggingface/datasets/issues/1915 | 812,229,654 | MDU6SXNzdWU4MTIyMjk2NTQ= | 1,915 | Unable to download `wiki_dpr` | {
"login": "nitarakad",
"id": 18504534,
"node_id": "MDQ6VXNlcjE4NTA0NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/18504534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nitarakad",
"html_url": "https://github.com/nitarakad",
"followers_url": "https://api.github.com/users/nitarakad/followers",
"following_url": "https://api.github.com/users/nitarakad/following{/other_user}",
"gists_url": "https://api.github.com/users/nitarakad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nitarakad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nitarakad/subscriptions",
"organizations_url": "https://api.github.com/users/nitarakad/orgs",
"repos_url": "https://api.github.com/users/nitarakad/repos",
"events_url": "https://api.github.com/users/nitarakad/events{/privacy}",
"received_events_url": "https://api.github.com/users/nitarakad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks for reporting ! This is a bug. For now feel free to set `ignore_verifications=False` in `load_dataset`.\r\nI'm working on a fix",
"I just merged a fix :)\r\n\r\nWe'll do a patch release soon. In the meantime feel free to try it from the master branch\r\nThanks again for reporting !",
"Closing since this has been fixed by #1925"
] | 1,613,758,292,000 | 1,614,793,248,000 | 1,614,793,248,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran:
`curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")`
However, I got the following error:
`datasets.utils.info_utils.UnexpectedDownloadedFile: {'embeddings_index'}`
I tried adding in the flags `with_embeddings=False` and `with_index=False`:
`curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False, embeddings_name="multiset", index_name="no_index")`
But I got the following error:
`raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_5', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_15', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_30', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_36', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_18', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_41', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_13', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_48', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_10', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_23', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_14', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_34', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_43', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_40', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_47', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_3', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_24', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_7', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_33', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_46', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_42', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_27', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_29', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_26', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_22', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_4', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_20', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_39', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_6', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_16', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_8', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_35', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_49', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_17', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_25', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_0', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_38', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_12', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_44', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_1', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_32', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_19', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_31', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_37', 
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_9', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_11', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_21', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_28', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_45', 'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_2'}`
Is there anything else I need to set to download the dataset?
**UPDATE**: just running `curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False)` gives me the same error.
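For anyone hitting this in the meantime, a minimal sketch of the interim workaround suggested in the comments above, assuming `ignore_verifications=True` is the `load_dataset` flag that disables the checksum check:
```python
from datasets import load_dataset

# Same config as above, but skip checksum/split verification so the stale
# checksum metadata does not abort the download.
curr_dataset = load_dataset(
    "wiki_dpr",
    with_embeddings=False,
    with_index=False,
    embeddings_name="multiset",
    index_name="no_index",
    ignore_verifications=True,
)
```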
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1915/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1914/comments | https://api.github.com/repos/huggingface/datasets/issues/1914/events | https://github.com/huggingface/datasets/pull/1914 | 812,149,201 | MDExOlB1bGxSZXF1ZXN0NTc2NTYyNTkz | 1,914 | Fix logging imports and make all datasets use library logger | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,751,154,000 | 1,613,936,883,000 | 1,613,936,883,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1914",
"html_url": "https://github.com/huggingface/datasets/pull/1914",
"diff_url": "https://github.com/huggingface/datasets/pull/1914.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1914.patch",
"merged_at": 1613936883000
} | Fix library relative logging imports and make all datasets use library logger. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1914/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1913/comments | https://api.github.com/repos/huggingface/datasets/issues/1913/events | https://github.com/huggingface/datasets/pull/1913 | 812,127,307 | MDExOlB1bGxSZXF1ZXN0NTc2NTQ0NjQw | 1,913 | Add keep_linebreaks parameter to text loader | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Just so I understand how it can be used in practice, do you have an example showing how to load a text dataset with this option?",
"Sure ! Here is an example:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"text\", keep_linebreaks=True, data_files=...)\r\n```\r\n\r\nI'll update the documentation to explain this",
"Perfect!"
] | 1,613,749,425,000 | 1,613,759,772,000 | 1,613,759,771,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1913",
"html_url": "https://github.com/huggingface/datasets/pull/1913",
"diff_url": "https://github.com/huggingface/datasets/pull/1913.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1913.patch",
"merged_at": 1613759771000
} | As asked in #870 and https://github.com/huggingface/transformers/issues/10269 there should be a parameter to keep the linebreaks when loading a text dataset.
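A quick usage sketch (the `data_files` value is a placeholder):
```python
from datasets import load_dataset

# keep_linebreaks=True keeps the trailing "\n" of each line instead of
# stripping it when building the "text" dataset.
dataset = load_dataset("text", data_files={"train": "train.txt"}, keep_linebreaks=True)
```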
cc @sgugger @jncasey | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1913/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1913/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1912 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1912/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1912/comments | https://api.github.com/repos/huggingface/datasets/issues/1912/events | https://github.com/huggingface/datasets/pull/1912 | 812,034,140 | MDExOlB1bGxSZXF1ZXN0NTc2NDY2ODQx | 1,912 | Update: WMT - use mirror links | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"So much better - thank you for doing that, @lhoestq!",
"Also fixed the `uncorpus` urls for wmt19 ru-en and zh-en for https://github.com/huggingface/datasets/issues/1893",
"Thanks!\r\nCan this be merged sooner? \r\nI manually update it and it works well."
] | 1,613,742,154,000 | 1,614,174,293,000 | 1,614,174,293,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1912",
"html_url": "https://github.com/huggingface/datasets/pull/1912",
"diff_url": "https://github.com/huggingface/datasets/pull/1912.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1912.patch",
"merged_at": 1614174293000
} | As asked in #1892 I created mirrors of the data hosted on statmt.org and updated the wmt scripts.
Now downloading the wmt datasets is blazing fast :)
cc @stas00 @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1912/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1912/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1911 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1911/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1911/comments | https://api.github.com/repos/huggingface/datasets/issues/1911/events | https://github.com/huggingface/datasets/issues/1911 | 812,009,956 | MDU6SXNzdWU4MTIwMDk5NTY= | 1,911 | Saving processed dataset running infinitely | {
"login": "ayubSubhaniya",
"id": 20911334,
"node_id": "MDQ6VXNlcjIwOTExMzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayubSubhaniya",
"html_url": "https://github.com/ayubSubhaniya",
"followers_url": "https://api.github.com/users/ayubSubhaniya/followers",
"following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}",
"gists_url": "https://api.github.com/users/ayubSubhaniya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayubSubhaniya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayubSubhaniya/subscriptions",
"organizations_url": "https://api.github.com/users/ayubSubhaniya/orgs",
"repos_url": "https://api.github.com/users/ayubSubhaniya/repos",
"events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayubSubhaniya/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@thomwolf @lhoestq can you guys please take a look and recommend some solution.",
"am suspicious of this thing? what's the purpose of this? pickling and unplickling\r\n`self = pickle.loads(pickle.dumps(self))`\r\n\r\n```\r\n def save_to_disk(self, dataset_path: str, fs=None):\r\n \"\"\"\r\n Saves a dataset to a dataset directory, or in a filesystem using either :class:`datasets.filesystem.S3FileSystem` or any implementation of ``fsspec.spec.AbstractFileSystem``.\r\n\r\n Args:\r\n dataset_path (``str``): path (e.g. ``dataset/train``) or remote uri (e.g. ``s3://my-bucket/dataset/train``) of the dataset directory where the dataset will be saved to\r\n fs (Optional[:class:`datasets.filesystem.S3FileSystem`,``fsspec.spec.AbstractFileSystem``], `optional`, defaults ``None``): instance of :class:`datasets.filesystem.S3FileSystem` or ``fsspec.spec.AbstractFileSystem`` used to download the files from remote filesystem.\r\n \"\"\"\r\n assert (\r\n not self.list_indexes()\r\n ), \"please remove all the indexes using `dataset.drop_index` before saving a dataset\"\r\n self = pickle.loads(pickle.dumps(self))\r\n ```",
"It's been 24 hours and sadly it's still running. With not a single byte written",
"Tried finding the root cause but was unsuccessful.\r\nI am using lazy tokenization with `dataset.set_transform()`, it works like a charm with almost same performance as pre-compute.",
"Hi ! This very probably comes from the hack you used.\r\n\r\nThe pickling line was added an a sanity check because save_to_disk uses the same assumptions as pickling for a dataset object. The main assumption is that memory mapped pyarrow tables must be reloadable from the disk. In your case it's not possible since you altered the pyarrow table.\r\nI would suggest you to rebuild a valid Dataset object from your new pyarrow table. To do so you must first save your new table to a file, and then make a new Dataset object from that arrow file.\r\n\r\nYou can save the raw arrow table (without all the `datasets.Datasets` metadata) by calling `map` with `cache_file_name=\"path/to/outut.arrow\"` and `function=None`. Having `function=None` makes the `map` write your dataset on disk with no data transformation.\r\n\r\nOnce you have your new arrow file, load it with `datasets.Dataset.from_file` to have a brand new Dataset object :)\r\n\r\nIn the future we'll have a better support for the fast filtering method from pyarrow so you don't have to do this very unpractical workaround. Since it breaks somes assumptions regarding the core behavior of Dataset objects, this is very discouraged.",
"Thanks, @lhoestq for your response. Will try your solution and let you know."
] | 1,613,740,159,000 | 1,614,065,684,000 | null | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | I have a text dataset of size 220M.
For pre-processing, I need to tokenize it and filter out rows with overly long sequences.
My tokenization took roughly 3 hrs. I used map() with batch size 1024 and multiprocessing with 96 processes.
The filter() function was way too slow, so I used a hack to call pyarrow's table filter function directly, which is damn fast. Mentioned [here](https://github.com/huggingface/datasets/issues/1796):
```dataset._data = dataset._data.filter(...)```
It took 1 hr for the filter.
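For reference, a sketch of the safer variant described in the maintainers' comments above: materialize the modified table to a fresh arrow file and rebuild the Dataset from it (paths are placeholders):
```python
from datasets import Dataset

# function=None makes map() write the dataset to disk with no transformation,
# producing a clean arrow file that a fresh Dataset object can memory-map.
dataset.map(function=None, cache_file_name="path/to/output.arrow")
dataset = Dataset.from_file("path/to/output.arrow")
```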
Then I used `save_to_disk()` on the processed dataset, and it has been running forever.
I have been waiting for 8 hrs and it has not written a single byte.
In fact, it has actually read more than 100GB from disk; the screenshot below shows the stats from `iotop`.
The second process is the one.
<img width="1672" alt="Screenshot 2021-02-19 at 6 36 53 PM" src="https://user-images.githubusercontent.com/20911334/108508197-7325d780-72e1-11eb-8369-7c057d137d81.png">
I am not able to figure out whether this is an issue with the datasets library or whether it is due to my hack for the filter() function. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1911/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1910/comments | https://api.github.com/repos/huggingface/datasets/issues/1910/events | https://github.com/huggingface/datasets/pull/1910 | 811,697,108 | MDExOlB1bGxSZXF1ZXN0NTc2MTg0MDQ3 | 1,910 | Adding CoNLLpp dataset. | {
"login": "ZihanWangKi",
"id": 21319243,
"node_id": "MDQ6VXNlcjIxMzE5MjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/21319243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZihanWangKi",
"html_url": "https://github.com/ZihanWangKi",
"followers_url": "https://api.github.com/users/ZihanWangKi/followers",
"following_url": "https://api.github.com/users/ZihanWangKi/following{/other_user}",
"gists_url": "https://api.github.com/users/ZihanWangKi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZihanWangKi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZihanWangKi/subscriptions",
"organizations_url": "https://api.github.com/users/ZihanWangKi/orgs",
"repos_url": "https://api.github.com/users/ZihanWangKi/repos",
"events_url": "https://api.github.com/users/ZihanWangKi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZihanWangKi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"It looks like this PR now includes changes to many other files than the ones for CoNLLpp.\r\n\r\nTo fix that feel free to create another branch and another PR.\r\n\r\nThis was probably caused by a git rebase. You can avoid this issue by using git merge if you've already pushed your branch."
] | 1,613,711,550,000 | 1,614,895,367,000 | 1,614,895,367,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1910",
"html_url": "https://github.com/huggingface/datasets/pull/1910",
"diff_url": "https://github.com/huggingface/datasets/pull/1910.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1910.patch",
"merged_at": null
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1910/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1907/comments | https://api.github.com/repos/huggingface/datasets/issues/1907/events | https://github.com/huggingface/datasets/issues/1907 | 811,520,569 | MDU6SXNzdWU4MTE1MjA1Njk= | 1,907 | DBPedia14 Dataset Checksum bug? | {
"login": "francisco-perez-sorrosal",
"id": 918006,
"node_id": "MDQ6VXNlcjkxODAwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francisco-perez-sorrosal",
"html_url": "https://github.com/francisco-perez-sorrosal",
"followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers",
"following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}",
"gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions",
"organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs",
"repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos",
"events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}",
"received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi ! :)\r\n\r\nThis looks like the same issue as https://github.com/huggingface/datasets/issues/1856 \r\nBasically google drive has quota issues that makes it inconvenient for downloading files.\r\n\r\nIf the quota of a file is exceeded, you have to wait 24h for the quota to reset (which is painful).\r\n\r\nThe error says that the checksum of the downloaded file doesn't match because google drive returns a text file with the \"Quota Exceeded\" error instead of the actual data file.",
"Thanks @lhoestq! Yes, it seems back to normal after a couple of days."
] | 1,613,687,148,000 | 1,614,036,125,000 | 1,614,036,124,000 | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Hi there!!!
I've been successfully using the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase for the last couple of weeks, but in the last couple of days I've started getting this error:
```
Traceback (most recent call last):
File "./conditional_classification/basic_pipeline.py", line 178, in <module>
main()
File "./conditional_classification/basic_pipeline.py", line 128, in main
corpus.load_data(limit_train_examples_per_class=args.data_args.train_examples_per_class,
File "/home/fp/dev/conditional_classification/conditional_classification/datasets_base.py", line 83, in load_data
datasets = load_dataset(self.name, split=dataset_split)
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/load.py", line 609, in load_dataset
builder_instance.download_and_prepare(
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 526, in download_and_prepare
self._download_and_prepare(
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 586, in _download_and_prepare
verify_checksums(
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k']
```
I've seen this has happened before in other datasets as reported in #537.
I've tried clearing my cache and calling `load_dataset` again, but it is still not working. The same codebase successfully downloads and uses other datasets (e.g. AGNews) without any problem, so I guess something has happened specifically to the DBPedia dataset in the last few days.
Can you please check if there's a problem with the checksums?
Or is this related to something else? I've seen that the path in the cache for the dataset is `/home/fp/.cache/huggingface/datasets/d_bpedia14/dbpedia_14/2.0.0/a70413e39e7a716afd0e90c9e53cb053691f56f9ef5fe317bd07f2c368e8e897...` and includes `d_bpedia14` instead of `dbpedia_14`. Was this maybe a bug introduced recently?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1907/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1906/comments | https://api.github.com/repos/huggingface/datasets/issues/1906/events | https://github.com/huggingface/datasets/issues/1906 | 811,405,274 | MDU6SXNzdWU4MTE0MDUyNzQ= | 1,906 | Feature Request: Support for Pandas `Categorical` | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"We already have a ClassLabel type that does this kind of mapping between the label ids (integers) and actual label values (strings).\r\n\r\nI wonder if actually we should use the DictionaryType from Arrow and the Categorical type from pandas for the `datasets` ClassLabel feature type.\r\nCurrently ClassLabel corresponds to `pa.int64()` in pyarrow and `dtype('int64')` in pandas (so the label names are lost during conversions).\r\n\r\nWhat do you think ?",
"Now that I've heard you explain ClassLabel, that makes a lot of sense! While DictionaryType for Arrow (I think) can have arbitrarily typed keys, so it won't cover all potential cases, pandas' Category is *probably* the most common use for that pyarrow type, and ClassLabel should match that perfectly?\r\n\r\nOther thoughts:\r\n\r\n- changing the resulting patype on ClassLabel might be backward-incompatible? I'm not totally sure if users of the `datasets` library tend to directly access the `patype` attribute (I don't think we really do, but we haven't been using it for very long yet).\r\n- would ClassLabel's dtype change to `dict[int64, string]`? It seems like in practice a ClassLabel (when not explicitly specified) would be constructed from the DictionaryType branch of `generate_from_arrow_type`, so it's not totally clear to me that anyone ever actually accesses/uses that dtype?\r\n- I don't quite know how `.int2str` and `.str2int` are used in practice - would those be kept? Perhaps the implementation might actually be substantially smaller if we can just delegate to pyarrow's dict methods?\r\n\r\nAnother idea that just occurred to me: add a branch in here to generate a ClassLabel if the dict key is int64 and the values are string: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L932 , and then don't touch anything else.\r\n\r\nIn practice, I don't think this would be backward-incompatible in a way anyone would care about since the current behavior just throws an exception, and this way, we could support *reading* a pandas Categorical into a `Dataset` as a ClassLabel. I *think* from there, while it would require some custom glue it wouldn't be too hard to convert the ClassLabel into a pandas Category if we want to go back - I think this would improve on the current behavior without risking changing the behavior of ClassLabel in a backward-incompat way.\r\n\r\nThoughts? I'm not sure if this is overly cautious. Whichever approach you think is better, I'd be happy to take it on!\r\n",
"I think we can first keep the int64 precision but with an arrow Dictionary for ClassLabel, and focus on the connection with arrow and pandas.\r\n\r\nIn this scope, I really like the idea of checking for the dictionary type:\r\n\r\n> Another idea that just occurred to me: add a branch in here to generate a ClassLabel if the dict key is int64 and the values are string: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L932 , and then don't touch anything else.\r\n\r\nThis looks like a great start.\r\n\r\nThen as you said we'd have to add the conversion from classlabel to the correct arrow dictionary type. Arrow is already able to convert from arrow Dictionary to pandas Categorical so it should be enough.\r\n\r\nI can see two things that we must take case of to make this change backward compatible:\r\n- first we must still be able to load an arrow file with arrow int64 dtype and `datasets` ClassLabel type without crashing. This can be fixed by casting the arrow int64 array to an arrow Dictionary array on-the-fly when loading the table in the ArrowReader.\r\n- then we still have to return integers when accessing examples from a ClassLabel column. Currently it would return the strings values since it's based on the pandas behavior for converting from pandas to python/numpy. To do so we just have to adapt the python/numpy extractors in formatting.py (it takes care of converting an arrow table to a dictionary of python objects by doing arrow table -> pandas dataframe -> python dictionary)\r\n\r\nAny help on this matter is very much welcome :)"
] | 1,613,677,565,000 | 1,614,091,130,000 | null | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws NotImplementedError
# TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table
```
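For context, a quick check (standard pandas/pyarrow behavior) showing that Arrow itself already round-trips the Categorical through its dictionary type:
```python
import pandas as pd
import pyarrow

df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
table = pyarrow.Table.from_pandas(df)
print(table.schema)              # the column is a dictionary<values=string, indices=int8> type
print(table.to_pandas().dtypes)  # and converts back to a pandas category dtype
```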
I'm curious if https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L796 could be built out in a way similar to `Sequence`?
e.g. a `Map` class (or whatever name the maintainers might prefer) that can accept:
```
index_type = generate_from_arrow_type(pa_type.index_type)
value_type = generate_from_arrow_type(pa_type.value_type)
```
and then additional code points to modify:
- FeatureType: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L694
- A branch to handle Map in get_nested_type: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L719
- I don't quite understand what `encode_nested_example` does but perhaps a branch there? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L755
- Similarly, I don't quite understand why `Sequence` is used this way in `generate_from_dict`, but perhaps a branch here? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L775
I couldn't find other usages of `Sequence` outside of defining specific datasets, so I'm not sure if that's a comprehensive set of touchpoints. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1906/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1905 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1905/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1905/comments | https://api.github.com/repos/huggingface/datasets/issues/1905/events | https://github.com/huggingface/datasets/pull/1905 | 811,384,174 | MDExOlB1bGxSZXF1ZXN0NTc1OTIxMDk1 | 1,905 | Standardizing datasets.dtypes | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Also - I took a stab at updating the docs, but I'm not sure how to actually check the outputs to see if it's formatted properly."
] | 1,613,675,731,000 | 1,613,858,490,000 | 1,613,858,490,000 | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1905",
"html_url": "https://github.com/huggingface/datasets/pull/1905",
"diff_url": "https://github.com/huggingface/datasets/pull/1905.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1905.patch",
"merged_at": null
} | This PR was further branched off of jdy-str-to-pyarrow-parsing, so it depends on https://github.com/huggingface/datasets/pull/1900 going first for the diff to be up-to-date (I'm not sure if there's a way for me to use jdy-str-to-pyarrow-parsing as a base branch while having it appear in the pull requests here).
This moves away from `str(pyarrow.DataType)` as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes.
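To make that concrete, a hypothetical sketch of what such an explicit mapping could look like (the names and the exact dtype list here are illustrative, not the PR's actual code):
```python
import pyarrow as pa

# Explicit allowlist of supported Value dtypes instead of deriving the type
# from str(pyarrow.DataType) via a pyarrow factory-function lookup.
_STR_TO_PA_TYPE = {
    "bool": pa.bool_(),
    "int32": pa.int32(),
    "int64": pa.int64(),
    "float32": pa.float32(),
    "float64": pa.float64(),
    "string": pa.string(),
    "timestamp[ns]": pa.timestamp("ns"),
}

def string_to_arrow(dtype: str) -> pa.DataType:
    if dtype not in _STR_TO_PA_TYPE:
        raise ValueError(f"Unsupported dtype: {dtype}")
    return _STR_TO_PA_TYPE[dtype]
```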
I believe in practice this should be backward compatible, since anyone previously using Value() would only have been able to use dtypes that had an identically named pyarrow factory function, which are all explicitly supported here. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1905/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1904 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1904/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1904/comments | https://api.github.com/repos/huggingface/datasets/issues/1904/events | https://github.com/huggingface/datasets/pull/1904 | 811,260,904 | MDExOlB1bGxSZXF1ZXN0NTc1ODE4MjA0 | 1,904 | Fix to_pandas for boolean ArrayXD | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks!"
] | 1,613,665,846,000 | 1,613,668,203,000 | 1,613,668,201,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1904",
"html_url": "https://github.com/huggingface/datasets/pull/1904",
"diff_url": "https://github.com/huggingface/datasets/pull/1904.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1904.patch",
"merged_at": 1613668200000
} | As noticed in #1887, the conversion of a dataset with boolean ArrayXD feature types fails because the underlying ListArray conversion to numpy requires `zero_copy_only=False`.
zero copy is available for all primitive types except booleans
see https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy
and https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22
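A minimal illustration of the pyarrow behavior in question:
```python
import pyarrow as pa

arr = pa.array([True, False, True])  # Arrow stores booleans bit-packed

print(arr.to_numpy(zero_copy_only=False))  # works: copies into one byte per value

try:
    arr.to_numpy()  # zero_copy_only defaults to True
except pa.ArrowInvalid as err:
    print("zero-copy conversion is impossible for bit-packed booleans:", err)
```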
cc @SBrandeis | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1904/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1903/comments | https://api.github.com/repos/huggingface/datasets/issues/1903/events | https://github.com/huggingface/datasets/pull/1903 | 811,145,531 | MDExOlB1bGxSZXF1ZXN0NTc1NzIwOTk2 | 1,903 | Initial commit for the addition of TIMIT dataset | {
"login": "vrindaprabhu",
"id": 16264631,
"node_id": "MDQ6VXNlcjE2MjY0NjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/16264631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vrindaprabhu",
"html_url": "https://github.com/vrindaprabhu",
"followers_url": "https://api.github.com/users/vrindaprabhu/followers",
"following_url": "https://api.github.com/users/vrindaprabhu/following{/other_user}",
"gists_url": "https://api.github.com/users/vrindaprabhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vrindaprabhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vrindaprabhu/subscriptions",
"organizations_url": "https://api.github.com/users/vrindaprabhu/orgs",
"repos_url": "https://api.github.com/users/vrindaprabhu/repos",
"events_url": "https://api.github.com/users/vrindaprabhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vrindaprabhu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@patrickvonplaten could you please review and help me close this PR?",
"@lhoestq Thank you so much for your comments and for patiently reviewing the code. Have _hopefully_ included all the suggested changes. Let me know if any more changes are required.\r\n\r\nSorry the code had lots of silly errors from my side!:' Will be more careful from next time! :)\r\n\r\n\r\n"
] | 1,613,658,192,000 | 1,614,591,552,000 | 1,614,591,552,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1903",
"html_url": "https://github.com/huggingface/datasets/pull/1903",
"diff_url": "https://github.com/huggingface/datasets/pull/1903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1903.patch",
"merged_at": 1614591552000
} | The points below need to be addressed:
- Creation of dummy dataset is failing
- Need to check on the data representation
- License is not creative commons. Copyright: Portions © 1993 Trustees of the University of Pennsylvania
Also, the links (_except the download_) point to the AMI corpus! ;-)
@patrickvonplaten Requesting your comments, will be happy to address them! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1903/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1903/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1902/comments | https://api.github.com/repos/huggingface/datasets/issues/1902/events | https://github.com/huggingface/datasets/pull/1902 | 810,931,171 | MDExOlB1bGxSZXF1ZXN0NTc1NTQwMDM1 | 1,902 | Fix setimes_2 wmt urls | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,641,346,000 | 1,613,642,141,000 | 1,613,642,141,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1902",
"html_url": "https://github.com/huggingface/datasets/pull/1902",
"diff_url": "https://github.com/huggingface/datasets/pull/1902.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1902.patch",
"merged_at": 1613642141000
} | Continuation of #1901
Some other URLs were missing https. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1902/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1901/comments | https://api.github.com/repos/huggingface/datasets/issues/1901/events | https://github.com/huggingface/datasets/pull/1901 | 810,845,605 | MDExOlB1bGxSZXF1ZXN0NTc1NDY5MDUy | 1,901 | Fix OPUS dataset download errors | {
"login": "YangWang92",
"id": 3883941,
"node_id": "MDQ6VXNlcjM4ODM5NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3883941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YangWang92",
"html_url": "https://github.com/YangWang92",
"followers_url": "https://api.github.com/users/YangWang92/followers",
"following_url": "https://api.github.com/users/YangWang92/following{/other_user}",
"gists_url": "https://api.github.com/users/YangWang92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YangWang92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YangWang92/subscriptions",
"organizations_url": "https://api.github.com/users/YangWang92/orgs",
"repos_url": "https://api.github.com/users/YangWang92/repos",
"events_url": "https://api.github.com/users/YangWang92/events{/privacy}",
"received_events_url": "https://api.github.com/users/YangWang92/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,633,981,000 | 1,613,660,840,000 | 1,613,641,161,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1901",
"html_url": "https://github.com/huggingface/datasets/pull/1901",
"diff_url": "https://github.com/huggingface/datasets/pull/1901.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1901.patch",
"merged_at": 1613641161000
} | Replace http with https.
https://github.com/huggingface/datasets/issues/854
https://discuss.huggingface.co/t/cannot-download-wmt16/2081
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1901/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1900 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1900/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1900/comments | https://api.github.com/repos/huggingface/datasets/issues/1900/events | https://github.com/huggingface/datasets/pull/1900 | 810,512,488 | MDExOlB1bGxSZXF1ZXN0NTc1MTkxNTc3 | 1,900 | Issue #1895: Bugfix for string_to_arrow timestamp[ns] support | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"OK! Thank you for the review - I will follow up with a separate PR for the comments here (https://github.com/huggingface/datasets/pull/1900#discussion_r578319725)!"
] | 1,613,593,564,000 | 1,613,759,231,000 | 1,613,759,231,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1900",
"html_url": "https://github.com/huggingface/datasets/pull/1900",
"diff_url": "https://github.com/huggingface/datasets/pull/1900.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1900.patch",
"merged_at": 1613759231000
} | Should resolve https://github.com/huggingface/datasets/issues/1895
The main part of this PR adds additional parsing in `string_to_arrow` to convert the timestamp dtypes that result from `str(pa_type)` back into the pa.DataType TimestampType.
While adding unit tests, I noticed that support for the double/float types also doesn't invert correctly, so I added them, which I believe would hypothetically make this section of `Value` redundant:
```
def __post_init__(self):
if self.dtype == "double": # fix inferred type
self.dtype = "float64"
if self.dtype == "float": # fix inferred type
self.dtype = "float32"
```
However, since I think Value.dtype is part of the public interface, removing that would result in a backward-incompatible change, so I didn't muck with that.
The rest of the PR consists of docstrings that I added while developing locally so I could keep track of which functions were supposed to be inverses of each other, and thought I'd include them initially in case you want to keep them around, but I'm happy to delete or remove any of them at your request! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1900/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1899/comments | https://api.github.com/repos/huggingface/datasets/issues/1899/events | https://github.com/huggingface/datasets/pull/1899 | 810,308,332 | MDExOlB1bGxSZXF1ZXN0NTc1MDIxMjc4 | 1,899 | Fix: ALT - fix duplicated examples in alt-parallel | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,577,236,000 | 1,613,582,449,000 | 1,613,582,449,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1899",
"html_url": "https://github.com/huggingface/datasets/pull/1899",
"diff_url": "https://github.com/huggingface/datasets/pull/1899.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1899.patch",
"merged_at": 1613582449000
} | As noticed in #1898 by @10-zin, the examples of the `alt-parallel` configurations all have the same values for the `translation` field.
This was due to a bad copy of a Python dict.
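For context, a minimal sketch of the general shared-dict bug pattern (illustrative only, not the actual ALT script code):
```python
rows = [{"en": "hello", "ro": "salut"}, {"en": "bye", "ro": "pa"}]

# Buggy: `example` is one shared dict, mutated in place on every iteration,
# so all stored examples end up holding the last row's values.
example = {}
buggy = []
for row in rows:
    example.update(row)
    buggy.append(example)

# Fixed: copy the dict so each example is independent.
fixed = [dict(row) for row in rows]

print(buggy[0] is buggy[1])  # True -> duplicated examples
print(fixed[0] is fixed[1])  # False
```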
This PR fixes that. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1899/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1898/comments | https://api.github.com/repos/huggingface/datasets/issues/1898/events | https://github.com/huggingface/datasets/issues/1898 | 810,157,251 | MDU6SXNzdWU4MTAxNTcyNTE= | 1,898 | ALT dataset has repeating instances in all splits | {
"login": "10-zin",
"id": 33179372,
"node_id": "MDQ6VXNlcjMzMTc5Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/33179372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/10-zin",
"html_url": "https://github.com/10-zin",
"followers_url": "https://api.github.com/users/10-zin/followers",
"following_url": "https://api.github.com/users/10-zin/following{/other_user}",
"gists_url": "https://api.github.com/users/10-zin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/10-zin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/10-zin/subscriptions",
"organizations_url": "https://api.github.com/users/10-zin/orgs",
"repos_url": "https://api.github.com/users/10-zin/repos",
"events_url": "https://api.github.com/users/10-zin/events{/privacy}",
"received_events_url": "https://api.github.com/users/10-zin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks for reporting. This looks like a very bad issue. I'm looking into it",
"I just merged a fix, we'll do a patch release soon. Thanks again for reporting, and sorry for the inconvenience.\r\nIn the meantime you can load `ALT` using `datasets` from the master branch",
"Thanks!!! works perfectly in the bleading edge master version",
"Closed by #1899"
] | 1,613,566,302,000 | 1,613,715,526,000 | 1,613,715,526,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/
It seemed like a great dataset for some experiments I wanted to carry out, especially since it's medium-sized and has all splits.
Would be great if this could be fixed :)
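A quick way to check the duplication (a sketch; it assumes the `alt-parallel` config name):
```python
from datasets import load_dataset

# Load the parallel config and compare two rows; if the loader reuses one
# dict across examples, their `translation` fields come out identical.
ds = load_dataset("alt", "alt-parallel", split="train")
print(ds[0]["translation"] == ds[1]["translation"])  # True when the bug is present
```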
Added a snapshot of the contents from the `explore-dataset` feature, for quick reference.

| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1898/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1897/comments | https://api.github.com/repos/huggingface/datasets/issues/1897/events | https://github.com/huggingface/datasets/pull/1897 | 810,113,263 | MDExOlB1bGxSZXF1ZXN0NTc0ODU3MTIy | 1,897 | Fix PandasArrayExtensionArray conversion to native type | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,562,504,000 | 1,613,567,716,000 | 1,613,567,715,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1897",
"html_url": "https://github.com/huggingface/datasets/pull/1897",
"diff_url": "https://github.com/huggingface/datasets/pull/1897.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1897.patch",
"merged_at": 1613567715000
} | To make the conversion to CSV work in #1887, we need the PandasArrayExtensionArray used for multidimensional numpy arrays to be convertible to pandas native types.
However, previously pandas.core.internals.ExtensionBlock.to_native_types would fail with a PandasExtensionArray because:
1. the PandasExtensionArray.isna method was wrong
2. the conversion of a PandasExtensionArray to a numpy array with dtype=object was returning a multidimensional array, while pandas expects a 1D array in this case (more info [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.api.extensions.ExtensionArray.html#pandas.api.extensions.ExtensionArray)); see the sketch below
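To illustrate point 2, a minimal sketch (with assumed shapes; not the library's actual code) of producing the 1D object-dtype array pandas expects from multidimensional data:
```python
import numpy as np

values = np.arange(12).reshape(3, 4)  # multidimensional extension data

# np.asarray(values, dtype=object) keeps the (3, 4) shape, which pandas
# rejects; instead, build a 1D array whose elements are the row arrays.
out = np.empty(len(values), dtype=object)
out[:] = list(values)
print(out.shape)  # (3,) -- 1D, as pandas expects
```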
I fixed these two issues, and now the conversion to native types works, as does the export to CSV.
cc @SBrandeis | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1897/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1897/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1895/comments | https://api.github.com/repos/huggingface/datasets/issues/1895/events | https://github.com/huggingface/datasets/issues/1895 | 809,630,271 | MDU6SXNzdWU4MDk2MzAyNzE= | 1,895 | Bug Report: timestamp[ns] not recognized | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks for reporting !\r\n\r\nYou're right, `string_to_arrow` should be able to take `\"timestamp[ns]\"` as input and return the right pyarrow timestamp type.\r\nFeel free to suggest a fix for `string_to_arrow` and open a PR if you want to contribute ! This would be very appreciated :)\r\n\r\nTo give you more context:\r\n\r\nAs you may know we define the features types of a dataset using the `Features` object in combination with feature types like `Value`. For example\r\n```python\r\nfeatures = Features({\r\n \"age\": Value(\"int32\")\r\n})\r\n```\r\nHowever under the hood we are actually using pyarrow to store the data, and so we have a mapping between the feature types of `datasets` and the types of pyarrow.\r\n\r\nFor example, the `Value` feature types are created from a pyarrow type with `Value(str(pa_type))`.\r\nHowever it looks like the conversion back to a pyarrow type doesn't work with `\"timestamp[ns]\"`.\r\nThis is the `string_to_arrow` function you highlighted that does this conversion, so we should fix that.\r\n\r\n",
"Thanks for the clarification @lhoestq !\r\n\r\nThis may be a little bit of a stupid question, but I wanted to clarify one more thing before I took a stab at this:\r\n\r\nWhen the features get inferred, I believe they already have a pyarrow schema (https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L234).\r\n\r\nWe then convert it to a string (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L778) only to convert it back into the arrow type (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L143, and https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L35). Is there a reason for this round-trip?\r\n\r\nI'll open a PR later to add `timestamp` support to `string_to_arrow`, but I'd be curious to understand since it feels like there may be some opportunities to simplify!",
"The objective in terms of design is to make it easy to create Features in a pythonic way. So for example we use a string to define a Value type.\r\nThat's why when inferring the Features from an arrow schema we have to find the right string definitions for Value types. I guess we could also have a constructor `Value.from_arrow_type` to avoid recreating the arrow type, but this could create silent errors if the pyarrow type doesn't have a valid mapping with the string definition. The \"round-trip\" is used to enforce that the ground truth is the string definition, not the pyarrow type, and also as a sanity check.\r\n\r\nLet me know if that makes sense ",
"OK I think I understand now:\r\n\r\nFeatures are datasets' internal representation of a schema type, distinct from pyarrow's schema.\r\nValue() corresponds to pyarrow's \"primitive\" types (e.g. `int` or `string`, but not things like `list` or `dict`).\r\n`get_nested_type()` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L698) and `generate_from_arrow_type()` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L778) *should* be inverses of each other, and similarly, for the primitive values, `string_to_arrow()` and `Value.__call__` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L146) should be inverses of each other?\r\n\r\nThanks for taking the time to answer - I just wanted to make sure I understood before opening a PR so I'm not disrupting anything about how the codebase is expected to work!",
"Yes you're totally right :)"
] | 1,613,507,884,000 | 1,613,759,231,000 | 1,613,759,231,000 | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Repro:
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type.
```
The factory function seems to be just "timestamp": https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp
It seems like https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L36-L43 could have a little bit of additional structure for handling these cases? I'd be happy to take a shot at opening a PR if I could receive some guidance on whether parsing something like `timestamp[ns]` and resolving it to timestamp('ns') is the goal of this method.
Alternatively, if I'm using this incorrectly (e.g. is the expectation that we always provide a schema when timestamps are involved?), that would be very helpful to know as well!
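For reference, the kind of additional parsing I have in mind (a hypothetical sketch with illustrative names; the unit regex and the fallback lookup are my assumptions, not the actual method):
```python
import re
import pyarrow as pa

def string_to_arrow_with_timestamps(datasets_dtype: str) -> pa.DataType:
    # Handle parametrized types such as "timestamp[ns]" or "timestamp[us]".
    match = re.fullmatch(r"timestamp\[(s|ms|us|ns)\]", datasets_dtype)
    if match:
        return pa.timestamp(match.group(1))
    # Fall back to the existing factory lookup for simple types like "int32".
    if hasattr(pa, datasets_dtype):
        return getattr(pa, datasets_dtype)()
    raise ValueError(f"{datasets_dtype} doesn't seem to be a pyarrow data type.")

print(string_to_arrow_with_timestamps("timestamp[ns]"))  # timestamp[ns]
```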
```
$ pip list # only the relevant libraries/versions
datasets 1.2.1
pandas 1.0.3
pyarrow 3.0.0
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1895/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1894/comments | https://api.github.com/repos/huggingface/datasets/issues/1894/events | https://github.com/huggingface/datasets/issues/1894 | 809,609,654 | MDU6SXNzdWU4MDk2MDk2NTQ= | 1,894 | benchmarking against MMapIndexedDataset | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi sam !\r\nIndeed we can expect the performances to be very close since both MMapIndexedDataset and the `datasets` implem use memory mapping. With memory mapping what determines the I/O performance is the speed of your hard drive/SSD.\r\n\r\nIn terms of performance we're pretty close to the optimal speed for reading text, even though I found recently that we could still slightly improve speed for big datasets (see [here](https://github.com/huggingface/datasets/issues/1803)).\r\n\r\nIn terms of number of examples and example sizes, the only limit is the available disk space you have.\r\n\r\nI haven't used `psrecord` yet but it seems to be a very interesting tool for benchmarking. Currently for benchmarks we only have github actions to avoid regressions in terms of speed. But it would be cool to have benchmarks with comparisons with other dataset tools ! This would be useful to many people",
"Also I would be interested to know what data types `MMapIndexedDataset` supports. Is there some documentation somewhere ?",
"no docs haha, it's written to support integer numpy arrays.\r\n\r\nYou can build one in fairseq with, roughly:\r\n```bash\r\n\r\nwget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip\r\nunzip wikitext-103-raw-v1.zip\r\nexport dd=$HOME/fairseq-py/wikitext-103-raw\r\n\r\nexport mm_dir=$HOME/mmap_wikitext2\r\nmkdir -p gpt2_bpe\r\nwget -O gpt2_bpe/encoder.json https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json\r\nwget -O gpt2_bpe/vocab.bpe https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe\r\nwget -O gpt2_bpe/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt\r\nfor SPLIT in train valid; do \\\r\n python -m examples.roberta.multiprocessing_bpe_encoder \\\r\n --encoder-json gpt2_bpe/encoder.json \\\r\n --vocab-bpe gpt2_bpe/vocab.bpe \\\r\n --inputs /scratch/stories_small/${SPLIT}.txt \\\r\n --outputs /scratch/stories_small/${SPLIT}.bpe \\\r\n --keep-empty \\\r\n --workers 60; \\\r\ndone\r\n\r\nmkdir -p $mm_dir\r\nfairseq-preprocess \\\r\n --only-source \\\r\n --srcdict gpt2_bpe/dict.txt \\\r\n --trainpref $dd/wiki.train.bpe \\\r\n --validpref $dd/wiki.valid.bpe \\\r\n --destdir $mm_dir \\\r\n --workers 60 \\\r\n --dataset-impl mmap\r\n```\r\n\r\nI'm noticing in my benchmarking that it's much smaller on disk than arrow (200mb vs 900mb), and that both incur significant cost by increasing the number of data loader workers. \r\nThis somewhat old [post](https://ray-project.github.io/2017/10/15/fast-python-serialization-with-ray-and-arrow.html) suggests there are some gains to be had from using `pyarrow.serialize(array).tobuffer()`. I haven't yet figured out how much of this stuff `pa.Table` does under the hood.\r\n\r\nThe `MMapIndexedDataset` bottlenecks we are working on improving (by using arrow) are:\r\n1) `MMapIndexedDataset`'s index, which stores offsets, basically gets read in its entirety by each dataloading process.\r\n2) we have separate, identical, `MMapIndexedDatasets` on each dataloading worker, so there's redundancy there; we wonder if there is a way that arrow can somehow dedupe these in shared memory.\r\n\r\nIt will take me a few hours to get `MMapIndexedDataset` benchmarks out of `fairseq`/onto a branch in this repo, but I'm happy to invest the time if you're interested in collaborating on some performance hacking."
] | 1,613,505,898,000 | 1,613,587,948,000 | null | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | I am trying to benchmark my `datasets`-based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implementation uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB of tokens).
Questions:
1) Is this (basically identical) performance expected?
2) Is there a scenario where this library will outperform `MMapIndexedDataset`? (maybe more examples/larger examples?)
3) Should I be using different benchmarking tools than `psrecord`? How do you do benchmarks? (A rough sketch of what I'm measuring is shown below.)
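For context, the rough kind of measurement I'm doing (a simplified sketch, not my full benchmark harness; the dataset/config names are just an example):
```python
import time
from datasets import load_dataset

# Sequential read throughput over a memory-mapped dataset, batch by batch.
ds = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
start = time.perf_counter()
n_chars = 0
for i in range(0, len(ds), 1000):
    batch = ds[i : i + 1000]  # slicing returns a dict of column lists
    n_chars += sum(len(t) for t in batch["text"])
elapsed = time.perf_counter() - start
print(f"{len(ds)} examples, {n_chars} chars in {elapsed:.1f}s")
```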
Thanks in advance! Sam | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1894/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1893/comments | https://api.github.com/repos/huggingface/datasets/issues/1893/events | https://github.com/huggingface/datasets/issues/1893 | 809,556,503 | MDU6SXNzdWU4MDk1NTY1MDM= | 1,893 | wmt19 is broken | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"This was also mentioned in https://github.com/huggingface/datasets/issues/488 \r\n\r\nThe bucket where is data was stored seems to be unavailable now. Maybe we can change the URL to the ones in https://conferences.unite.un.org/uncorpus/en/downloadoverview ?",
"Closing since this has been fixed by #1912"
] | 1,613,500,798,000 | 1,614,793,322,000 | 1,614,793,322,000 | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | 1. Check which lang pairs we have: `--dataset_name wmt19`:
Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']
2. OK, let's pick `ru-en`:
`--dataset_name wmt19 --dataset_config "ru-en"`
no cookies:
```
Traceback (most recent call last):
File "./run_seq2seq.py", line 661, in <module>
main()
File "./run_seq2seq.py", line 317, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 572, in download_and_prepare
self._download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 628, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt19/436092de5f3faaf0fc28bc84875475b384e90a5470fa6afaee11039ceddc5052/wmt_utils.py", line 755, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 276, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 191, in download
downloaded_path_or_paths = map_nested(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 233, in map_nested
mapped = [
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 234, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 172, in _single_map_nested
return function(data_struct)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 211, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1893/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1892/comments | https://api.github.com/repos/huggingface/datasets/issues/1892/events | https://github.com/huggingface/datasets/issues/1892 | 809,554,174 | MDU6SXNzdWU4MDk1NTQxNzQ= | 1,892 | request to mirror wmt datasets, as they are really slow to download | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Yes that would be awesome. Not only the download speeds are awful, but also some files are missing.\r\nWe list all the URLs in the datasets/wmt19/wmt_utils.py so we can make a script to download them all and host on S3.\r\nAlso I think most of the materials are under the CC BY-NC-SA 3.0 license (must double check) so it should be possible to redistribute the data with no issues.\r\n\r\ncc @patrickvonplaten who knows more about the wmt scripts",
"Yeah, the scripts are pretty ugly! A big refactor would make sense here...and I also remember that the datasets were veeery slow to download",
"I'm downloading them.\r\nI'm starting with the ones hosted on http://data.statmt.org which are the slowest ones",
"@lhoestq better to use our new git-based system than just raw S3, no? (that way we have built-in CDN etc.)",
"Closing since the urls were changed to mirror urls in #1912 ",
"Hi there! What about mirroring other datasets like [CCAligned](http://www.statmt.org/cc-aligned/) as well? All of them are really slow to download..."
] | 1,613,500,571,000 | 1,635,231,342,000 | 1,616,673,203,000 | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Would it be possible to mirror the WMT data files under HF? Some of them take hours to download, and not because of local connection speed. They are all quite small datasets, just extremely slow to download.
Thank you! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1892/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1891/comments | https://api.github.com/repos/huggingface/datasets/issues/1891/events | https://github.com/huggingface/datasets/issues/1891 | 809,550,001 | MDU6SXNzdWU4MDk1NTAwMDE= | 1,891 | suggestion to improve a missing dataset error | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,500,153,000 | 1,613,500,214,000 | null | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | I was using `--dataset_name wmt19` all was good. Then thought perhaps wmt20 is out, so I tried to use `--dataset_name wmt20`, got 3 different errors (1 repeated twice), none telling me the real issue - that `wmt20` isn't in the `datasets`:
```
True, predict_with_generate=True)
Traceback (most recent call last):
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 323, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 335, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./run_seq2seq.py", line 661, in <module>
main()
File "./run_seq2seq.py", line 317, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 706, in load_dataset
module_path, hash, resolved_file_path = prepare_module(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 343, in prepare_module
raise FileNotFoundError(
FileNotFoundError: Couldn't find file locally at wmt20/wmt20.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py.
The file is also not present on the master branch on github.
```
Suggestion: if it is not in a local path, first check that `https://github.com/huggingface/datasets/tree/master/datasets/wmt20` actually exists and assert "dataset `wmt20` doesn't exist in datasets", rather than trying to find a load script, since the whole directory is not there.
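Something along these lines could work (a hypothetical sketch; the helper name and the exact URL check are illustrative assumptions):
```python
import requests

def dataset_exists_on_master(name: str) -> bool:
    # A HEAD request on the dataset folder is enough to tell whether the
    # dataset directory exists on the master branch at all.
    url = f"https://github.com/huggingface/datasets/tree/master/datasets/{name}"
    return requests.head(url, allow_redirects=True).status_code == 200

if not dataset_exists_on_master("wmt20"):
    raise FileNotFoundError("dataset `wmt20` doesn't exist in datasets")
```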
The error occurred when running:
```
cd examples/seq2seq
export BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python ./run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_val_samples 500 --dataset_name wmt20 --dataset_config "ro-en" --source_prefix "translate English to Romanian: "
```
Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1891/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1890 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1890/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1890/comments | https://api.github.com/repos/huggingface/datasets/issues/1890/events | https://github.com/huggingface/datasets/pull/1890 | 809,395,586 | MDExOlB1bGxSZXF1ZXN0NTc0MjY0OTMx | 1,890 | Reformat dataset cards section titles | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,488,307,000 | 1,613,488,354,000 | 1,613,488,353,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1890",
"html_url": "https://github.com/huggingface/datasets/pull/1890",
"diff_url": "https://github.com/huggingface/datasets/pull/1890.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1890.patch",
"merged_at": 1613488353000
} | Titles are formatted like [Foo](#foo) instead of just Foo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1890/timeline | null | true |
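For context on the reformatting in PR 1890 above, here is a minimal sketch (not the actual script used in the PR) of how plain markdown section titles could be rewritten into the linked `[Foo](#foo)` form:
```python
import re

def link_section_titles(markdown: str) -> str:
    """Rewrite '## Foo Bar' into '## [Foo Bar](#foo-bar)' (hypothetical helper)."""
    def repl(match):
        hashes, title = match.group(1), match.group(2).strip()
        # GitHub-style anchor: lowercase, drop punctuation, spaces -> dashes
        anchor = re.sub(r"[^\w\- ]", "", title).lower().replace(" ", "-")
        return f"{hashes} [{title}](#{anchor})"
    # only touch headings whose title is not already a link
    return re.sub(r"^(#{2,6}) (?!\[)(.+)$", repl, markdown, flags=re.MULTILINE)
```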
https://api.github.com/repos/huggingface/datasets/issues/1889 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1889/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1889/comments | https://api.github.com/repos/huggingface/datasets/issues/1889/events | https://github.com/huggingface/datasets/pull/1889 | 809,276,015 | MDExOlB1bGxSZXF1ZXN0NTc0MTY1NDAz | 1,889 | Implement to_dict and to_pandas for Dataset | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Next step is going to add these two in the documentation ^^"
] | 1,613,479,099,000 | 1,613,673,757,000 | 1,613,673,754,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1889",
"html_url": "https://github.com/huggingface/datasets/pull/1889",
"diff_url": "https://github.com/huggingface/datasets/pull/1889.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1889.patch",
"merged_at": 1613673754000
} | With options to return a generator or the full dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1889/timeline | null | true |
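A short usage sketch of the two methods added in PR 1889; the `batched`/`batch_size` arguments correspond to the "generator or full dataset" options mentioned in the description (dataset name here is just an example):
```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc", split="train")

full_dict = ds.to_dict()  # dict of columns, whole dataset in memory
full_df = ds.to_pandas()  # pandas.DataFrame, whole dataset in memory

# generator variants avoid materializing everything at once
for batch in ds.to_pandas(batch_size=1000, batched=True):
    print(len(batch))  # each `batch` is a DataFrame of up to 1000 rows
```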
https://api.github.com/repos/huggingface/datasets/issues/1888 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1888/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1888/comments | https://api.github.com/repos/huggingface/datasets/issues/1888/events | https://github.com/huggingface/datasets/pull/1888 | 809,241,123 | MDExOlB1bGxSZXF1ZXN0NTc0MTM2MDU4 | 1,888 | Docs for adding new column on formatted dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Close #1872"
] | 1,613,475,900,000 | 1,617,112,863,000 | 1,613,476,737,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1888",
"html_url": "https://github.com/huggingface/datasets/pull/1888",
"diff_url": "https://github.com/huggingface/datasets/pull/1888.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1888.patch",
"merged_at": 1613476737000
} | As mentioned in #1872, we should explain in the documentation how the format gets updated when new columns are added.
Close #1872 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1888/timeline | null | true |
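A small sketch of the behavior the docs added in PR 1888 describe, assuming a typical `set_format`/`map` workflow (the dataset name is just an example):
```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc", split="train")
ds.set_format(type="numpy", columns=["label"])

# map() adds a new column; since the format was restricted to ["label"],
# the new column is not returned by __getitem__ until the format is updated
ds = ds.map(lambda ex: {"sentence1_len": len(ex["sentence1"])})
ds.set_format(type="numpy", columns=["label", "sentence1_len"])
```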
https://api.github.com/repos/huggingface/datasets/issues/1887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1887/comments | https://api.github.com/repos/huggingface/datasets/issues/1887/events | https://github.com/huggingface/datasets/pull/1887 | 809,229,809 | MDExOlB1bGxSZXF1ZXN0NTc0MTI2NTMy | 1,887 | Implement to_csv for Dataset | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@lhoestq I stumbled upon an interesting failure when adding tests for CSV serialization of `ArrayXD` features (see the failing unit tests in the CI)\r\n\r\nIt's due to the fact that booleans cannot be converted from arrow format to numpy without copy: https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy",
"Good catch ! I must be able to fix that one by allowing copies for this kind of arrays.\r\nThis is the kind of surprise you get sometimes when playing with arrow x)",
"Raising this error for booleans was introduced in https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22 without much explanations unfortunately.\r\nSo \"no copy\" only works for primitive types - except booleans.\r\nThis is confirmed in the source code at https://github.com/wesm/arrow/blob/c07b9b48cf3e0bbbab493992a492ae47e5b04cad/python/pyarrow/array.pxi#L621\r\n\r\nI'm opening a PR to allow copies for booleans...",
"I just merged the fix for boolean ArrayXD, feel free to merge from master to see if it fixes the ci :)",
"@lhoestq unfirtunately, arrays of strings (or any other non-primitive type) require a copy too\r\n\r\nA list of primitive types can be found here: https://github.com/wesm/arrow/blob/c07b9b48cf3e0bbbab493992a492ae47e5b04cad/python/pyarrow/types.pxi#L821\r\n\r\npyarrow provides a `is_primitive` function to check whether a type is primitive , I used it to set `zero_copy_only`\r\n\r\nAlso, `PandasArrayExtensionArray.isna` was using `numpy.isnan` which fails for arrays of strings. I replaced it with `pandas.isna`. Let me know what you think! :) "
] | 1,613,474,849,000 | 1,613,727,719,000 | 1,613,727,719,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1887",
"html_url": "https://github.com/huggingface/datasets/pull/1887",
"diff_url": "https://github.com/huggingface/datasets/pull/1887.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1887.patch",
"merged_at": 1613727719000
} | cc @thomwolf
`to_csv` supports passing either a file path or a *binary* file object.
The writing is batched to avoid loading the whole table into memory. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1887/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1887/timeline | null | true |
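A usage sketch matching the description of PR 1887: `to_csv` accepts a path or a binary file object, and the `batch_size` argument controls the batched writing (dataset name is just an example):
```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc", split="train")

# file path
ds.to_csv("mrpc_train.csv")

# binary file object, with an explicit batch size
with open("mrpc_train.csv", "wb") as f:
    ds.to_csv(f, batch_size=1000)
```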
https://api.github.com/repos/huggingface/datasets/issues/1886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1886/comments | https://api.github.com/repos/huggingface/datasets/issues/1886/events | https://github.com/huggingface/datasets/pull/1886 | 809,221,885 | MDExOlB1bGxSZXF1ZXN0NTc0MTE5ODcz | 1,886 | Common voice | {
"login": "BirgerMoell",
"id": 1704131,
"node_id": "MDQ6VXNlcjE3MDQxMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BirgerMoell",
"html_url": "https://github.com/BirgerMoell",
"followers_url": "https://api.github.com/users/BirgerMoell/followers",
"following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}",
"gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions",
"organizations_url": "https://api.github.com/users/BirgerMoell/orgs",
"repos_url": "https://api.github.com/users/BirgerMoell/repos",
"events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}",
"received_events_url": "https://api.github.com/users/BirgerMoell/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Does it make sense to make the domains as the different languages?\r\nA problem is that you need to download the datasets from the browser.\r\nOne idea would be to either contact Mozilla regarding API access to the dataset or make use of a headless browser for downloading the datasets (might be hard since we have to figure out how to host them). An even more creative idea would be to host the dataset inside a torrent and figure out a way to download specific datasets from within that torrent.\r\n\r\nHere is some information about the download authorization. They are hosting the data on S3.\r\n\r\nhttps://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html\r\n\r\nHere is an example of how a download link looks.\r\n\r\nhttps://mozilla-common-voice-datasets.s3.dualstack.us-west-2.amazonaws.com/cv-corpus-6.1-2020-12-11/nl.tar.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQ3GQRTO3ND4UAQXB%2F20210217%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20210217T080740Z&X-Amz-Expires=43200&X-Amz-Security-Token=FwoGZXIvYXdzEGIaDCC6ALh%2FwIK9ovvRdCKSBCs5WaSJNsZ2h0SnhpnWFv4yiAJHJTe%2BY6pBcCqadRMs0RABHeQ2n1QDACJ5V9WOqIHfMfT0AI%2Bfe6iFkTGLgRrJOMYpgV%2FmIBcXCjeb72r4ZvudMA8tprkSxZsEh53bJkIDQx1tXqfpz0yoefM0geD3461suEGhHnLIyiwffrUpRg%2BkNZN9%2FLZZXpF5F2pogieKKV533Jetkd1xlWOR%2Bem9R2bENu2RV563XX3JvbWxSYN9IHkVT1xwd4ZiOpUtX7%2F2RoluJUKw%2BUPpyml3J%2FOPPGdr7CyPLjqNxdq9ceRi8lRybty64XvNYZGt45VNTQ3pkTTz4VpUCJAGkgxq95Ve%2BOwW%2Fsc8JtblTFKrH11vej62NB7C0n7JPPS4SLKXHKW%2B7ZbybcNf3BnsAVouPdsGTMslcgkD81b9trnjyXJdOZkzdHUf2KcWVXVceEsZnMhcCZQ1cJpI7qXPEk8QrKCQcNByPLHmPIEdHpj9IrIBKDkl2qO7VX7CCB65WDt2eZRltOcNHXWVFXFktMdQOQztI1j0XSZz2iOX4jPKKaqz193VEytlAqmehNi8pePOnxkP9Z1SP7d3I6rayuBF3phmpHxw499tY3ECYYgoCnJ6QSFa3KxMjFmEpQlmjxuwEMHd4CDL2FJYGcCiIxbCcL1r8ZE3%2BbGdcu7PRsVCHX3Huh%2FqGIaF4h40FgteN6teyKCHKOebs4EGMipb9xmEMZ9ZbVopz4bkhLdMTrjKon9w624Xem0MTPqN7XY%2BB6lRgrW8rd4%3D&X-Amz-Signature=28eabdfce72a472a70b0f9e1e2c37fe1471b5ec8ed60614fbe900bfa97ae1ac8&X-Amz-SignedHeaders=host\r\n\r\nIt could be that we simply need to make a http-request with the right parameters and we can download the datasets.",
"> Wow, this looks great already! It's really a difficult dataset so thanks a lot for opening a PR.\r\n> I think the tagging tool is not too important for now and we can take a look at that later!\r\n> \r\n> At the moment, it would be very good to correctly generate some dummy data for all the possible languages. I think the structure of the `.tsv` file as you've noted in the PR is the one we want to use as the structure for `features = datasets.Features(`\r\n> \r\n> The splits `'Train\"`, `\"Test\"`, `\"Validation\"` look great to me! Because this is a special dataset that also has files called `\"Invalidated\"` I think the best option is to also add those as splits, _i.e._ `\"other\"`, `\"invalidated\"`, `\"reported\"`, `\"validated\"` . Those split names can be gives as shown here for example:\r\n> \r\n> https://github.com/huggingface/datasets/blob/28be129db862ec89a87ac9349c64df6b6118aff4/datasets/librispeech_asr/librispeech_asr.py#L124\r\n> \r\n> Also putting @lhoestq in cc here to hear his opinion on the different splits. @lhoestq Common Voicie is a crowd collected dataset where if a collected data sample did not receive enough \"up_votes\" from the community -> then it is (If I understood it correctly) marked as invalid -> hence the file `\"invalidated.tsv\"`. I think this is still useful data, so I would include it what do you think?\r\n> \r\n> @BirgerMoell let me know if you have any more questions :-)\r\n\r\nI think reporting is a separate feature. People can help annotate the data and then they can report things while annotating.\r\nhttps://commonvoice.mozilla.org/sv-SE/listen\r\n\r\nHere is the interface that shows reporting and the thumbs up and down which gives upvotes and downvotes.\r\n<img src=\"https://i.imgur.com/utWjszt.png\" height=\"800px\">\r\n",
"I added splits and features. I'm not sure how you want me to generate dummy data for all the languages?",
"Hey @BirgerMoell,\r\n\r\nI tweaked your dataset file a bit to have a first working version. To test this dataset downloading script, you can do the following:\r\n\r\n- 1) Download the Common Voice Georgian dataset from https://commonvoice.mozilla.org/en/datasets (It's pretty small which is why I chose it)\r\n- 2) Run the following command using this branch: \r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"./../datasets/datasets/common_voice\", \"Georgian\", data_dir=\"./cv-corpus-6.1-2020-12-11/ka/\", split=\"train\")\r\n```\r\n\r\nNote that I'm loading a local version of the dataset script (`\"./../datasets/datasets/common_voice/\"` points to the folder in your branch) and that I also insert the downloaded data with the `data_dir` arg.\r\n\r\n-> You'll see that the data is correctly loaded and that `ds` contains all the information we need.\r\n\r\nNow there are a lot of different datasets on Common Voice, so it probably takes too much time to test all of those, but maybe you can test whether the current script works as well *e.g.* for Swedish, 3,4 other languages.\r\n\r\nIt would be very nice if we can use the exact same structure for all languages, meaning that we don't have to change the `datasets.Features(...)` structure depending on the language, but can use the exact same one for every language.\r\n\r\nIf everything works as expected we can then go over to cleaning the script and seeing how to add dummy data tests for it."
] | 1,613,474,170,000 | 1,615,315,891,000 | 1,615,315,891,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1886",
"html_url": "https://github.com/huggingface/datasets/pull/1886",
"diff_url": "https://github.com/huggingface/datasets/pull/1886.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1886.patch",
"merged_at": 1615315891000
} | Started filling out information about the dataset and a dataset card.
To do:
- Create tagging file
- Update the common_voice.py file with more information | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1886/timeline | null | true |
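Once PR 1886 is merged, loading a language should look like the test snippet in the comments, but with the data downloaded automatically. A sketch, assuming the language codes end up as dataset configs and the `.tsv` columns (such as `path` and `sentence`) as features:
```python
from datasets import load_dataset

# Georgian ("ka") is the small config used for testing in the comments above
ds = load_dataset("common_voice", "ka", split="train")
print(ds[0]["path"], ds[0]["sentence"])
```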
https://api.github.com/repos/huggingface/datasets/issues/1885 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1885/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1885/comments | https://api.github.com/repos/huggingface/datasets/issues/1885/events | https://github.com/huggingface/datasets/pull/1885 | 808,881,501 | MDExOlB1bGxSZXF1ZXN0NTczODQyNzcz | 1,885 | add missing info on how to add large files | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,432,799,000 | 1,613,492,539,000 | 1,613,475,852,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1885",
"html_url": "https://github.com/huggingface/datasets/pull/1885",
"diff_url": "https://github.com/huggingface/datasets/pull/1885.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1885.patch",
"merged_at": 1613475852000
} | Thanks to @lhoestq's instructions I was able to add data files to a custom dataset repo. This PR attempts to show others how to do the same if they need to.
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1885/timeline | null | true |
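As a rough illustration of the workflow PR 1885 documents — the repo name and file pattern below are hypothetical, and using `huggingface_hub.Repository` is an assumption rather than the exact instructions added in the PR:
```python
from huggingface_hub import Repository

# clone the dataset repo, route large files through git-lfs, then push
repo = Repository(local_dir="my_dataset", clone_from="username/my_dataset", repo_type="dataset")
repo.lfs_track(["*.json.gz"])
# copy the big data files into `my_dataset/` before pushing
repo.push_to_hub(commit_message="add data files")
```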
https://api.github.com/repos/huggingface/datasets/issues/1884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1884/comments | https://api.github.com/repos/huggingface/datasets/issues/1884/events | https://github.com/huggingface/datasets/pull/1884 | 808,755,894 | MDExOlB1bGxSZXF1ZXN0NTczNzQwNzI5 | 1,884 | dtype fix when using numpy arrays | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,415,325,000 | 1,627,642,878,000 | 1,627,642,878,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1884",
"html_url": "https://github.com/huggingface/datasets/pull/1884",
"diff_url": "https://github.com/huggingface/datasets/pull/1884.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1884.patch",
"merged_at": null
} | As discussed in #625, this fix lets the user preserve the dtype of a numpy array when it is converted to a pyarrow array; the dtype was previously lost due to the numpy array -> list -> pyarrow array conversion | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1884/timeline | null | true |
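The dtype loss that PR 1884 fixes can be reproduced directly with pyarrow; a minimal sketch:
```python
import numpy as np
import pyarrow as pa

arr = np.arange(4, dtype=np.int16)

print(pa.array(arr.tolist()).type)  # int64 -- the round-trip through a Python list loses the dtype
print(pa.array(arr).type)           # int16 -- converting the numpy array directly keeps it
```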
https://api.github.com/repos/huggingface/datasets/issues/1883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1883/comments | https://api.github.com/repos/huggingface/datasets/issues/1883/events | https://github.com/huggingface/datasets/pull/1883 | 808,750,623 | MDExOlB1bGxSZXF1ZXN0NTczNzM2NTIz | 1,883 | Add not-in-place implementations for several dataset transforms | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@lhoestq I am not sure how to test `dictionary_encode_column` (in-place version was not tested before)",
"I can take a look at dictionary_encode_column tomorrow.\r\nAlthough it's likely that it doesn't work then. It was added at the beginning of the lib and never tested nor used afaik.",
"Now let's update the documentation to use the new methods x)"
] | 1,613,414,666,000 | 1,614,178,489,000 | 1,614,178,406,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1883",
"html_url": "https://github.com/huggingface/datasets/pull/1883",
"diff_url": "https://github.com/huggingface/datasets/pull/1883.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1883.patch",
"merged_at": 1614178406000
} | Should we deprecate in-place versions of such methods? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1883/timeline | null | true |
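A sketch of the in-place vs not-in-place distinction discussed in PR 1883, using `rename_column` as an example (the trailing-underscore naming for in-place variants follows the library's convention at the time):
```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc", split="train")

# in-place variant: mutates `ds` directly
ds.rename_column_("sentence1", "premise")

# not-in-place variant (added in this PR): returns a new Dataset
renamed = ds.rename_column("premise", "sentence1")
```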
https://api.github.com/repos/huggingface/datasets/issues/1882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1882/comments | https://api.github.com/repos/huggingface/datasets/issues/1882/events | https://github.com/huggingface/datasets/pull/1882 | 808,716,576 | MDExOlB1bGxSZXF1ZXN0NTczNzA4OTEw | 1,882 | Create Remote Manager | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@lhoestq I have refactorized the logic. Instead of the previous hierarchy call (local temp file opening -> remote call -> use again temp local file logic but from within the remote caller scope), now it is flattened. Schematically:\r\n```python\r\nwith src.open() as src_file, dst.open() as dst_file:\r\n src_file.fetch(dst_file)\r\n```\r\n\r\nI have created `RemotePath` (analogue to Path) with method `.open()` that returns `FtpFile`/`HttpFile` (analogue to file-like).\r\n\r\nNow I am going to implement `RemotePath.exists()` method (analogue to the Path's method) to check if remote resource is accessible, using `Ftp/Http.head()`.",
"Quick update on this one:\r\nwe discussed offline with @albertvillanova on this PR and I think using `fsspec` can help a lot, since it already implements many parts of the abstraction we need to have nice download tools for both http and ftp (and others !)"
] | 1,613,410,584,000 | 1,615,220,110,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1882",
"html_url": "https://github.com/huggingface/datasets/pull/1882",
"diff_url": "https://github.com/huggingface/datasets/pull/1882.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1882.patch",
"merged_at": null
} | Refactoring to separate the concern of remote (HTTP/FTP requests) management. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1882/timeline | null | true |
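For reference, the unified remote-file abstraction mentioned in the last comment of PR 1882 already exists in `fsspec`; a minimal sketch of fetching a remote file, where the URL is just a placeholder:
```python
import fsspec

# the same open() call works for http(s)://, ftp://, s3://, local paths, ...
with fsspec.open("https://example.com/data.csv", "rb") as src, open("data.csv", "wb") as dst:
    dst.write(src.read())
```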
https://api.github.com/repos/huggingface/datasets/issues/1881 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1881/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1881/comments | https://api.github.com/repos/huggingface/datasets/issues/1881/events | https://github.com/huggingface/datasets/pull/1881 | 808,578,200 | MDExOlB1bGxSZXF1ZXN0NTczNTk1Nzkw | 1,881 | `list_datasets()` returns a list of strings, not objects | {
"login": "pminervini",
"id": 227357,
"node_id": "MDQ6VXNlcjIyNzM1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pminervini",
"html_url": "https://github.com/pminervini",
"followers_url": "https://api.github.com/users/pminervini/followers",
"following_url": "https://api.github.com/users/pminervini/following{/other_user}",
"gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pminervini/subscriptions",
"organizations_url": "https://api.github.com/users/pminervini/orgs",
"repos_url": "https://api.github.com/users/pminervini/repos",
"events_url": "https://api.github.com/users/pminervini/events{/privacy}",
"received_events_url": "https://api.github.com/users/pminervini/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,398,815,000 | 1,613,401,789,000 | 1,613,401,788,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1881",
"html_url": "https://github.com/huggingface/datasets/pull/1881",
"diff_url": "https://github.com/huggingface/datasets/pull/1881.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1881.patch",
"merged_at": 1613401788000
} | Here and there in the docs there is still stuff like this:
```python
>>> datasets_list = list_datasets()
>>> print(', '.join(dataset.id for dataset in datasets_list))
```
However, my understanding is that `list_datasets()` returns a list of strings rather than a list of objects. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1881/timeline | null | true |
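Under PR 1881's reading that `list_datasets()` returns plain strings, the corrected docs snippet would simply drop the `.id` attribute access:
```python
>>> from datasets import list_datasets
>>> datasets_list = list_datasets()
>>> print(', '.join(dataset for dataset in datasets_list))
```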
https://api.github.com/repos/huggingface/datasets/issues/1880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1880/comments | https://api.github.com/repos/huggingface/datasets/issues/1880/events | https://github.com/huggingface/datasets/pull/1880 | 808,563,439 | MDExOlB1bGxSZXF1ZXN0NTczNTgzNjg0 | 1,880 | Update multi_woz_v22 checksums | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,397,618,000 | 1,613,398,699,000 | 1,613,398,698,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1880",
"html_url": "https://github.com/huggingface/datasets/pull/1880",
"diff_url": "https://github.com/huggingface/datasets/pull/1880.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1880.patch",
"merged_at": 1613398698000
} | As noticed in #1876, the checksums of this dataset are outdated.
I updated them in this PR. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1880/timeline | null | true |
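While waiting for updated checksums like those in PR 1880 to land, verification could typically be bypassed on the user side; a sketch using the argument available in this era of the library:
```python
from datasets import load_dataset

# skips the NonMatchingChecksumError caused by stale dataset_infos.json entries
ds = load_dataset("multi_woz_v22", ignore_verifications=True)
```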
https://api.github.com/repos/huggingface/datasets/issues/1879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1879/comments | https://api.github.com/repos/huggingface/datasets/issues/1879/events | https://github.com/huggingface/datasets/pull/1879 | 808,541,442 | MDExOlB1bGxSZXF1ZXN0NTczNTY1NDAx | 1,879 | Replace flatten_nested | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @lhoestq. If you agree to merge this, I will start separating the logic for NestedDataStructure.map ;)"
] | 1,613,395,780,000 | 1,613,759,714,000 | 1,613,759,714,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1879",
"html_url": "https://github.com/huggingface/datasets/pull/1879",
"diff_url": "https://github.com/huggingface/datasets/pull/1879.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1879.patch",
"merged_at": 1613759714000
} | Replace `flatten_nested` with `NestedDataStructure.flatten`.
This is a first step towards having all NestedDataStructure logic as a separate concern, independent of the caller/user of the data structure.
Eventually, all checks (whether the underlying data is a list, dict, etc.) will live only inside this class.
I have also generalized the flattening, and now it handles multiple levels of nesting. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1879/timeline | null | true |
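A hypothetical sketch of what the multi-level flattening described in PR 1879 can look like; this is an illustration, not the actual `NestedDataStructure.flatten` implementation:
```python
def flatten(data):
    """Yield the leaves of arbitrarily nested dicts/lists/tuples."""
    if isinstance(data, dict):
        for value in data.values():
            yield from flatten(value)
    elif isinstance(data, (list, tuple)):
        for item in data:
            yield from flatten(item)
    else:
        yield data

assert list(flatten({"a": [1, {"b": (2, 3)}], "c": 4})) == [1, 2, 3, 4]
```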
https://api.github.com/repos/huggingface/datasets/issues/1878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1878/comments | https://api.github.com/repos/huggingface/datasets/issues/1878/events | https://github.com/huggingface/datasets/pull/1878 | 808,526,883 | MDExOlB1bGxSZXF1ZXN0NTczNTUyODk3 | 1,878 | Add LJ Speech dataset | {
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"repos_url": "https://api.github.com/users/anton-l/repos",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hey @anton-l,\r\n\r\nThanks a lot for the very clean integration!\r\n\r\n1) I think we should now start having \"automatic-speech-recognition\" as a label in the dataset tagger (@yjernite is it easy to add?). But we can surely add this dataset with the tag you've added and then later change the label to `asr` \r\n\r\n2) That's perfect! Yeah good question - we're currently thinking about a better design with @lhoestq \r\n\r\n3) Again tagging @yjernite & @lhoestq here - guess we should add this license though!",
"Thanks @anton-l for adding this one :)\r\nAbout the points you mentioned:\r\n1. Sure as soon as we've updated the tag sets in https://github.com/huggingface/datasets-tagging/blob/main/task_set.json, we can update the tags in this dataset card and also in the other audio dataset card.\r\n2. For now we just try to have them as small as possible but we may switch to S3/LFS at one point indeed\r\n3. If it's not part of the license set at https://github.com/huggingface/datasets-tagging/blob/main/license_set.json we can add it to this license set\r\n\r\nFor now it's ok to have the other-* tags but we'll update them very soon",
"Let's merge this one and then we'll update the tags for the audio datasets. We'll probably also add something like this:\r\n```\r\ntype:\r\n- text\r\n- audio\r\n```\r\n\r\nThank you so much for adding this one, good job !"
] | 1,613,394,642,000 | 1,613,417,981,000 | 1,613,398,689,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1878",
"html_url": "https://github.com/huggingface/datasets/pull/1878",
"diff_url": "https://github.com/huggingface/datasets/pull/1878.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1878.patch",
"merged_at": 1613398689000
} | This PR adds the LJ Speech dataset (https://keithito.com/LJ-Speech-Dataset/)
As requested by #1841
The ASR format is based on #1767
There are a couple of quirks that should be addressed:
- I tagged this dataset as `other-other-automatic-speech-recognition` and `other-other-text-to-speech` (as classified by paperswithcode). Since the number of speech datasets is about to grow, maybe these categories should be added to the main list?
- Similarly to #1767 this dataset uses only a single dummy sample to reduce the zip size (`wav`s are quite heavy). Is there a plan to allow LFS or S3 usage for dummy data in the repo?
- The dataset is distributed under the Public Domain license, which is not used anywhere else in the repo, AFAIK. Do you think Public Domain is worth adding to the tagger app as well?
Pinging @patrickvonplaten to review | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1878/timeline | null | true |
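Once PR 1878 is merged, loading should follow the usual pattern; the field names below (audio file path, raw and normalized transcription) match the LJ Speech metadata, though the exact schema is the script's choice:
```python
from datasets import load_dataset

ds = load_dataset("lj_speech", split="train")
sample = ds[0]
print(sample["file"], sample["text"], sample["normalized_text"])
```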
https://api.github.com/repos/huggingface/datasets/issues/1877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1877/comments | https://api.github.com/repos/huggingface/datasets/issues/1877/events | https://github.com/huggingface/datasets/issues/1877 | 808,462,272 | MDU6SXNzdWU4MDg0NjIyNzI= | 1,877 | Allow concatenation of both in-memory and on-disk datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"I started working on this. My idea is to first add the pyarrow Table wrappers InMemoryTable and MemoryMappedTable that both implement what's necessary regarding copy/pickle. Then have another wrapper that takes the concatenation of InMemoryTable/MemoryMappedTable objects.\r\n\r\nWhat's important here is that concatenating two tables into one doesn't double the memory used (`total_allocated_bytes()` stays the same).",
"Hi @lhoestq @albertvillanova,\r\n\r\nI checked the linked issues and PR, this seems like a great idea. Would you mind elaborating on the in-memory and memory-mapped datasets? \r\nBased on my understanding, it is something like this, please correct me if I am wrong:\r\n1. For in-memory datasets, we don't have any dataset files so the entire dataset is pickled to the cache during loading, and then whenever required it is unpickled .\r\n2. For on-disk/memory-mapped datasets, we have the data files provided, so they can be re-loaded from the paths, and only the file-paths are stored while pickling.\r\n\r\nIf this is correct, will the feature also handle pickling/unpickling of a concatenated dataset? Will this be cached?\r\n\r\nThis also leads me to ask whether datasets are chunked during pickling? \r\n\r\nThanks,\r\nGunjan",
"Hi ! Yes you're totally right about your two points :)\r\n\r\nAnd in the case of a concatenated dataset, then we should reload each sub-table depending on whether it's in-memory or memory mapped. That means the dataset will be made of several blocks in order to keep track of what's from memory and what's memory mapped. This allows to pickle/unpickle concatenated datasets",
"Hi @lhoestq\r\n\r\nThanks, that sounds nice. Can you explain where the issue of the double memory may arise? Also, why is the existing `concatenate_datasets` not sufficient for this purpose?",
"Hi @lhoestq,\r\n\r\nWill the `add_item` feature also help with lazy writing (or no caching) during `map`/`filter`?",
"> Can you explain where the issue of the double memory may arise?\r\n\r\nWe have to keep each block (in-memory vs memory mapped) separated in order to be able to reload them with pickle.\r\nOn the other hand we also need to have the full table from mixed in-memory and memory mapped data in order to iterate or extract data conveniently. That means that each block is accessible twice: once in the full table, and once in the separated blocks. But since pyarrow tables concatenation doesn't double the memory, then building the full table doesn't cost memory which is what we want :)\r\n\r\n> Also, why is the existing concatenate_datasets not sufficient for this purpose?\r\n\r\nThe existing `concatenate_datasets` doesn't support having both in-memory and memory mapped data together (there's no fancy block separation logic). It works for datasets fully in-memory or fully memory mapped but not a mix of the two.\r\n\r\n> Will the add_item feature also help with lazy writing (or no caching) during map/filter?\r\n\r\nIt will enable the implementation of the fast, masked filter from this discussion: https://github.com/huggingface/datasets/issues/1949\r\nHowever I don't think this will affect map."
] | 1,613,389,186,000 | 1,616,777,518,000 | 1,616,777,518,000 | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (`dataset._data_files` is empty), or the dataset can be reloaded from disk (using `dataset._data_files`).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
Maybe let's have a design that allows a Dataset to have a Table that can be rebuilt from heterogeneous sources like in-memory tables or on-disk tables? This could also be further extended in the future.
One idea would be to define a list of sources and each source implements a way to reload its corresponding pyarrow Table.
Then the dataset would be the concatenation of all these tables.
Depending on the source type, the serialization using pickle would be different. In-memory data would be copied, while on-disk data would simply be replaced by the paths to the data files.
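A minimal sketch of that idea (class and method names here are hypothetical, not an actual `datasets` API):
```python
import pyarrow as pa

class InMemorySource:
    """Source whose data is copied when pickling."""

    def __init__(self, table: pa.Table):
        self.table = table

    def reload(self) -> pa.Table:
        return self.table

class OnDiskSource:
    """Source that only keeps the file path when pickling."""

    def __init__(self, path: str):
        self.path = path

    def reload(self) -> pa.Table:
        # memory-map the arrow file and read it back as a Table
        with pa.memory_map(self.path) as f:
            return pa.ipc.open_stream(f).read_all()

def build_full_table(sources) -> pa.Table:
    # the dataset would be the concatenation of all the reloaded tables
    return pa.concat_tables([source.reload() for source in sources])
```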
If you have some ideas you would like to share about the design/API feel free to do so :)
cc @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1877/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1877/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1876/comments | https://api.github.com/repos/huggingface/datasets/issues/1876/events | https://github.com/huggingface/datasets/issues/1876 | 808,025,859 | MDU6SXNzdWU4MDgwMjU4NTk= | 1,876 | load_dataset("multi_woz_v22") NonMatchingChecksumError | {
"login": "Vincent950129",
"id": 5945326,
"node_id": "MDQ6VXNlcjU5NDUzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5945326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vincent950129",
"html_url": "https://github.com/Vincent950129",
"followers_url": "https://api.github.com/users/Vincent950129/followers",
"following_url": "https://api.github.com/users/Vincent950129/following{/other_user}",
"gists_url": "https://api.github.com/users/Vincent950129/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vincent950129/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vincent950129/subscriptions",
"organizations_url": "https://api.github.com/users/Vincent950129/orgs",
"repos_url": "https://api.github.com/users/Vincent950129/repos",
"events_url": "https://api.github.com/users/Vincent950129/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vincent950129/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks for reporting !\r\nThis is due to the changes made in the data files in the multiwoz repo: https://github.com/budzianowski/multiwoz/pull/59\r\nI'm opening a PR to update the checksums of the data files.",
"I just merged the fix. It will be available in the new release of `datasets` later today.\r\nYou'll be able to get the new version with\r\n```\r\npip install --upgrade datasets\r\n```",
"Hi, I still meet the error when loading the datasets after upgradeing datasets.\r\n\r\nraise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json']",
"This must be related to https://github.com/budzianowski/multiwoz/pull/72\r\nThose files have changed, let me update the checksums for this dataset.\r\n\r\nFor now you can use `ignore_verifications=True` in `load_dataset` to skip the checksum verification."
] | 1,613,330,088,000 | 1,628,100,480,000 | 1,628,100,480,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError.
To reproduce:
`dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')`
This will give the following error:
```
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_003.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_004.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_005.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_006.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_007.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_008.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_009.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_010.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_012.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_013.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_014.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_015.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_016.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_017.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_002.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_002.json']
```
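A workaround suggested in the comments below, until the recorded checksums are updated, is to skip the verification:
```python
from datasets import load_dataset

# ignore_verifications skips the checksum check while the recorded checksums are out of date
dataset = load_dataset('multi_woz_v22', 'v2.2_active_only', split='train', ignore_verifications=True)
```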
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1876/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1875/comments | https://api.github.com/repos/huggingface/datasets/issues/1875/events | https://github.com/huggingface/datasets/pull/1875 | 807,887,267 | MDExOlB1bGxSZXF1ZXN0NTczMDM2NzE0 | 1,875 | Adding sari metric | {
"login": "ddhruvkr",
"id": 6061911,
"node_id": "MDQ6VXNlcjYwNjE5MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6061911?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddhruvkr",
"html_url": "https://github.com/ddhruvkr",
"followers_url": "https://api.github.com/users/ddhruvkr/followers",
"following_url": "https://api.github.com/users/ddhruvkr/following{/other_user}",
"gists_url": "https://api.github.com/users/ddhruvkr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddhruvkr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddhruvkr/subscriptions",
"organizations_url": "https://api.github.com/users/ddhruvkr/orgs",
"repos_url": "https://api.github.com/users/ddhruvkr/repos",
"events_url": "https://api.github.com/users/ddhruvkr/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddhruvkr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,277,515,000 | 1,613,577,387,000 | 1,613,577,387,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1875",
"html_url": "https://github.com/huggingface/datasets/pull/1875",
"diff_url": "https://github.com/huggingface/datasets/pull/1875.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1875.patch",
"merged_at": 1613577386000
} | Adding SARI metric that is used in evaluation of text simplification. This is required as part of the GEM benchmark. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1875/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1874/comments | https://api.github.com/repos/huggingface/datasets/issues/1874/events | https://github.com/huggingface/datasets/pull/1874 | 807,786,094 | MDExOlB1bGxSZXF1ZXN0NTcyOTYzMjAy | 1,874 | Adding Europarl Bilingual dataset | {
"login": "lucadiliello",
"id": 23355969,
"node_id": "MDQ6VXNlcjIzMzU1OTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucadiliello",
"html_url": "https://github.com/lucadiliello",
"followers_url": "https://api.github.com/users/lucadiliello/followers",
"following_url": "https://api.github.com/users/lucadiliello/following{/other_user}",
"gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions",
"organizations_url": "https://api.github.com/users/lucadiliello/orgs",
"repos_url": "https://api.github.com/users/lucadiliello/repos",
"events_url": "https://api.github.com/users/lucadiliello/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucadiliello/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"is there a way to check errors without subscribing to CircleCI? Because they want access to private repositories when logging.",
"I think you need to be logged in to check the errors unfortunately. Feel free to create an account with bitbucket maybe if you don't want it to access your private github repos",
"I've resolved some requirements, but I cannot create dummy data. The dataset works as follows: for each language pair `<lang1>-<lang2>` 3 files are downloaded:\r\n- dataset for `<lang1>`\r\n- dataset for `<lang2>`\r\n- alignments between `<lang1>` and `<lang2>`\r\n\r\nSuppose we work with the `bg-cs` language pair. Then, the dataset will download three `gzip` files which should be decompressed. I do not understand the relation between the folders created by the script to create dummy data and the original data provided by the download manager.",
"Hi ! Indeed the data files structure of this dataset looks very specific.\r\nThe command `datasets-cli dummy_data ./datasets/europarl_bilingual` shows some instructions for each split but let me add more details.\r\n\r\nFirst things to know is that the dummy data files need to be uncompressed data, so for example for the file `bg.zip` you should actually have one folder with all the xml files in it instead. In the same way, `bg-cs.xml.gz` must be replaced by an actual uncompressed xml file.\r\n\r\nLet's take the bg-cs config as an example. To make the dummy data you need to:\r\n- go to `./datasets/europarl_bilingual/dummy/bg-cs/8.0.0` and create a folder named `dummy_data`. Then go inside this folder\r\n- create a text file named `bg-cs.xml.gz` containing xml content (so without .gz compression). The xml content must have the same structure as the original `bg-cs.zml` but only include 1 `linkGrp` entry. You can pick one entry from the original `bg-cs.xml` file. Let's say this entry is about this file: `ep-06-01-16-003.xml`\r\n- create a folder named `bg.zip` and inside this folder add one file Europarl/raw/bg/ep-06-01-16-003.xml. You can pick the xml file from the original `bg.zip` archive.\r\n- create a folder named `cs.zip` and inside this folder add one file Europarl/raw/cs/ep-06-01-16-003.xml. You can pick the xml file from the original `cs.zip` archive.\r\n- zip the `dummy_data` into `dummy_data.zip`\r\n\r\nAt this point you have dummy data files to generate 1 example which is what we want to be able to test the dataset script `europarl_bilingual.py` with pytest. \r\n\r\nIn particular this will make this test pass:\r\n```\r\npytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_europarl_bilingual\r\n```\r\n\r\nIdeally it would be awesome to have dummy data for all the different configs so if we manage to make a script that generates all of it automatically that would be perfect. However since the structure is not trivial, another option would be to only have the dummy data for only 1 or 2 configs, like what we do for [bible_para](https://github.com/huggingface/datasets/blob/master/datasets/bible_para/bible_para.py) for example. In `bible_para` only a few configurations are tested. As you can see there is only 6 configs in the `BUILDER_CONFIGS` attribute. All the other configs can still be used, here is what is said inside the dataset card of bible_para:\r\n```\r\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\r\nYou can find the valid pairs in Homepage section of Dataset Description: http://opus.nlpl.eu/bible-uedin.php\r\nE.g.\r\n\r\n`dataset = load_dataset(\"bible_para\", lang1=\"fi\", lang2=\"hi\")`\r\n```\r\nIn this case the configuration \"fi-hi\" is simply created on the fly, instead of being picked from the `BUILDER_CONFIGS` list.\r\n\r\nI hope this helps, let me know if you have questions or if I can help",
"I already created the scripts to create reduced versions of the data. What I didn't understand was how to put files in the dummy_data folder because, as you noticed, some file decompress to a nested tree structure. I will now try again with your suggestions!",
"Is there something else I should do? If not can this be integrated?",
"Thanks a lot !!\r\nSince the set of all the dummy data files is quite big I only kept a few of them. If we had kept them all the size of the `datasets` repo would have increased too much :/\r\nSo I did the same as for `bible_para`: only keep a few configurations in BUILDER_CONFIGS and have all the other pairs loadable with the lang1 and lang2 parameters like this:\r\n\r\n`dataset = load_dataset(\"europarl_bilingual\", lang1=\"fi\", lang2=\"fr\")`"
] | 1,613,235,724,000 | 1,614,854,302,000 | 1,614,854,302,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1874",
"html_url": "https://github.com/huggingface/datasets/pull/1874",
"diff_url": "https://github.com/huggingface/datasets/pull/1874.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1874.patch",
"merged_at": 1614854302000
} | Implementation of the Europarl bilingual dataset described [here](https://opus.nlpl.eu/Europarl.php).
This dataset allows using every language pair detailed in the original dataset. The loading script also handles the small errors contained in the original dataset (in very rare cases, about 1 in 10M, a key references a nonexistent sentence).
I chose to follow the style of a similar dataset available in this repository: `multi_para_crawl`.
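As shown in the review comments, pairs outside `BUILDER_CONFIGS` can be loaded on the fly by passing the two language codes:
```python
from datasets import load_dataset

# any language pair from the original dataset can be created on the fly
dataset = load_dataset("europarl_bilingual", lang1="fi", lang2="fr")
```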
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1874/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1873/comments | https://api.github.com/repos/huggingface/datasets/issues/1873/events | https://github.com/huggingface/datasets/pull/1873 | 807,750,745 | MDExOlB1bGxSZXF1ZXN0NTcyOTM4MTYy | 1,873 | add iapp_wiki_qa_squad | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,223,267,000 | 1,613,485,318,000 | 1,613,485,318,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1873",
"html_url": "https://github.com/huggingface/datasets/pull/1873",
"diff_url": "https://github.com/huggingface/datasets/pull/1873.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1873.patch",
"merged_at": 1613485318000
} | `iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles.
It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset)
to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in
5761/742/739 (train/validation/test) questions from 1529/191/192 articles. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1873/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1872/comments | https://api.github.com/repos/huggingface/datasets/issues/1872/events | https://github.com/huggingface/datasets/issues/1872 | 807,711,935 | MDU6SXNzdWU4MDc3MTE5MzU= | 1,872 | Adding a new column to the dataset after set_format was called | {
"login": "villmow",
"id": 2743060,
"node_id": "MDQ6VXNlcjI3NDMwNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/villmow",
"html_url": "https://github.com/villmow",
"followers_url": "https://api.github.com/users/villmow/followers",
"following_url": "https://api.github.com/users/villmow/following{/other_user}",
"gists_url": "https://api.github.com/users/villmow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/villmow/subscriptions",
"organizations_url": "https://api.github.com/users/villmow/orgs",
"repos_url": "https://api.github.com/users/villmow/repos",
"events_url": "https://api.github.com/users/villmow/events{/privacy}",
"received_events_url": "https://api.github.com/users/villmow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi ! Indeed if you add a column to a formatted dataset, then the new dataset gets a new formatting in which:\r\n```\r\nnew formatted columns = (all columns - previously unformatted columns)\r\n```\r\nTherefore the new column is going to be formatted using the `torch` formatting.\r\n\r\nIf you want your new column to be unformatted you can re-run this line:\r\n```python\r\ndata.set_format(\"torch\", columns=[\"some_integer_column1\", \"some_integer_column2\"], output_all_columns=True)\r\n```",
"Hi, thanks that solved my problem. Maybe mention that in the documentation. ",
"Ok cool :) \r\nAlso I just did a PR to mention this behavior in the documentation",
"Closed by #1888"
] | 1,613,207,675,000 | 1,617,112,905,000 | 1,617,112,905,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Hi,
thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if it's a problem on my side.
I load some lists of strings and integers, then call `data.set_format("torch", columns=["some_integer_column1", "some_integer_column2"], output_all_columns=True)`. This converts the integer columns into tensors, but keeps the lists of strings as they are. I then call `map` to add a new column to my dataset, which is a **list of strings**. Once I iterate through my dataset, I get an error that the new column can't be converted into a tensor (which is probably caused by `set_format`).
Below is some pseudo code:
```python
from typing import Dict

import datasets

def augment_func(sample: Dict) -> Dict:
    # do something that produces `augmented_data` and `targets`
    return {
        "some_integer_column1": augmented_data["some_integer_column1"],  # <-- tensor
        "some_integer_column2": augmented_data["some_integer_column2"],  # <-- tensor
        "NEW_COLUMN": targets,  # <-- list of strings
    }

data = datasets.load_dataset(__file__, data_dir="...", split="train")
data.set_format("torch", columns=["some_integer_column1", "some_integer_column2"], output_all_columns=True)
augmented_dataset = data.map(augment_func, batched=False)

for sample in augmented_dataset:
    print(sample)  # fails
```
and the exception:
```python
Traceback (most recent call last):
File "dataset.py", line 487, in <module>
main()
File "dataset.py", line 471, in main
for sample in augmented_dataset:
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 697, in __iter__
yield self._getitem(
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1069, in _getitem
outputs = self._convert_outputs(
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 890, in _convert_outputs
v = map_nested(command, v, **map_nested_kwargs)
File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
return function(data_struct)
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in command
return [map_nested(command, i, **map_nested_kwargs) for i in x]
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in <listcomp>
return [map_nested(command, i, **map_nested_kwargs) for i in x]
File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
return function(data_struct)
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in command
return [map_nested(command, i, **map_nested_kwargs) for i in x]
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in <listcomp>
return [map_nested(command, i, **map_nested_kwargs) for i in x]
File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
return function(data_struct)
File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 851, in command
return torch.tensor(x, **format_kwargs)
TypeError: new(): invalid data type 'str'
```
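For reference, the workaround pointed out in the comments is to re-apply the formatting after `map`, so that the new string column stays unformatted:
```python
# re-run set_format on the mapped dataset so only the integer columns become tensors
augmented_dataset.set_format(
    "torch",
    columns=["some_integer_column1", "some_integer_column2"],
    output_all_columns=True,
)
```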
Thanks!
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1872/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/1872/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1871/comments | https://api.github.com/repos/huggingface/datasets/issues/1871/events | https://github.com/huggingface/datasets/pull/1871 | 807,697,671 | MDExOlB1bGxSZXF1ZXN0NTcyODk5Nzgz | 1,871 | Add newspop dataset | {
"login": "frankier",
"id": 299380,
"node_id": "MDQ6VXNlcjI5OTM4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankier",
"html_url": "https://github.com/frankier",
"followers_url": "https://api.github.com/users/frankier/followers",
"following_url": "https://api.github.com/users/frankier/following{/other_user}",
"gists_url": "https://api.github.com/users/frankier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankier/subscriptions",
"organizations_url": "https://api.github.com/users/frankier/orgs",
"repos_url": "https://api.github.com/users/frankier/repos",
"events_url": "https://api.github.com/users/frankier/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankier/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks for the changes :)\r\nmerging"
] | 1,613,201,483,000 | 1,615,198,365,000 | 1,615,198,365,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1871",
"html_url": "https://github.com/huggingface/datasets/pull/1871",
"diff_url": "https://github.com/huggingface/datasets/pull/1871.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1871.patch",
"merged_at": 1615198365000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1871/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1870/comments | https://api.github.com/repos/huggingface/datasets/issues/1870/events | https://github.com/huggingface/datasets/pull/1870 | 807,306,564 | MDExOlB1bGxSZXF1ZXN0NTcyNTc4Mjc4 | 1,870 | Implement Dataset add_item | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/3",
"html_url": "https://github.com/huggingface/datasets/milestone/3",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels",
"id": 6644287,
"node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==",
"number": 3,
"title": "1.7",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 3,
"state": "closed",
"created_at": 1617974191000,
"updated_at": 1622478053000,
"due_on": 1620975600000,
"closed_at": 1622478053000
} | [
"Thanks @lhoestq for your remarks. Yes, I agree there are still many issues to be tackled... This PR is just a starting point, so that we can discuss how Dataset should be generalized.",
"Sure ! I opened an issue #1877 so we can discuss this specific aspect :)",
"I am going to implement this consolidation step in #2151.",
"Sounds good !",
"I retake this PR once the consolidation step is already implemented by #2151."
] | 1,613,142,226,000 | 1,619,172,091,000 | 1,619,172,091,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1870",
"html_url": "https://github.com/huggingface/datasets/pull/1870",
"diff_url": "https://github.com/huggingface/datasets/pull/1870.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1870.patch",
"merged_at": 1619172090000
} | Implement `Dataset.add_item`.
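A minimal usage sketch (assuming the final API keeps this shape):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello"]})
# add_item appends a single example and returns a new dataset
ds = ds.add_item({"text": "world"})
print(len(ds))  # 2
```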
Close #1854. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1870/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1869/comments | https://api.github.com/repos/huggingface/datasets/issues/1869/events | https://github.com/huggingface/datasets/pull/1869 | 807,159,835 | MDExOlB1bGxSZXF1ZXN0NTcyNDU0NTMy | 1,869 | Remove outdated commands in favor of huggingface-cli | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,129,290,000 | 1,613,146,389,000 | 1,613,146,388,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1869",
"html_url": "https://github.com/huggingface/datasets/pull/1869",
"diff_url": "https://github.com/huggingface/datasets/pull/1869.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1869.patch",
"merged_at": 1613146388000
} | Removing the old user commands since `huggingface_hub` is going to be used instead.
cc @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1869/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1868/comments | https://api.github.com/repos/huggingface/datasets/issues/1868/events | https://github.com/huggingface/datasets/pull/1868 | 807,138,159 | MDExOlB1bGxSZXF1ZXN0NTcyNDM2MjA0 | 1,868 | Update oscar sizes | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,613,127,335,000 | 1,613,127,787,000 | 1,613,127,786,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1868",
"html_url": "https://github.com/huggingface/datasets/pull/1868",
"diff_url": "https://github.com/huggingface/datasets/pull/1868.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1868.patch",
"merged_at": 1613127786000
} | This commit https://github.com/huggingface/datasets/commit/837a152e4724adc5308e2c4481908c00a8d93383 removed empty lines from the oscar deduplicated datasets. This PR updates the size of each deduplicated dataset to fix possible `NonMatchingSplitsSizesError` errors. cc @cahya-wirawan | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1868/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1867/comments | https://api.github.com/repos/huggingface/datasets/issues/1867/events | https://github.com/huggingface/datasets/issues/1867 | 807,127,181 | MDU6SXNzdWU4MDcxMjcxODE= | 1,867 | ERROR WHEN USING SET_TRANSFORM() | {
"login": "alexvaca0",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexvaca0",
"html_url": "https://github.com/alexvaca0",
"followers_url": "https://api.github.com/users/alexvaca0/followers",
"following_url": "https://api.github.com/users/alexvaca0/following{/other_user}",
"gists_url": "https://api.github.com/users/alexvaca0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexvaca0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexvaca0/subscriptions",
"organizations_url": "https://api.github.com/users/alexvaca0/orgs",
"repos_url": "https://api.github.com/users/alexvaca0/repos",
"events_url": "https://api.github.com/users/alexvaca0/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexvaca0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @alejandrocros it looks like an incompatibility with the current Trainer @sgugger \r\nIndeed currently the Trainer of `transformers` doesn't support a dataset with a transform\r\n\r\nIt looks like it comes from this line: https://github.com/huggingface/transformers/blob/f51188cbe74195c14c5b3e2e8f10c2f435f9751a/src/transformers/trainer.py#L442\r\n\r\nThis line sets the format to not return certain unused columns. But this has two issues:\r\n1. it forgets to also set the format_kwargs (this causes the error you got):\r\n```python\r\ndataset.set_format(type=dataset.format[\"type\"], columns=columns, format_kwargs=dataset.format[\"format_kwargs\"])\r\n```\r\n2. the Trainer wants to keep only the fields that are used as input for a model. However for a dataset with a transform, the output fields are often different from the columns fields. For example from a column \"text\" in the dataset, the strings can be transformed on-the-fly into \"input_ids\". If you want your dataset to only output certain fields and not other you must change your transform function.\r\n",
"FYI that option can be removed with `remove_unused_columns = False` in your `TrainingArguments`, so there is a workaround @alexvaca0 while the fix in `Trainer` is underway.\r\n\r\n@lhoestq I think I will just use the line you suggested and if someone is using the columns that are removed in their transform they will need to change `remove_unused_columns` to `False`. We might switch the default of that argument in the next version if that proves too bug-proof.",
"I've tried your solutions @sgugger @lhoestq and the good news is that it throws no error. However, TPU training is taking forever, in 1 hour it has only trained 1 batch of 8192 elements, which doesn't make much sense... Is it possible that \"on the fly\" tokenization of batches is slowing down TPU training to that extent?",
"I'm pretty sure this is because of padding but @sgugger might know better",
"I don't know what the value of `padding` is in your lines of code pasted above so I can't say for sure. The first batch will be very slow on TPU since it compiles everything, so that's normal (1 hour is long but 8192 elements is also large). Then if your batches are not of the same lengths, it will recompile everything at each step instead of using the same graph, which will be very slow, so you should double check you are using padding to make everything the exact same shape. ",
"I have tried now on a GPU and it goes smooth! Amazing feature .set_transform() instead of .map()! Now I can pre-train my model without the hard disk limitation. Thanks for your work all HuggingFace team!! :clap: ",
"In the end, to make it work I turned to A-100 gpus instead of TPUS, among other changes. Set_transform doesn't work as expected and slows down training very much even in GPUs, and applying map destroys the disk, as it multiplies by 100 the size of the data passed to it (due to inefficient implementation converting strings to int64 floats I guess). For that reason, I chose to use datasets to load the data as text, and then edit the Collator from Transformers to tokenize every batch it receives before processing it. That way, I'm being able to train fast, without memory breaks, without the disk being unnecessarily filled, while making use of GPUs almost all the time I'm paying for them (the map function over the whole dataset took ~15hrs, in which you're not training at all). I hope this info helps others that are looking for training a language model from scratch cheaply, I'm going to close the issue as the optimal solution I found after many experiments to the problem posted in it is explained above. ",
"Great comment @alexvaca0 . I think that we could re-open the issue as a reformulation of why it takes so much space to save the arrow. Saving a 1% of oscar corpus takes more thank 600 GB (it breaks when it pass 600GB because it is the free memory that I have at this moment) when the full dataset is 1,3 TB. I have a 1TB M.2 NVMe disk that I can not train on because the saved .arrow files goes crazily big. If you can share your Collator I will be grateful. "
] | 1,613,126,311,000 | 1,614,607,464,000 | 1,614,168,043,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Hi, I'm trying to use `dataset.set_transform(encode)` as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such a dataset, it throws an error:
```
TypeError: __init__() missing 1 required positional argument: 'transform'
[INFO|trainer.py:357] 2021-02-12 10:18:09,893 >> The following columns in the training set don't have a corresponding argument in `AlbertForMaskedLM.forward` and have been ignored: text.
Exception in device=TPU:0: __init__() missing 1 required positional argument: 'transform'
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm_wwm.py", line 368, in _mp_fn
main()
File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm_wwm.py", line 332, in main
data_collator=data_collator,
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 286, in __init__
self._remove_unused_columns(self.train_dataset, description="training")
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 359, in _remove_unused_columns
dataset.set_format(type=dataset.format["type"], columns=columns)
File "/home/alejandro_vaca/datasets/src/datasets/fingerprint.py", line 312, in wrapper
out = func(self, *args, **kwargs)
File "/home/alejandro_vaca/datasets/src/datasets/arrow_dataset.py", line 818, in set_format
_ = get_formatter(type, **format_kwargs)
File "/home/alejandro_vaca/datasets/src/datasets/formatting/__init__.py", line 112, in get_formatter
return _FORMAT_TYPES[format_type](**format_kwargs)
TypeError: __init__() missing 1 required positional argument: 'transform'
```
The code I'm using:
```python
def tokenize_function(examples):
    # Remove empty lines
    examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()]
    return tokenizer(examples["text"], padding=padding, truncation=True, max_length=data_args.max_seq_length)

datasets.set_transform(tokenize_function)

data_collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=data_args.mlm_probability)

# Initialize our Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=datasets["train"] if training_args.do_train else None,
    eval_dataset=datasets["val"] if training_args.do_eval else None,
    tokenizer=tokenizer,
    data_collator=data_collator,
)
```
I've installed from source, master branch.
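A workaround mentioned in the comments is to stop the Trainer from dropping unused columns, which avoids the `set_format` call on the transformed dataset:
```python
from transformers import TrainingArguments

# remove_unused_columns=False keeps Trainer from re-calling set_format on the dataset
training_args = TrainingArguments(
    output_dir="output",  # hypothetical output directory
    remove_unused_columns=False,
)
```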
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1867/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1866/comments | https://api.github.com/repos/huggingface/datasets/issues/1866/events | https://github.com/huggingface/datasets/pull/1866 | 807,017,816 | MDExOlB1bGxSZXF1ZXN0NTcyMzM3NDQ1 | 1,866 | Add dataset for Financial PhraseBank | {
"login": "frankier",
"id": 299380,
"node_id": "MDQ6VXNlcjI5OTM4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankier",
"html_url": "https://github.com/frankier",
"followers_url": "https://api.github.com/users/frankier/followers",
"following_url": "https://api.github.com/users/frankier/following{/other_user}",
"gists_url": "https://api.github.com/users/frankier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankier/subscriptions",
"organizations_url": "https://api.github.com/users/frankier/orgs",
"repos_url": "https://api.github.com/users/frankier/repos",
"events_url": "https://api.github.com/users/frankier/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankier/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks for the feedback. All accepted and metadata regenerated."
] | 1,613,115,056,000 | 1,613,571,756,000 | 1,613,571,756,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1866",
"html_url": "https://github.com/huggingface/datasets/pull/1866",
"diff_url": "https://github.com/huggingface/datasets/pull/1866.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1866.patch",
"merged_at": 1613571756000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1866/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1865 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1865/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1865/comments | https://api.github.com/repos/huggingface/datasets/issues/1865/events | https://github.com/huggingface/datasets/pull/1865 | 806,388,290 | MDExOlB1bGxSZXF1ZXN0NTcxODE2ODI2 | 1,865 | Updated OPUS Open Subtitles Dataset with metadata information | {
"login": "Valahaar",
"id": 19476123,
"node_id": "MDQ6VXNlcjE5NDc2MTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Valahaar",
"html_url": "https://github.com/Valahaar",
"followers_url": "https://api.github.com/users/Valahaar/followers",
"following_url": "https://api.github.com/users/Valahaar/following{/other_user}",
"gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions",
"organizations_url": "https://api.github.com/users/Valahaar/orgs",
"repos_url": "https://api.github.com/users/Valahaar/repos",
"events_url": "https://api.github.com/users/Valahaar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Valahaar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi !\r\nAbout the problems you mentioned:\r\n- Saving the infos is only done for the configurations inside the BUILDER_CONFIGS. Otherwise you would need to run the scripts on ALL language pairs, which is not what we want.\r\n- Moreover when you're on your branch, please specify the path to your local version of the dataset script, like \"./datasets/open_subtitles\". Otherwise the dataset is loaded from the master branch on github.\r\nHope that clarifies things a bit\r\n\r\nAnd of course feel free to add methods or classmethods to your builder.\r\n",
"Great! Thank you :)\r\nI'll close the issue as well."
] | 1,613,049,986,000 | 1,613,738,289,000 | 1,613,149,184,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1865",
"html_url": "https://github.com/huggingface/datasets/pull/1865",
"diff_url": "https://github.com/huggingface/datasets/pull/1865.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1865.patch",
"merged_at": 1613149184000
} | Close #1844
Problems:
- I ran `python datasets-cli test datasets/open_subtitles --save_infos --all_configs`, hence the change in `dataset_infos.json`, but it appears that the metadata features have not been added for all pairs. Any idea why that might be?
- Possibly related to the above, I tried doing `pip uninstall datasets && pip install -e ".[dev]"` after the changes, and loading the dataset via `load_dataset("open_subtitles", lang1='hi', lang2='it')` to check if the update worked, but the loaded dataset did not contain the metadata fields (neither in the features nor when doing `next(iter(dataset['train']))`). What step(s) did I miss?
Questions:
- Is it ok to have a `classmethod` in there? I have not seen any in the few other datasets I have checked. I could make it a local method of the `_generate_examples` method, but I'd rather not duplicate the logic... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1865/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1864 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1864/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1864/comments | https://api.github.com/repos/huggingface/datasets/issues/1864/events | https://github.com/huggingface/datasets/issues/1864 | 806,172,843 | MDU6SXNzdWU4MDYxNzI4NDM= | 1,864 | Add Winogender Schemas | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Nevermind, this one is already available on the hub under the name `'wino_bias'`: https://huggingface.co/datasets/wino_bias"
] | 1,613,031,518,000 | 1,613,031,591,000 | 1,613,031,591,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Adding a Dataset
- **Name:** Winogender Schemas
- **Description:** Winogender Schemas (inspired by Winograd Schemas) are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias in automated coreference resolution systems.
- **Paper:** https://arxiv.org/abs/1804.09301
- **Data:** https://github.com/rudinger/winogender-schemas (see data directory)
- **Motivation:** Testing gender bias in automated coreference resolution systems, improve coreference resolution in general.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1864/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1863/comments | https://api.github.com/repos/huggingface/datasets/issues/1863/events | https://github.com/huggingface/datasets/issues/1863 | 806,171,311 | MDU6SXNzdWU4MDYxNzEzMTE= | 1,863 | Add WikiCREM | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @NielsRogge I would like to work on this dataset.\r\n\r\nThanks!",
"Hi @udapy, are you working on this?"
] | 1,613,031,360,000 | 1,615,102,033,000 | null | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Adding a Dataset
- **Name:** WikiCREM
- **Description:** A large unsupervised corpus for coreference resolution.
- **Paper:** https://arxiv.org/abs/1905.06290
- **Github repo:**: https://github.com/vid-koci/bert-commonsense
- **Data:** https://ora.ox.ac.uk/objects/uuid:c83e94bb-7584-41a1-aef9-85b0e764d9e3
- **Motivation:** Coreference resolution, common sense reasoning
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1863/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1862 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1862/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1862/comments | https://api.github.com/repos/huggingface/datasets/issues/1862/events | https://github.com/huggingface/datasets/pull/1862 | 805,722,293 | MDExOlB1bGxSZXF1ZXN0NTcxMjc2ODAx | 1,862 | Fix writing GPU Faiss index | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,612,978,323,000 | 1,612,981,068,000 | 1,612,981,067,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1862",
"html_url": "https://github.com/huggingface/datasets/pull/1862",
"diff_url": "https://github.com/huggingface/datasets/pull/1862.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1862.patch",
"merged_at": 1612981067000
} | As reported by @corticalstack, there is currently an error when we try to save a faiss index on GPU.
I fixed that by checking the index's `getDevice()` method before calling `index_gpu_to_cpu`.
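For readers landing here, a minimal sketch of that idea (not necessarily the exact merged code; the real fix may handle more cases):
```python
import faiss

def save_faiss_index(index, file: str):
    # GPU indexes expose a getDevice() method (CPU indexes don't);
    # move the index to CPU before serializing it.
    if hasattr(index, "getDevice") and index.getDevice() > -1:
        index = faiss.index_gpu_to_cpu(index)
    faiss.write_index(index, file)
```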
Close #1859 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1862/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1861/comments | https://api.github.com/repos/huggingface/datasets/issues/1861/events | https://github.com/huggingface/datasets/pull/1861 | 805,631,215 | MDExOlB1bGxSZXF1ZXN0NTcxMjAwNjA1 | 1,861 | Fix Limit url | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,612,971,896,000 | 1,612,973,700,000 | 1,612,973,699,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1861",
"html_url": "https://github.com/huggingface/datasets/pull/1861",
"diff_url": "https://github.com/huggingface/datasets/pull/1861.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1861.patch",
"merged_at": 1612973698000
} | The test.json file of the Literal-Motion-in-Text (LiMiT) dataset was removed recently on the master branch of the repo at https://github.com/ilmgut/limit_dataset
This PR uses the previous commit sha to download the file instead, as suggested by @Paethon
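For illustration, pinning a download URL to a specific commit rather than a branch looks roughly like this (the SHA and file path below are placeholders, not the ones used in this PR):
```python
# Placeholder values for illustration only.
_COMMIT_SHA = "0123456789abcdef0123456789abcdef01234567"
_URL = f"https://raw.githubusercontent.com/ilmgut/limit_dataset/{_COMMIT_SHA}/test.json"
```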
Close #1836 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1861/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1861/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1860/comments | https://api.github.com/repos/huggingface/datasets/issues/1860/events | https://github.com/huggingface/datasets/pull/1860 | 805,510,037 | MDExOlB1bGxSZXF1ZXN0NTcxMDk4OTIz | 1,860 | Add loading from the Datasets Hub + add relative paths in download manager | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"I just added the steps to share a dataset on the datasets hub. It's highly inspired by the steps to share a model in the `transformers` doc.\r\n\r\nMoreover once the new huggingface_hub is released we can update the version in the setup.py. We also need to update the command to create a dataset repo in the documentation\r\n\r\nI added a few more tests with the \"lhoestq/test\" dataset I added on the hub and it works fine :) ",
"Here is the PR adding support for datasets repos in `huggingface_hub`: https://github.com/huggingface/huggingface_hub/pull/14"
] | 1,612,963,451,000 | 1,613,157,210,000 | 1,613,157,209,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1860",
"html_url": "https://github.com/huggingface/datasets/pull/1860",
"diff_url": "https://github.com/huggingface/datasets/pull/1860.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1860.patch",
"merged_at": 1613157209000
} | With the new Datasets Hub on huggingface.co it's now possible to have a dataset repo with your own script and data.
For example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main contains one script and two json files.
You can load it using
```python
from datasets import load_dataset
d = load_dataset("lhoestq/custom_squad")
```
To be able to use the data files that live right next to the dataset script in the repo on the hub, I added relative-path support to the DownloadManager. For example, in the repo mentioned above, there are two json files that can be downloaded via
```python
_URLS = {
"train": "train-v1.1.json",
"dev": "dev-v1.1.json",
}
downloaded_files = dl_manager.download_and_extract(_URLS)
```
To make it work, I set the `base_path` of the DownloadManager to be the parent path of the dataset script (which comes from either a local path or a remote url).
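Conceptually, the resolution works roughly like this (a sketch, not the actual implementation):
```python
import os
from urllib.parse import urljoin

def resolve_relative_path(base_path: str, rel_path: str) -> str:
    # base_path is the parent of the dataset script: either a local
    # directory or a remote URL (e.g. the dataset repo on the hub).
    if base_path.startswith(("http://", "https://")):
        return urljoin(base_path.rstrip("/") + "/", rel_path)
    return os.path.join(base_path, rel_path)
```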
I also had to add the auth header of the requests to huggingface.co for private datasets repos. The token is fetched from [huggingface_hub](https://github.com/huggingface/huggingface_hub). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1860/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1859/comments | https://api.github.com/repos/huggingface/datasets/issues/1859/events | https://github.com/huggingface/datasets/issues/1859 | 805,479,025 | MDU6SXNzdWU4MDU0NzkwMjU= | 1,859 | Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU) | {
"login": "corticalstack",
"id": 3995321,
"node_id": "MDQ6VXNlcjM5OTUzMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3995321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/corticalstack",
"html_url": "https://github.com/corticalstack",
"followers_url": "https://api.github.com/users/corticalstack/followers",
"following_url": "https://api.github.com/users/corticalstack/following{/other_user}",
"gists_url": "https://api.github.com/users/corticalstack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/corticalstack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/corticalstack/subscriptions",
"organizations_url": "https://api.github.com/users/corticalstack/orgs",
"repos_url": "https://api.github.com/users/corticalstack/repos",
"events_url": "https://api.github.com/users/corticalstack/events{/privacy}",
"received_events_url": "https://api.github.com/users/corticalstack/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @corticalstack ! Thanks for reporting. Indeed in the recent versions of Faiss we must use `getDevice` to check if the index in on GPU.\r\n\r\nI'm opening a PR",
"I fixed this issue. It should work fine now.\r\nFeel free to try it out by installing `datasets` from source.\r\nOtherwise you can wait for the next release of `datasets` (in a few days)",
"Thanks for such a quick fix and merge to master, pip installed git master, tested all OK"
] | 1,612,960,860,000 | 1,612,981,932,000 | 1,612,981,067,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Error serializing faiss index. Error as follows:
`Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index`
Note:
`torch.cuda.is_available()` reports:
```
Cuda is available
cuda:0
```
Adding index, device=0 for GPU.
`dataset.add_faiss_index(column='embeddings', index_name='idx_embeddings', device=0)`
However, during a quick debug, `self.faiss_index` has no attribute `device` when checked in `search.py`, method `save`, so it fails to transform the GPU index to a CPU index. If I add the index without a device, the index is saved OK.
```python
def save(self, file: str):
    """Serialize the FaissIndex on disk"""
    import faiss  # noqa: F811

    if (
        hasattr(self.faiss_index, "device")
        and self.faiss_index.device is not None
        and self.faiss_index.device > -1
    ):
        # A GPU index has no `device` attribute (it exposes getDevice()
        # instead), so this branch is never taken for GPU indexes...
        index = faiss.index_gpu_to_cpu(self.faiss_index)
    else:
        # ...and write_index then receives the GPU index, which it
        # doesn't know how to serialize.
        index = self.faiss_index
    faiss.write_index(index, file)
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1859/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1858 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1858/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1858/comments | https://api.github.com/repos/huggingface/datasets/issues/1858/events | https://github.com/huggingface/datasets/pull/1858 | 805,477,774 | MDExOlB1bGxSZXF1ZXN0NTcxMDcxNzIx | 1,858 | Clean config getenvs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,612,960,754,000 | 1,612,972,350,000 | 1,612,972,349,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1858",
"html_url": "https://github.com/huggingface/datasets/pull/1858",
"diff_url": "https://github.com/huggingface/datasets/pull/1858.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1858.patch",
"merged_at": 1612972349000
} | Following #1848
Remove double getenv calls and fix one issue with rarfile
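An illustration of the pattern being cleaned up (variable name is hypothetical, not the PR's actual diff):
```python
import os

# Before: the same environment variable read twice.
# enabled = os.getenv("SOME_FLAG", "AUTO") and os.getenv("SOME_FLAG", "AUTO").upper() != "0"

# After: read it once and reuse the value ("SOME_FLAG" is a hypothetical name).
some_flag = os.getenv("SOME_FLAG", "AUTO")
enabled = some_flag.upper() != "0"
```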
cc @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1858/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1858/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1857/comments | https://api.github.com/repos/huggingface/datasets/issues/1857/events | https://github.com/huggingface/datasets/issues/1857 | 805,391,107 | MDU6SXNzdWU4MDUzOTExMDc= | 1,857 | Unable to upload "community provided" dataset - 400 Client Error | {
"login": "mwrzalik",
"id": 1376337,
"node_id": "MDQ6VXNlcjEzNzYzMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1376337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mwrzalik",
"html_url": "https://github.com/mwrzalik",
"followers_url": "https://api.github.com/users/mwrzalik/followers",
"following_url": "https://api.github.com/users/mwrzalik/following{/other_user}",
"gists_url": "https://api.github.com/users/mwrzalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mwrzalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mwrzalik/subscriptions",
"organizations_url": "https://api.github.com/users/mwrzalik/orgs",
"repos_url": "https://api.github.com/users/mwrzalik/repos",
"events_url": "https://api.github.com/users/mwrzalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/mwrzalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi ! We're in the process of switching the community datasets to git repos, exactly like what we're doing for models.\r\nYou can find an example here:\r\nhttps://huggingface.co/datasets/lhoestq/custom_squad/tree/main\r\n\r\nWe'll update the CLI in the coming days and do a new release :)\r\n\r\nAlso cc @julien-c maybe we can make improve the error message ?"
] | 1,612,953,541,000 | 1,627,967,173,000 | 1,627,967,173,000 | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Hi,
I'm trying to upload a dataset as described [here](https://huggingface.co/docs/datasets/v1.2.0/share_dataset.html#sharing-a-community-provided-dataset). This is what happens:
```
$ datasets-cli login
$ datasets-cli upload_dataset my_dataset
About to upload file /path/to/my_dataset/dataset_infos.json to S3 under filename my_dataset/dataset_infos.json and namespace username
About to upload file /path/to/my_dataset/my_dataset.py to S3 under filename my_dataset/my_dataset.py and namespace username
Proceed? [Y/n] Y
Uploading... This might take a while if files are large
400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/presign
huggingface.co migrated to a new model hosting system.
You need to upgrade to transformers v3.5+ to upload new models.
More info at https://discuss.hugginface.co or https://twitter.com/julien_c. Thank you!
```
I'm using the latest releases of datasets and transformers. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1857/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1856/comments | https://api.github.com/repos/huggingface/datasets/issues/1856/events | https://github.com/huggingface/datasets/issues/1856 | 805,360,200 | MDU6SXNzdWU4MDUzNjAyMDA= | 1,856 | load_dataset("amazon_polarity") NonMatchingChecksumError | {
"login": "yanxi0830",
"id": 19946372,
"node_id": "MDQ6VXNlcjE5OTQ2Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/19946372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanxi0830",
"html_url": "https://github.com/yanxi0830",
"followers_url": "https://api.github.com/users/yanxi0830/followers",
"following_url": "https://api.github.com/users/yanxi0830/following{/other_user}",
"gists_url": "https://api.github.com/users/yanxi0830/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanxi0830/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanxi0830/subscriptions",
"organizations_url": "https://api.github.com/users/yanxi0830/orgs",
"repos_url": "https://api.github.com/users/yanxi0830/repos",
"events_url": "https://api.github.com/users/yanxi0830/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanxi0830/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi ! This issue may be related to #996 \r\nThis comes probably from the Quota Exceeded error from Google Drive.\r\nCan you try again tomorrow and see if you still have the error ?\r\n\r\nOn my side I didn't get any error today with `load_dataset(\"amazon_polarity\")`",
"+1 encountering this issue as well",
"@lhoestq Hi! I encounter the same error when loading `yelp_review_full`.\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset_yp = load_dataset(\"yelp_review_full\")\r\n```\r\n\r\nWhen you say the \"Quota Exceeded from Google drive\". Is this a quota from the dataset owner? or the quota from our (the runner) Google Drive?",
"+1 Also encountering this issue",
"> When you say the \"Quota Exceeded from Google drive\". Is this a quota from the dataset owner? or the quota from our (the runner) Google Drive?\r\n\r\nEach file on Google Drive can be downloaded only a certain amount of times per day because of a quota. The quota is reset every day. So if too many people download the dataset the same day, then the quota is likely to exceed.\r\nThat's a really bad limitations of Google Drive and we should definitely find another host for these dataset than Google Drive.\r\nFor now I would suggest to wait and try again later..\r\n\r\nSo far the issue happened with CNN DailyMail, Amazon Polarity and Yelp Reviews. \r\nAre you experiencing the issue with other datasets ? @calebchiam @dtch1997 ",
"@lhoestq Gotcha, that is quite problematic...for what it's worth, I've had no issues with the other datasets I tried, such as `yelp_reviews_full` and `amazon_reviews_multi`.",
"Same issue today with \"big_patent\", though the symptoms are slightly different.\r\n\r\nWhen running\r\n\r\n```py\r\nfrom datasets import load_dataset\r\nload_dataset(\"big_patent\", split=\"validation\")\r\n```\r\n\r\nI get the following\r\n`FileNotFoundError: Local file \\huggingface\\datasets\\downloads\\6159313604f4f2c01e7d1cac52139343b6c07f73f6de348d09be6213478455c5\\bigPatentData\\train.tar.gz doesn't exist`\r\n\r\nI had to look into `6159313604f4f2c01e7d1cac52139343b6c07f73f6de348d09be6213478455c5` (which is a file instead of a folder) and got the following:\r\n\r\n`<!DOCTYPE html><html><head><title>Google Drive - Quota exceeded</title><meta http-equiv=\"content-type\" content=\"text/html; charset=utf-8\"/><link href=/static/doclist/client/css/4033072956-untrustedcontent.css rel=\"stylesheet\" nonce=\"JV0t61Smks2TEKdFCGAUFA\"><link rel=\"icon\" href=\"//ssl.gstatic.com/images/branding/product/1x/drive_2020q4_32dp.png\"/><style nonce=\"JV0t61Smks2TEKdFCGAUFA\">#gbar,#guser{font-size:13px;padding-top:0px !important;}#gbar{height:22px}#guser{padding-bottom:7px !important;text-align:right}.gbh,.gbd{border-top:1px solid #c9d7f1;font-size:1px}.gbh{height:0;position:absolute;top:24px;width:100%}@media all{.gb1{height:22px;margin-right:.5em;vertical-align:top}#gbar{float:left}}a.gb1,a.gb4{text-decoration:underline !important}a.gb1,a.gb4{color:#00c !important}.gbi .gb4{color:#dd8e27 !important}.gbf .gb4{color:#900 !important}\r\n</style><script nonce=\"iNUHigT+ENVQ3UZrLkFtRw\"></script></head><body><div id=gbar><nobr><a target=_blank class=gb1 href=\"https://www.google.fr/webhp?tab=ow\">Search</a> <a target=_blank class=gb1 href=\"http://www.google.fr/imghp?hl=en&tab=oi\">Images</a> <a target=_blank class=gb1 href=\"https://maps.google.fr/maps?hl=en&tab=ol\">Maps</a> <a target=_blank class=gb1 href=\"https://play.google.com/?hl=en&tab=o8\">Play</a> <a target=_blank class=gb1 href=\"https://www.youtube.com/?gl=FR&tab=o1\">YouTube</a> <a target=_blank class=gb1 href=\"https://news.google.com/?tab=on\">News</a> <a target=_blank class=gb1 href=\"https://mail.google.com/mail/?tab=om\">Gmail</a> <b class=gb1>Drive</b> <a target=_blank class=gb1 style=\"text-decoration:none\" href=\"https://www.google.fr/intl/en/about/products?tab=oh\"><u>More</u> »</a></nobr></div><div id=guser width=100%><nobr><span id=gbn class=gbi></span><span id=gbf class=gbf></span><span id=gbe></span><a target=\"_self\" href=\"/settings?hl=en_US\" class=gb4>Settings</a> | <a target=_blank href=\"//support.google.com/drive/?p=web_home&hl=en_US\" class=gb4>Help</a> | <a target=_top id=gb_70 href=\"https://accounts.google.com/ServiceLogin?hl=en&passive=true&continue=https://drive.google.com/uc%3Fexport%3Ddownload%26id%3D1J3mucMFTWrgAYa3LuBZoLRR3CzzYD3fa&service=writely&ec=GAZAMQ\" class=gb4>Sign in</a></nobr></div><div class=gbh style=left:0></div><div class=gbh style=right:0></div><div class=\"uc-main\"><div id=\"uc-text\"><p class=\"uc-error-caption\">Sorry, you can't view or download this file at this time.</p><p class=\"uc-error-subcaption\">Too many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. 
If you still can't access a file after 24 hours, contact your domain administrator.</p></div></div><div class=\"uc-footer\"><hr class=\"uc-footer-divider\">© 2021 Google - <a class=\"goog-link\" href=\"//support.google.com/drive/?p=web_home\">Help</a> - <a class=\"goog-link\" href=\"//support.google.com/drive/bin/answer.py?hl=en_US&answer=2450387\">Privacy & Terms</a></div></body></html>`",
"A similar issue arises when trying to stream the dataset\r\n\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> iter_dset = load_dataset(\"amazon_polarity\", split=\"test\", streaming=True)\r\n>>> iter(iter_dset).__next__()\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n~\\lib\\tarfile.py in nti(s)\r\n 186 s = nts(s, \"ascii\", \"strict\")\r\n--> 187 n = int(s.strip() or \"0\", 8)\r\n 188 except ValueError:\r\n\r\nValueError: invalid literal for int() with base 8: 'e nonce='\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nInvalidHeaderError Traceback (most recent call last)\r\n~\\lib\\tarfile.py in next(self)\r\n 2288 try:\r\n-> 2289 tarinfo = self.tarinfo.fromtarfile(self)\r\n 2290 except EOFHeaderError as e:\r\n\r\n~\\lib\\tarfile.py in fromtarfile(cls, tarfile)\r\n 1094 buf = tarfile.fileobj.read(BLOCKSIZE)\r\n-> 1095 obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)\r\n 1096 obj.offset = tarfile.fileobj.tell() - BLOCKSIZE\r\n\r\n~\\lib\\tarfile.py in frombuf(cls, buf, encoding, errors)\r\n 1036\r\n-> 1037 chksum = nti(buf[148:156])\r\n 1038 if chksum not in calc_chksums(buf):\r\n\r\n~\\lib\\tarfile.py in nti(s)\r\n 188 except ValueError:\r\n--> 189 raise InvalidHeaderError(\"invalid header\")\r\n 190 return n\r\n\r\nInvalidHeaderError: invalid header\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nReadError Traceback (most recent call last)\r\n<ipython-input-5-6b9058341b2b> in <module>\r\n----> 1 iter(iter_dset).__next__()\r\n\r\n~\\lib\\site-packages\\datasets\\iterable_dataset.py in __iter__(self)\r\n 363\r\n 364 def __iter__(self):\r\n--> 365 for key, example in self._iter():\r\n 366 if self.features:\r\n 367 # we encode the example for ClassLabel feature types for example\r\n\r\n~\\lib\\site-packages\\datasets\\iterable_dataset.py in _iter(self)\r\n 360 else:\r\n 361 ex_iterable = self._ex_iterable\r\n--> 362 yield from ex_iterable\r\n 363\r\n 364 def __iter__(self):\r\n\r\n~\\lib\\site-packages\\datasets\\iterable_dataset.py in __iter__(self)\r\n 77\r\n 78 def __iter__(self):\r\n---> 79 yield from self.generate_examples_fn(**self.kwargs)\r\n 80\r\n 81 def shuffle_data_sources(self, seed: Optional[int]) -> \"ExamplesIterable\":\r\n\r\n~\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\amazon_polarity\\56923eeb72030cb6c4ea30c8a4e1162c26b25973475ac1f44340f0ec0f2936f4\\amazon_polarity.py in _generate_examples(self, filepath, files)\r\n 114 def _generate_examples(self, filepath, files):\r\n 115 \"\"\"Yields examples.\"\"\"\r\n--> 116 for path, f in files:\r\n 117 if path == filepath:\r\n 118 lines = (line.decode(\"utf-8\") for line in f)\r\n\r\n~\\lib\\site-packages\\datasets\\utils\\streaming_download_manager.py in __iter__(self)\r\n 616\r\n 617 def __iter__(self):\r\n--> 618 yield from self.generator(*self.args, **self.kwargs)\r\n 619\r\n 620\r\n\r\n~\\lib\\site-packages\\datasets\\utils\\streaming_download_manager.py in _iter_from_urlpath(cls, urlpath, use_auth_token)\r\n 644 ) -> Generator[Tuple, None, None]:\r\n 645 with xopen(urlpath, \"rb\", use_auth_token=use_auth_token) as f:\r\n--> 646 yield from cls._iter_from_fileobj(f)\r\n 647\r\n 648 @classmethod\r\n\r\n~\\lib\\site-packages\\datasets\\utils\\streaming_download_manager.py in _iter_from_fileobj(cls, f)\r\n 624 @classmethod\r\n 625 def _iter_from_fileobj(cls, f) -> Generator[Tuple, None, None]:\r\n--> 626 stream = tarfile.open(fileobj=f, 
mode=\"r|*\")\r\n 627 for tarinfo in stream:\r\n 628 file_path = tarinfo.name\r\n\r\n~\\lib\\tarfile.py in open(cls, name, mode, fileobj, bufsize, **kwargs)\r\n 1603 stream = _Stream(name, filemode, comptype, fileobj, bufsize)\r\n 1604 try:\r\n-> 1605 t = cls(name, filemode, stream, **kwargs)\r\n 1606 except:\r\n 1607 stream.close()\r\n\r\n~\\lib\\tarfile.py in __init__(self, name, mode, fileobj, format, tarinfo, dereference, ignore_zeros, encoding, errors, pax_headers, debug, errorlevel, copybufsize)\r\n 1484 if self.mode == \"r\":\r\n 1485 self.firstmember = None\r\n-> 1486 self.firstmember = self.next()\r\n 1487\r\n 1488 if self.mode == \"a\":\r\n\r\n~\\lib\\tarfile.py in next(self)\r\n 2299 continue\r\n 2300 elif self.offset == 0:\r\n-> 2301 raise ReadError(str(e))\r\n 2302 except EmptyHeaderError:\r\n 2303 if self.offset == 0:\r\n\r\nReadError: invalid header\r\n\r\n```"
] | 1,612,951,256,000 | 1,645,113,520,000 | null | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError.
To reproduce:
```
load_dataset("amazon_polarity")
```
This will give the following error:
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-3-8559a03fe0f8> in <module>()
----> 1 dataset = load_dataset("amazon_polarity")
3 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?id=0Bz8a_Dbh9QhbaW12WVVZS2drcnM&export=download']
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1856/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1855/comments | https://api.github.com/repos/huggingface/datasets/issues/1855/events | https://github.com/huggingface/datasets/pull/1855 | 805,256,579 | MDExOlB1bGxSZXF1ZXN0NTcwODkzNDY3 | 1,855 | Minor fix in the docs | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,612,942,063,000 | 1,612,960,389,000 | 1,612,960,389,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1855",
"html_url": "https://github.com/huggingface/datasets/pull/1855",
"diff_url": "https://github.com/huggingface/datasets/pull/1855.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1855.patch",
"merged_at": 1612960389000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1855/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1854 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1854/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1854/comments | https://api.github.com/repos/huggingface/datasets/issues/1854/events | https://github.com/huggingface/datasets/issues/1854 | 805,204,397 | MDU6SXNzdWU4MDUyMDQzOTc= | 1,854 | Feature Request: Dataset.add_item | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @sshleifer.\r\n\r\nI am not sure of understanding the need of the `add_item` approach...\r\n\r\nBy just reading your \"Desired API\" section, I would say you could (nearly) get it with a 1-column Dataset:\r\n```python\r\ndata = {\"input_ids\": [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])]}\r\nds = Dataset.from_dict(data)\r\nassert (ds[\"input_ids\"][0] == np.array([4,4,2])).all()\r\n```",
"Hi @sshleifer :) \r\n\r\nWe don't have methods like `Dataset.add_batch` or `Dataset.add_entry/add_item` yet.\r\nBut that's something we'll add pretty soon. Would an API that looks roughly like this help ? Do you have suggestions ?\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset\r\n\r\ntokenized = [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])\r\n\r\n# API suggestion (not available yet)\r\nd = Dataset()\r\nfor input_ids in tokenized:\r\n d.add_item({\"input_ids\": input_ids})\r\n\r\nprint(d[0][\"input_ids\"])\r\n# [4, 4, 2]\r\n```\r\n\r\nCurrently you can define a dataset with what @albertvillanova suggest, or via a generator using dataset builders. It's also possible to [concatenate datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datasets).",
"Your API looks perfect @lhoestq, thanks!"
] | 1,612,937,160,000 | 1,619,172,090,000 | 1,619,172,090,000 | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the text into arrow, and then `dataset.map(binarizer)`.
Is this possible at the moment? Is there an example? I'm happy to use raw `pa.Table` but not sure whether it will support uneven length entries.
### Desired API
```python
import numpy as np
from typing import List
from datasets import Dataset

tokenized: List[np.ndarray] = [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])]

def build_dataset_from_tokenized(tokenized: List[np.ndarray]) -> Dataset:
    """FIXME"""
    dataset = EmptyDataset()  # placeholder: no such class exists yet
    for t in tokenized:
        dataset.append(t)
    return dataset
ds = build_dataset_from_tokenized(tokenized)
assert (ds[0] == np.array([4,4,2])).all()
```
### What I tried
grep, google for "add one entry at a time", "datasets.append"
### Current Code
This code achieves the same result but doesn't fit into the `add_item` abstraction.
```python
from datasets import load_dataset
from transformers import RobertaTokenizerFast

overwrite_cache = False  # flag assumed from the author's surrounding script

dataset = load_dataset('text', data_files={'train': 'train.txt'})
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_length=4096)

def tokenize_function(examples):
    ids = tokenizer(examples['text'], return_attention_mask=False)['input_ids']
    return {'input_ids': [x[1:] for x in ids]}

ds = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=['text'], load_from_cache_file=not overwrite_cache)
print(ds['train'][0])  # => {'input_ids': [...]}
```
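For reference, a minimal sketch of the workaround suggested in the comments above, using `Dataset.from_dict` (the helper name is mine, not an existing API):

```python
import numpy as np
from datasets import Dataset

def build_dataset_incrementally(tokenized):
    # One-shot construction from already-tokenized arrays; Arrow stores
    # variable-length lists, so uneven-length entries are fine.
    return Dataset.from_dict({"input_ids": [t.tolist() for t in tokenized]})

ds = build_dataset_incrementally([np.array([4, 4, 2]), np.array([8, 6, 5, 5, 2])])
assert ds[0]["input_ids"] == [4, 4, 2]
```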
Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1854/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1853/comments | https://api.github.com/repos/huggingface/datasets/issues/1853/events | https://github.com/huggingface/datasets/pull/1853 | 804,791,166 | MDExOlB1bGxSZXF1ZXN0NTcwNTAwMjc4 | 1,853 | Configure library root logger at the module level | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,612,894,272,000 | 1,612,960,354,000 | 1,612,960,354,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1853",
"html_url": "https://github.com/huggingface/datasets/pull/1853",
"diff_url": "https://github.com/huggingface/datasets/pull/1853.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1853.patch",
"merged_at": 1612960354000
} | Configure library root logger at the datasets.logging module level (singleton-like).
By doing it this way:
- we are sure configuration is done only once: module-level code is run only once
- no need for a global variable
- no need for a threading lock (a minimal sketch of the pattern follows)
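A minimal sketch of the module-level pattern, with illustrative names (not necessarily the exact `datasets.logging` internals):

```python
import logging

# Illustrative sketch, not the actual datasets.logging source.
# Module-level statements execute exactly once, on first import, so the
# library root logger is configured without a global flag or a lock.
_library_root_logger = logging.getLogger("datasets")
_library_root_logger.addHandler(logging.NullHandler())


def get_logger(name=None):
    """Return a child of the already-configured library root logger."""
    return logging.getLogger(name) if name else _library_root_logger
```

(Names above are illustrative only.) | {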
"url": "https://api.github.com/repos/huggingface/datasets/issues/1853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1853/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1852/comments | https://api.github.com/repos/huggingface/datasets/issues/1852/events | https://github.com/huggingface/datasets/pull/1852 | 804,633,033 | MDExOlB1bGxSZXF1ZXN0NTcwMzY3NTU1 | 1,852 | Add Arabic Speech Corpus | {
"login": "zaidalyafeai",
"id": 15667714,
"node_id": "MDQ6VXNlcjE1NjY3NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zaidalyafeai",
"html_url": "https://github.com/zaidalyafeai",
"followers_url": "https://api.github.com/users/zaidalyafeai/followers",
"following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}",
"gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions",
"organizations_url": "https://api.github.com/users/zaidalyafeai/orgs",
"repos_url": "https://api.github.com/users/zaidalyafeai/repos",
"events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}",
"received_events_url": "https://api.github.com/users/zaidalyafeai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,612,882,946,000 | 1,613,038,735,000 | 1,613,038,735,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1852",
"html_url": "https://github.com/huggingface/datasets/pull/1852",
"diff_url": "https://github.com/huggingface/datasets/pull/1852.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1852.patch",
"merged_at": 1613038734000
} |  | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1852/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1852/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1851/comments | https://api.github.com/repos/huggingface/datasets/issues/1851/events | https://github.com/huggingface/datasets/pull/1851 | 804,523,174 | MDExOlB1bGxSZXF1ZXN0NTcwMjc2MTk5 | 1,851 | set bert_score version dependency | {
"login": "pvl",
"id": 3596,
"node_id": "MDQ6VXNlcjM1OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pvl",
"html_url": "https://github.com/pvl",
"followers_url": "https://api.github.com/users/pvl/followers",
"following_url": "https://api.github.com/users/pvl/following{/other_user}",
"gists_url": "https://api.github.com/users/pvl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pvl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pvl/subscriptions",
"organizations_url": "https://api.github.com/users/pvl/orgs",
"repos_url": "https://api.github.com/users/pvl/repos",
"events_url": "https://api.github.com/users/pvl/events{/privacy}",
"received_events_url": "https://api.github.com/users/pvl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,612,875,067,000 | 1,612,880,508,000 | 1,612,880,508,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1851",
"html_url": "https://github.com/huggingface/datasets/pull/1851",
"diff_url": "https://github.com/huggingface/datasets/pull/1851.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1851.patch",
"merged_at": 1612880508000
} | Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1851/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1850/comments | https://api.github.com/repos/huggingface/datasets/issues/1850/events | https://github.com/huggingface/datasets/pull/1850 | 804,412,249 | MDExOlB1bGxSZXF1ZXN0NTcwMTg0MDAx | 1,850 | Add cord 19 dataset | {
"login": "ggdupont",
"id": 5583410,
"node_id": "MDQ6VXNlcjU1ODM0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggdupont",
"html_url": "https://github.com/ggdupont",
"followers_url": "https://api.github.com/users/ggdupont/followers",
"following_url": "https://api.github.com/users/ggdupont/following{/other_user}",
"gists_url": "https://api.github.com/users/ggdupont/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ggdupont/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggdupont/subscriptions",
"organizations_url": "https://api.github.com/users/ggdupont/orgs",
"repos_url": "https://api.github.com/users/ggdupont/repos",
"events_url": "https://api.github.com/users/ggdupont/events{/privacy}",
"received_events_url": "https://api.github.com/users/ggdupont/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Cleaned-up version of previous PR: https://github.com/huggingface/datasets/pull/1129",
"@lhoestq FYI",
"Before merging I might tweak a little bit the dummy data to avoid having to check if the `document_parses` and `embeddings` directories exist or not. I'll do that later today",
"Looks all good now ! Thanks a lot @ggdupont :)\r\nMerging"
] | 1,612,866,128,000 | 1,612,883,786,000 | 1,612,883,786,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1850",
"html_url": "https://github.com/huggingface/datasets/pull/1850",
"diff_url": "https://github.com/huggingface/datasets/pull/1850.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1850.patch",
"merged_at": 1612883785000
} | Initial version that only reads the CSV metadata.
### Checklist:
- [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _infos(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template and at least fill the tags
- [x] Both tests for the real data and the dummy data pass.
### Extras:
- [x] add more metadata
- [x] add full text
- [x] add pre-computed document embedding (usage sketch below)
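A hedged usage sketch; the configuration names ("metadata", "fulltext", "embeddings") are assumptions inferred from the extras above, not confirmed here:

```python
from datasets import load_dataset

# Lightest configuration: just the CSV metadata (assumed config name).
cord19_meta = load_dataset("cord19", "metadata")

# Heavier configurations adding full text and precomputed embeddings
# (assumed config names).
cord19_text = load_dataset("cord19", "fulltext")
cord19_embeddings = load_dataset("cord19", "embeddings")
```

(Configuration names above are assumptions.) | {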
"url": "https://api.github.com/repos/huggingface/datasets/issues/1850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1850/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1849 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1849/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1849/comments | https://api.github.com/repos/huggingface/datasets/issues/1849/events | https://github.com/huggingface/datasets/issues/1849 | 804,292,971 | MDU6SXNzdWU4MDQyOTI5NzE= | 1,849 | Add TIMIT | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@patrickvonplaten Could you please help me with how the output text has to be represented in the data? TIMIT has Words, Phonemes and texts. Also has lot on info on the speaker and the dialect. Could you please help me? An example of how to arrange it would be super helpful!\r\n\r\n",
"Hey @vrindaprabhu - sure I'll help you :-) Could you open a first PR for TIMIT where you copy-paste more or less the `librispeech_asr` script: https://github.com/huggingface/datasets/blob/28be129db862ec89a87ac9349c64df6b6118aff4/datasets/librispeech_asr/librispeech_asr.py#L93 (obviously replacing all the naming and links correctly...) and then you can list all possible outputs in the features dict: https://github.com/huggingface/datasets/blob/28be129db862ec89a87ac9349c64df6b6118aff4/datasets/librispeech_asr/librispeech_asr.py#L104 (words, phonemes should probably be of kind `datasets.Sequence(datasets.Value(\"string\"))` and texts I think should be of type `\"text\": datasets.Value(\"string\")`.\r\n\r\nWhen you've opened a first PR, I think it'll be much easier for us to take a look together :-) ",
"I am sorry! I created the PR [#1903](https://github.com/huggingface/datasets/pull/1903#). Requesting your comments! CircleCI tests are failing, will address them along with your comments!"
] | 1,612,855,781,000 | 1,615,787,977,000 | 1,615,787,977,000 | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Adding a Dataset
- **Name:** *TIMIT*
- **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems*
- **Paper:** *LDC catalog*: https://catalog.ldc.upenn.edu/LDC93S1 / *Wikipedia*: https://en.wikipedia.org/wiki/TIMIT
- **Data:** *https://deepai.org/dataset/timit*
- **Motivation:** Important speech dataset
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
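Based on the schema discussed in the comments above, a hedged sketch of what the features dict could look like (column names are assumptions, not the final script):

```python
import datasets

# Possible TIMIT features following the librispeech_asr template; words and
# phonemes as sequences of strings, the transcription as a plain string.
features = datasets.Features(
    {
        "file": datasets.Value("string"),
        "text": datasets.Value("string"),
        "words": datasets.Sequence(datasets.Value("string")),
        "phonemes": datasets.Sequence(datasets.Value("string")),
        "speaker_id": datasets.Value("string"),
        "dialect_region": datasets.Value("string"),
    }
)
```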
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1849/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1848/comments | https://api.github.com/repos/huggingface/datasets/issues/1848/events | https://github.com/huggingface/datasets/pull/1848 | 803,826,506 | MDExOlB1bGxSZXF1ZXN0NTY5Njg5ODU1 | 1,848 | Refactoring: Create config module | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,612,809,831,000 | 1,612,960,175,000 | 1,612,960,175,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1848",
"html_url": "https://github.com/huggingface/datasets/pull/1848",
"diff_url": "https://github.com/huggingface/datasets/pull/1848.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1848.patch",
"merged_at": 1612960175000
} | Refactor configuration settings into their own module.
This can be seen as a Pythonic singleton-like approach. Eventually a config instance class might be created (a rough sketch follows).
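A rough sketch of the module-as-singleton idea (constant names are assumptions for illustration):

```python
# datasets/config.py -- illustrative sketch; imported once, so these
# settings behave like a singleton without any explicit machinery.
import os

HF_DATASETS_CACHE = os.getenv(
    "HF_DATASETS_CACHE", os.path.expanduser("~/.cache/huggingface/datasets")
)
IN_MEMORY_MAX_SIZE = int(os.getenv("HF_DATASETS_IN_MEMORY_MAX_SIZE", 0))
```

Any module importing `datasets.config` sees the same, already-initialized values, with no lock needed. | {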
"url": "https://api.github.com/repos/huggingface/datasets/issues/1848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1848/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1847/comments | https://api.github.com/repos/huggingface/datasets/issues/1847/events | https://github.com/huggingface/datasets/pull/1847 | 803,824,694 | MDExOlB1bGxSZXF1ZXN0NTY5Njg4NDY0 | 1,847 | [Metrics] Add word error metric metric | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Feel free to merge once the CI is all green ;)"
] | 1,612,809,675,000 | 1,612,893,201,000 | 1,612,893,201,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1847",
"html_url": "https://github.com/huggingface/datasets/pull/1847",
"diff_url": "https://github.com/huggingface/datasets/pull/1847.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1847.patch",
"merged_at": 1612893201000
} | This PR adds the word error rate metric to datasets.
WER (https://en.wikipedia.org/wiki/Word_error_rate) is the main metric used in automatic speech recognition (ASR).
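A hedged usage sketch of the new metric (built on the `jiwer` package mentioned below; the loading name and the predictions/references arguments follow the usual metric convention and are illustrative):

```python
import datasets

wer_metric = datasets.load_metric("wer")  # needs `jiwer` installed

predictions = ["this is the prediction", "there is an other sample"]
references = ["this is the reference", "there is another one"]

# Returns the fraction of word-level substitutions/insertions/deletions.
score = wer_metric.compute(predictions=predictions, references=references)
print(score)
```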
`jiwer` seems to be a solid library (see https://github.com/asteroid-team/asteroid/pull/329#discussion_r525158939) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1847/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1846/comments | https://api.github.com/repos/huggingface/datasets/issues/1846/events | https://github.com/huggingface/datasets/pull/1846 | 803,806,380 | MDExOlB1bGxSZXF1ZXN0NTY5NjczMzcy | 1,846 | Make DownloadManager downloaded/extracted paths accessible | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"First I was thinking of the dict, which makes sense for .download, mapping URL to downloaded path. However does this make sense for .extract, mapping the downloaded path to the extracted path? I ask this because the user did not chose the downloaded path, so this is completely unknown for them...",
"There could be several situations:\r\n- download a file with no extraction\r\n- download a file and extract it\r\n- download a file, extract it and then inside the output folder extract some more files\r\n- extract a local file (for datasets with data that are manually downloaded for example)\r\n- extract a local file, and then inside the output folder extract some more files\r\n\r\nSo I think it's ok to have `downloaded_paths` as a dict url -> downloaded_path and `extracted_paths` as a dict local_path -> extracted_path.",
"OK. I am refactoring this. I have opened #1879, as an intermediate step..."
] | 1,612,808,082,000 | 1,614,262,218,000 | 1,614,262,218,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1846",
"html_url": "https://github.com/huggingface/datasets/pull/1846",
"diff_url": "https://github.com/huggingface/datasets/pull/1846.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1846.patch",
"merged_at": 1614262218000
} | Make the file paths downloaded/extracted by DownloadManager accessible.
Close #1831.
The approach:
- I set these paths as DownloadManager attributes: these are DownloadManager's concerns
- To access these from DatasetBuilder, I set the DownloadManager instance as a DatasetBuilder attribute: object composition (see the sketch below)
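A minimal sketch of the two ideas above; class and method bodies are illustrative, not the actual `datasets` implementation:

```python
import shutil
from urllib.request import urlretrieve


class DownloadManager:
    """Sketch: the manager records the paths it produces (its own concern)."""

    def __init__(self):
        self.downloaded_paths = {}  # url -> downloaded local path
        self.extracted_paths = {}   # archive path -> extracted directory

    def download(self, url, filename):
        path, _ = urlretrieve(url, filename)
        self.downloaded_paths[url] = path
        return path

    def extract(self, archive_path, out_dir):
        shutil.unpack_archive(archive_path, out_dir)
        self.extracted_paths[archive_path] = out_dir
        return out_dir


class DatasetBuilder:
    """Sketch: composition keeps the manager's paths accessible."""

    def download_and_prepare(self, dl_manager):
        self.dl_manager = dl_manager
        # ... after preparation, self.dl_manager.downloaded_paths and
        # self.dl_manager.extracted_paths remain available to the builder.
```

(The sketch above is illustrative only.) | {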
"url": "https://api.github.com/repos/huggingface/datasets/issues/1846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1846/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1845/comments | https://api.github.com/repos/huggingface/datasets/issues/1845/events | https://github.com/huggingface/datasets/pull/1845 | 803,714,493 | MDExOlB1bGxSZXF1ZXN0NTY5NTk2MTIz | 1,845 | Enable logging propagation and remove logging handler | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thank you @lhoestq. This logging configuration makes more sense to me.\r\n\r\nOnce propagation is allowed, the end-user can customize logging behavior and add custom handlers to the proper top logger in the hierarchy.\r\n\r\nAnd I also agree with following the best practices and removing any custom handlers:\r\n- it is the end user who has to implement any custom handlers\r\n- indeed, the previous logging problem with TensorFlow was due to the fact that absl did not follow best practices and had implemented a custom handler\r\n\r\nOur errors/warnings will be displayed anyway, even if we do not implement any custom handler. Since Python 3.2, logging has a built-in \"default\" handler (logging.lastResort) with the expected default behavior (sending error/warning messages to sys.stderr), which is used only if the end user has not configured any custom handler."
] | 1,612,801,333,000 | 1,612,880,558,000 | 1,612,880,557,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1845",
"html_url": "https://github.com/huggingface/datasets/pull/1845",
"diff_url": "https://github.com/huggingface/datasets/pull/1845.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1845.patch",
"merged_at": 1612880557000
} | We used to have logging propagation disabled because of this issue: https://github.com/tensorflow/tensorflow/issues/26691
But since it's now fixed we should re-enable it. This is important to keep the default logging behavior for users, and propagation is also needed for pytest fixtures as asked in #1826
I also removed the handler that was added, since according to the logging [documentation](https://docs.python.org/3/howto/logging.html#configuring-logging-for-a-library):
> It is strongly advised that you do not add any handlers other than NullHandler to your library’s loggers. This is because the configuration of handlers is the prerogative of the application developer who uses your library. The application developer knows their target audience and what handlers are most appropriate for their application: if you add handlers ‘under the hood’, you might well interfere with their ability to carry out unit tests and deliver logs which suit their requirements.
A handler could have been useful if we wanted a custom formatter for the logging, but I think it's more important to keep the default logging behavior and not interfere with users' logging management.
Therefore I also removed the two methods `datasets.logging.enable_default_handler` and `datasets.logging.disable_default_handler`.
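With propagation on and no library handler, controlling the output becomes plain standard-library logging on the application side, e.g.:

```python
import logging

# The application, not the library, decides formatting and destination.
logging.basicConfig(
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    level=logging.INFO,
)

# datasets records now propagate up to the root handler configured above.
logging.getLogger("datasets").setLevel(logging.DEBUG)
```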
cc @albertvillanova this should let you use capsys/caplog in pytest
cc @LysandreJik @sgugger if you want to do the same in `transformers` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1845/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1845/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1844/comments | https://api.github.com/repos/huggingface/datasets/issues/1844/events | https://github.com/huggingface/datasets/issues/1844 | 803,588,125 | MDU6SXNzdWU4MDM1ODgxMjU= | 1,844 | Update Open Subtitles corpus with original sentence IDs | {
"login": "Valahaar",
"id": 19476123,
"node_id": "MDQ6VXNlcjE5NDc2MTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Valahaar",
"html_url": "https://github.com/Valahaar",
"followers_url": "https://api.github.com/users/Valahaar/followers",
"following_url": "https://api.github.com/users/Valahaar/following{/other_user}",
"gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions",
"organizations_url": "https://api.github.com/users/Valahaar/orgs",
"repos_url": "https://api.github.com/users/Valahaar/repos",
"events_url": "https://api.github.com/users/Valahaar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Valahaar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi ! You're right this can can useful.\r\nThis should be easy to add, so feel free to give it a try if you want to contribute :)\r\nI think we just need to add it to the _generate_examples method of the OpenSubtitles dataset builder [here](https://github.com/huggingface/datasets/blob/master/datasets/open_subtitles/open_subtitles.py#L103)",
"Hey @lhoestq , absolutely yes! Just one question before I start implementing. The ids found in the zip file have this format: \r\n(the following is line `22497315` of the `ids` file of the `de-en` dump)\r\n\r\n\r\n`de/2017/7006210/7063319.xml.gz en/2017/7006210/7050201.xml.gz 335 339 340` (every space is actually a tab, aside from the space between `339` and `340`)\r\n\r\n\r\nWhere filenames encode the information like this: `lang/year/imdb_id/opensubtitles_id.xml.gz` whereas the numbers correspond to the sentence ids which are linked together (i.e. sentence `335` of the German subtitle corresponds to lines `339` and `340` of the English file)\r\n\r\nThat being said, do you think I should stick to the raw sentence id (and replace the current sequential id) or should I include more detailed metadata (or both things maybe)?\r\n\r\nGoing with raw ID is surely simpler, but including `year`, `imdbId` and `subtitleId` should save space as they're just integers; besides, any operation (like filtering or grouping) will be much easier if users don't have to manually parse the ids every time.\r\nAs for the language-specific sentenceIds, what could be the best option? A list of integers or a comma-separated string?\r\n\r\n**Note:** I did not find any official information about this encoding, but it appears to check out:\r\nhttps://www.imdb.com/title/tt7006210/, https://www.opensubtitles.org/en/subtitles/7063319 and https://www.opensubtitles.org/en/subtitles/7050201 all link to the same episode, so I guess (I hope!) it's correct.\r\n\r\n",
"I like the idea of having `year`, `imdbId` and `subtitleId` as columns for filtering for example.\r\nAnd for the `sentenceIds` a list of integers is fine.",
"Thanks for improving it @Valahaar :) ",
"Something like this? (adapted from [here](https://github.com/huggingface/datasets/blob/master/datasets/open_subtitles/open_subtitles.py#L114))\r\n\r\n```python\r\nresult = (\r\n sentence_counter,\r\n {\r\n \"id\": str(sentence_counter),\r\n \"meta\": {\r\n \"year\": year,\r\n \"imdbId\": imdb_id,\r\n \"subtitleId\": {l1: l1_sub_id, l2: l2_sub_id},\r\n \"sentenceIds\": {l1: [... source_sids ...], l2: [... target_sids ...]},\r\n # or maybe src/tgt? I'd go with the first one for consistency with 'translation'\r\n \"subtitleId\": {\"src\": l1_sub_id, \"tgt\": l2_sub_id},\r\n \"sentenceIds\": {\"src\": [... source_sids ...], \"tgt\": [... target_sids ...]},\r\n },\r\n \"translation\": {l1: x, l2: y},\r\n },\r\n )\r\n```\r\nOr at top level, avoiding nesting into 'meta'?",
"Merged in #1865, closing. Thanks :)"
] | 1,612,792,513,000 | 1,613,151,538,000 | 1,613,151,538,000 | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles).
I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat allowing for document-level machine translation (and other document-level stuff which could be cool to have); second, it's possible to have parallel sentences in multiple languages, as they share the same ids across bitexts.
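To make the request concrete, a sketch of what one aligned pair could look like with the original ids attached (values taken from the `de-en` ids line quoted in the comments above; field names are only a suggestion):

```python
example = {
    "id": "22497315",
    "meta": {
        "year": 2017,
        "imdbId": 7006210,
        "subtitleId": {"de": 7063319, "en": 7050201},
        # German sentence 335 aligns with English sentences 339 and 340.
        "sentenceIds": {"de": [335], "en": [339, 340]},
    },
    "translation": {"de": "...", "en": "..."},
}
```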
I think I should tag @abhishekkrthakur as he's the one who added it in the first place.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1844/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1843 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1843/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1843/comments | https://api.github.com/repos/huggingface/datasets/issues/1843/events | https://github.com/huggingface/datasets/issues/1843 | 803,565,393 | MDU6SXNzdWU4MDM1NjUzOTM= | 1,843 | MustC Speech Translation | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @patrickvonplaten I would like to work on this dataset. \r\n\r\nThanks! ",
"That's awesome! Actually, I just noticed that this dataset might become a bit too big!\r\n\r\nMuST-C is the main dataset used for IWSLT19 and should probably be added as a standalone dataset. Would you be interested also in adding `datasets/MuST-C` instead?\r\n\r\nDescription: \r\n_MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems for speech translation from English into several languages. For each target language, MuST-C comprises several hundred hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations._\r\n\r\nPaper: https://www.aclweb.org/anthology/N19-1202.pdf\r\n\r\nDataset: https://ict.fbk.eu/must-c/ (One needs to fill out a short from to download the data, but it's very easy).\r\n\r\nIt would be awesome if you're interested in adding this datates. I'm very happy to guide you through the PR! I think the easiest way to start would probably be to read [this README on how to add a dataset](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md) and open a PR. Think you can copy & paste some code from:\r\n\r\n- Librispeech_asr: https://github.com/huggingface/datasets/blob/master/datasets/librispeech_asr/librispeech_asr.py\r\n- Flores Translation: https://github.com/huggingface/datasets/blob/master/datasets/flores/flores.py\r\n\r\nThink all the rest can be handled on the PR :-) ",
"Hi @patrickvonplaten \r\nI have tried downloading this dataset, but the connection seems to reset all the time. I have tried it via the browser, wget, and using gdown . But it gives me an error message. _\"The server is busy or down, pls try again\"_ (rephrasing the message here)\r\n\r\nI have completed adding 4 datasets in the previous data sprint (including the IWSLT dataset #1676 ) ...so just checking if you are able to download it at your end. Otherwise will write to the dataset authors to update the links. \r\n\r\n\r\n\r\n\r\n",
"Let me check tomorrow! Thanks for leaving this message!",
"cc @patil-suraj for notification ",
"@skyprince999, I think I'm getting the same error you're getting :-/\r\n\r\n```\r\nSorry, you can't view or download this file at this time.\r\n\r\nToo many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.\r\n```\r\n\r\nIt would be great if you could write the authors to see whether they can fix it.\r\nAlso cc @lhoestq - do you think we could mirror the dataset? ",
"Also there are huge those datasets. Think downloading MuST-C v1.2 amounts to ~ 1000GB... because there are 14 possible configs each around 60-70GB. I think users mostly will only use one of the 14 configs so that they would only need, in theory, will have to download ~60GB which is ok. But I think this functionality doesn't exist yet in `datasets` no? cc @lhoestq ",
"> Also cc @lhoestq - do you think we could mirror the dataset?\r\n\r\nYes we can mirror it if the authors are fine with it. You can create a dataset repo on huggingface.co (possibly under the relevant org) and add the mirrored data files.\r\n\r\n> I think users mostly will only use one of the 14 configs so that they would only need, in theory, will have to download ~60GB which is ok. But I think this functionality doesn't exist yet in datasets no? cc @lhoestq\r\n\r\nIf there are different download links for each configuration we can make the dataset builder download only the files related to the requested configuration.",
"I have written to the dataset authors, highlighting this issue. Waiting for their response. \r\n\r\nUpdate on 25th Feb: \r\nThe authors have replied back, they are updating the download link and will revert back shortly! \r\n\r\n```\r\nfirst of all thanks a lot for being interested in MuST-C and for building the data-loader.\r\n\r\nBefore answering your request, I'd like to clarify that the creation, maintenance, and expansion of MuST-c are not supported by any funded project, so this means that we need to find economic support for all these activities. This also includes permanently moving all the data to AWS or GCP. We are working at this with the goal of facilitating the use of MuST-C, but this is not something that can happen today. We hope to have some news ASAP and you will be among the first to be informed.\r\n\r\nI hope you understand our situation.\r\n```\r\n\r\n",
"Awesome, actually @lhoestq let's just ask the authors if we should host the dataset no? They could just use our links then as well for their website - what do you think? Is it fine to use our AWS dataset storage also as external links? ",
"Yes definitely. Shall we suggest them to create a dataset repository under their org on huggingface.co ? @julien-c \r\nThe dataset is around 1TB",
"Sounds good! \r\n\r\nOrder of magnitude is storage costs ~$20 per TB per month (not including bandwidth). \r\n\r\nHappy to provide this to the community as I feel this is an important dataset. Let us know what the authors want to do!\r\n\r\n",
"Great! @skyprince999, do you think you could ping the authors here or link to this thread? I think it could be a cool idea to host the dataset on our side then",
"Done. They replied back, and they want to have a call over a meet/ skype. Is that possible ? \r\nBtw @patrickvonplaten you are looped in that email (_pls check you gmail account_) ",
"Hello! Any news on this?",
"@gegallego there were some concerns regarding dataset usage & attribution by a for-profit company, so couldn't take it forward. Also the download links were unstable. \r\nBut I guess if you want to test the fairseq benchmarks, you can connect with them directly for downloading the dataset. ",
"Yes, that dataset is not easy to download... I had to copy it to my Google Drive and use `rsync` to be able to download it.\r\nHowever, we could add the dataset with a manual download, right?",
"yes that is possible. I couldn't unfortunately complete this PR, If you would like to add it, please feel free to do it. "
] | 1,612,790,865,000 | 1,621,004,014,000 | null | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Adding a Dataset
- **Name:** *IWSLT19*
- **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.*
- **Homepage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation*
- **Data:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - all data under "Allowed Training Data" and "Development and Evaluation Data for TED/How2"
- **Motivation:** Important speech dataset
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1843/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1842/comments | https://api.github.com/repos/huggingface/datasets/issues/1842/events | https://github.com/huggingface/datasets/issues/1842 | 803,563,149 | MDU6SXNzdWU4MDM1NjMxNDk= | 1,842 | Add AMI Corpus | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,612,790,700,000 | 1,612,855,576,000 | null | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Adding a Dataset
- **Name:** *AMI*
- **Description:** *The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. Detailed information can be found in the documentation section.*
- **Paper:** *Homepage*: http://groups.inf.ed.ac.uk/ami/corpus/
- **Data:** *http://groups.inf.ed.ac.uk/ami/download/* - Select all cases in 1) and select "Individual Headsets" & "Microphone array" for 2)
- **Motivation:** Important speech dataset
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1842/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1841/comments | https://api.github.com/repos/huggingface/datasets/issues/1841/events | https://github.com/huggingface/datasets/issues/1841 | 803,561,123 | MDU6SXNzdWU4MDM1NjExMjM= | 1,841 | Add ljspeech | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,612,790,546,000 | 1,615,787,942,000 | 1,615,787,942,000 | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Adding a Dataset
- **Name:** *ljspeech*
- **Description:** *This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours.
The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded in 2016-17 by the LibriVox project and is also in the public domain.*
- **Paper:** *Homepage*: https://keithito.com/LJ-Speech-Dataset/
- **Data:** *https://keithito.com/LJ-Speech-Dataset/*
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/ljspeech
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1841/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1840/comments | https://api.github.com/repos/huggingface/datasets/issues/1840/events | https://github.com/huggingface/datasets/issues/1840 | 803,560,039 | MDU6SXNzdWU4MDM1NjAwMzk= | 1,840 | Add common voice | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"I have started working on adding this dataset.",
"Hey @BirgerMoell - awesome that you started working on Common Voice. Common Voice is a bit special since, there is no direct download link to download the data. In these cases we usually consider two options:\r\n\r\n1) Find a hacky solution to extract the download link somehow from the XLM tree of the website \r\n2) If this doesn't work we force the user to download the data himself and add a `\"data_dir\"` as an input parameter. E.g. you can take a look at how it is done for [this](https://github.com/huggingface/datasets/blob/66f2a7eece98d2778bd22bb5034cb7c2376032d4/datasets/arxiv_dataset/arxiv_dataset.py#L66) \r\n\r\nAlso the documentation here: https://huggingface.co/docs/datasets/add_dataset.html?highlight=data_dir#downloading-data-files-and-organizing-splits (especially the \"note\") might be helpful.",
"Let me know if you have any other questions",
"I added a Work in Progress pull request (hope that is ok). I've made a card for the dataset and filled out the common_voice.py file with information about the datset (not completely).\r\n\r\nI didn't manage to get the tagging tool working locally on my machine but will look into that later.\r\n\r\nLeft to do.\r\n\r\n- Tag the dataset\r\n- Add missing information and update common_voice.py\r\n\r\nhttps://github.com/huggingface/datasets/pull/1886",
"Awesome! I left a longer comment on the PR :-)",
"I saw that this current datasets package holds common voice version 6.1, how to add the new version 7.0 that is already available?",
"Will me merged next week - we're working on it :-)",
"Common voice still appears to be a 6.1. Is the plan still to upgrade to 7.0?",
"We actually already have the code and everything ready to add Common Voice 7.0 to `datasets` but are still waiting for the common voice authors to give us the green light :-) \r\n\r\nAlso gently pinging @phirework and @milupo here",
"Common Voice 7.0 is available here now: https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0"
] | 1,612,790,465,000 | 1,641,399,591,000 | 1,615,787,781,000 | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Adding a Dataset
- **Name:** *common voice*
- **Description:** *Mozilla Common Voice Dataset*
- **Paper:** Homepage: https://voice.mozilla.org/en/datasets
- **Data:** https://voice.mozilla.org/en/datasets
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/common_voice
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1840/timeline | null | false |
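The maintainer comment in the record above describes the `data_dir` pattern used when a dataset has no direct download link: the user downloads the archive themselves and passes its location to `load_dataset`. Below is a minimal, hypothetical sketch of that pattern. It is not the actual Common Voice loading script; the `train.tsv` layout and the `path`/`sentence` field names are assumptions made purely for illustration.

```python
# A minimal sketch of the "manual download" pattern referenced above.
# NOT the real Common Voice script: the train.tsv file with path/sentence
# columns is an assumed layout used only to illustrate the API.
import os

import datasets


class ManualAudioDataset(datasets.GeneratorBasedBuilder):
    """Hypothetical builder for data the user must download themselves."""

    VERSION = datasets.Version("0.0.1")

    @property
    def manual_download_instructions(self):
        return (
            "Download the archive from the dataset homepage, extract it, and "
            "pass the folder via load_dataset(..., data_dir=<path>)."
        )

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"path": datasets.Value("string"), "sentence": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        # dl_manager.manual_dir is the data_dir the user passed to load_dataset.
        if dl_manager.manual_dir is None:
            raise ValueError(self.manual_download_instructions)
        data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
        if not os.path.isdir(data_dir):
            raise FileNotFoundError(
                f"{data_dir} does not exist. {self.manual_download_instructions}"
            )
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"tsv_path": os.path.join(data_dir, "train.tsv")},
            )
        ]

    def _generate_examples(self, tsv_path):
        # Yield one example per TSV row, skipping the header.
        with open(tsv_path, encoding="utf-8") as f:
            next(f)
            for idx, line in enumerate(f):
                path, sentence = line.rstrip("\n").split("\t")[:2]
                yield idx, {"path": path, "sentence": sentence}
```

As the final comment in the thread notes, Common Voice 7.0 later became available on the Hub as a gated dataset, so loading it presumably requires accepting the terms on the Hub and authenticating, roughly `load_dataset("mozilla-foundation/common_voice_7_0", "en", use_auth_token=True)`.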
https://api.github.com/repos/huggingface/datasets/issues/1839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1839/comments | https://api.github.com/repos/huggingface/datasets/issues/1839/events | https://github.com/huggingface/datasets/issues/1839 | 803,559,164 | MDU6SXNzdWU4MDM1NTkxNjQ= | 1,839 | Add Voxforge | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,612,790,396,000 | 1,612,790,911,000 | null | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Adding a Dataset
- **Name:** *voxforge*
- **Description:** *VoxForge is a language classification dataset. It consists of user-submitted audio clips uploaded to the website. In this release, data from 6 languages is collected - English, Spanish, French, German, Russian, and Italian. Since the website is constantly updated, and for the sake of reproducibility, this release contains only recordings submitted prior to 2020-01-01. The samples are split among train, validation, and test sets so that samples from each speaker belong to exactly one split.*
- **Paper:** *Homepage*: http://www.voxforge.org/
- **Data:** *http://www.voxforge.org/home/downloads*
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/voxforge
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1839/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1838/comments | https://api.github.com/repos/huggingface/datasets/issues/1838/events | https://github.com/huggingface/datasets/issues/1838 | 803,557,521 | MDU6SXNzdWU4MDM1NTc1MjE= | 1,838 | Add tedlium | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @patrickvonplaten \r\nI can have a look to this dataset later since I am trying to add the OpenSLR dataset https://github.com/huggingface/datasets/pull/2173\r\nHopefully I have enough space since the compressed file is 21GB. The release 3 is even bigger: 54GB :-0"
] | 1,612,790,272,000 | 1,617,983,861,000 | null | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Adding a Dataset
- **Name:** *tedlium*
- **Description:** *The TED-LIUM corpus (releases 1-3) consists of English-language TED talks with transcriptions, sampled at 16 kHz. It contains about 118 hours of speech.*
- **Paper:** Homepage: http://www.openslr.org/7/, https://lium.univ-lemans.fr/en/ted-lium2/ &, https://www.openslr.org/51/
- **Data:** http://www.openslr.org/7/
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/tedlium
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1838/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1837/comments | https://api.github.com/repos/huggingface/datasets/issues/1837/events | https://github.com/huggingface/datasets/issues/1837 | 803,555,650 | MDU6SXNzdWU4MDM1NTU2NTA= | 1,837 | Add VCTK | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@patrickvonplaten I'd like to take this, if nobody has already done it. I have added datasets before through the datasets sprint, but I feel rusty on the details, so I'll look at the guide as well as similar audio PRs (#1878 in particular comes to mind). If there is any detail I should be aware of please, let me know! Otherwise, I'll try to write up a PR in the coming days.",
"That sounds great @jaketae - let me know if you need any help i.e. feel free to ping me on a first PR :-)"
] | 1,612,790,128,000 | 1,640,703,908,000 | 1,640,703,908,000 | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Adding a Dataset
- **Name:** *VCTK*
- **Description:** *This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent archive.*
- **Paper:** Homepage: https://datashare.ed.ac.uk/handle/10283/3443
- **Data:** https://datashare.ed.ac.uk/handle/10283/3443
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/vctk
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1837/timeline | null | false |