url stringlengths 58-61 | repository_url stringclasses 1 value | labels_url stringlengths 72-75 | comments_url stringlengths 67-70 | events_url stringlengths 65-68 | html_url stringlengths 46-51 | id int64 599M-1.15B | node_id stringlengths 18-32 | number int64 1-3.77k | title stringlengths 1-276 | user dict | labels list | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees list | milestone dict | comments sequence | created_at int64 1,587B-1,645B | updated_at int64 1,587B-1,645B | closed_at int64 1,587B-1,645B ⌀ | author_association stringclasses 3 values | active_lock_reason null | draft bool 2 classes | pull_request dict | body stringlengths 0-228k ⌀ | reactions dict | timeline_url stringlengths 67-70 | performed_via_github_app null | is_pull_request bool 2 classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2350/comments | https://api.github.com/repos/huggingface/datasets/issues/2350/events | https://github.com/huggingface/datasets/issues/2350 | 889,580,247 | MDU6SXNzdWU4ODk1ODAyNDc= | 2,350 | `FaissIndex.save` throws error on GPU | {
"login": "Guitaricet",
"id": 2821124,
"node_id": "MDQ6VXNlcjI4MjExMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guitaricet",
"html_url": "https://github.com/Guitaricet",
"followers_url": "https://api.github.com/users/Guitaricet/followers",
"following_url": "https://api.github.com/users/Guitaricet/following{/other_user}",
"gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions",
"organizations_url": "https://api.github.com/users/Guitaricet/orgs",
"repos_url": "https://api.github.com/users/Guitaricet/repos",
"events_url": "https://api.github.com/users/Guitaricet/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guitaricet/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Just in case, this is a workaround that I use in my code and it seems to do the job.\r\n\r\n```python\r\nif use_gpu_index:\r\n data[\"train\"]._indexes[\"text_emb\"].faiss_index = faiss.index_gpu_to_cpu(data[\"train\"]._indexes[\"text_emb\"].faiss_index)\r\n```"
] | 1,620,790,916,000 | 1,621,258,901,000 | 1,621,258,901,000 | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Describe the bug
After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error.
```
File "index_wikipedia.py", line 119, in <module>
data["train"].save_faiss_index("text_emb", index_save_path)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 470, in save_faiss_index
index.save(file)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 334, in save
faiss.write_index(index, str(file))
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/faiss/swigfaiss_avx2.py", line 5654, in write_index
return _swigfaiss.write_index(*args)
RuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /root/miniconda3/conda-bld/faiss-pkg_1613235005464/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index
```
## Steps to reproduce the bug
Any dataset will do, I just selected a familiar one.
```python
import numpy as np
import datasets
INDEX_STR = "OPQ16_128,IVF512,PQ32"
INDEX_SAVE_PATH = "will_not_save.faiss"
data = datasets.load_dataset("Fraser/news-category-dataset", split=f"train[:10000]")
def encode(item):
return {"text_emb": np.random.randn(768).astype(np.float32)}
data = data.map(encode)
data.add_faiss_index(column="text_emb", string_factory=INDEX_STR, train_size=10_000, device=0)
data.save_faiss_index("text_emb", INDEX_SAVE_PATH)
```
## Expected results
Saving the index
## Actual results
Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) ... don't know how to serialize this type of index
## Environment info
- `datasets` version: 1.6.2
- Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
I will be proposing a fix in a couple of minutes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2350/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2349/comments | https://api.github.com/repos/huggingface/datasets/issues/2349/events | https://github.com/huggingface/datasets/pull/2349 | 888,586,018 | MDExOlB1bGxSZXF1ZXN0NjQxNzYzNzg3 | 2,349 | Update task_ids for Ascent KB | {
"login": "phongnt570",
"id": 6749421,
"node_id": "MDQ6VXNlcjY3NDk0MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6749421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phongnt570",
"html_url": "https://github.com/phongnt570",
"followers_url": "https://api.github.com/users/phongnt570/followers",
"following_url": "https://api.github.com/users/phongnt570/following{/other_user}",
"gists_url": "https://api.github.com/users/phongnt570/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phongnt570/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phongnt570/subscriptions",
"organizations_url": "https://api.github.com/users/phongnt570/orgs",
"repos_url": "https://api.github.com/users/phongnt570/repos",
"events_url": "https://api.github.com/users/phongnt570/events{/privacy}",
"received_events_url": "https://api.github.com/users/phongnt570/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,620,765,873,000 | 1,621,248,794,000 | 1,621,248,514,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2349",
"html_url": "https://github.com/huggingface/datasets/pull/2349",
"diff_url": "https://github.com/huggingface/datasets/pull/2349.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2349.patch",
"merged_at": 1621248514000
} | This "other-other-knowledge-base" task is better suited for the dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2349/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2348/comments | https://api.github.com/repos/huggingface/datasets/issues/2348/events | https://github.com/huggingface/datasets/pull/2348 | 887,927,737 | MDExOlB1bGxSZXF1ZXN0NjQxMTMwOTM4 | 2,348 | Add tests for dataset cards | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@lhoestq\r\n\r\nShould I remove the scripts? or atleast remove running them from the CircleCI config?\r\n\r\nAlso, I hope it is okay that the combined method (metadata+content) is only a slow test, and for the Circle CI, I assume only non-slow tests are run? If yes, this would mean separate tests for content and metadata.",
"Also feel free to remove the scripts from the CI and also remove the scripts files :)"
] | 1,620,753,267,000 | 1,621,599,047,000 | 1,621,599,047,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2348",
"html_url": "https://github.com/huggingface/datasets/pull/2348",
"diff_url": "https://github.com/huggingface/datasets/pull/2348.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2348.patch",
"merged_at": 1621599047000
} | Adding tests for dataset cards
This PR will potentially remove the scripts being used for dataset tags and readme validation.
Additionally, this will allow testing dataset readmes by providing the name as follows:
```bash
pytest tests/test_dataset_cards.py::test_dataset_tags[fashion_mnist]
```
and
```bash
pytest tests/test_dataset_cards.py::test_readme_content[fashion_mnist]
```
or a combined test as:
```bash
pytest tests/test_dataset_cards.py::test_dataset_card[fashion_mnist]
```
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2348/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2348/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2347/comments | https://api.github.com/repos/huggingface/datasets/issues/2347/events | https://github.com/huggingface/datasets/issues/2347 | 887,404,868 | MDU6SXNzdWU4ODc0MDQ4Njg= | 2,347 | Add an API to access the language and pretty name of a dataset | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi ! With @bhavitvyamalik we discussed about having something like\r\n```python\r\nfrom datasets import load_dataset_card\r\n\r\ndataset_card = load_dataset_card(\"squad\")\r\nprint(dataset_card.metadata.pretty_name)\r\n# Stanford Question Answering Dataset (SQuAD)\r\nprint(dataset_card.metadata.languages)\r\n# [\"en\"]\r\n\r\n```\r\nWhat do you think ?\r\n\r\nI don't know if you already have a way to load the model tags in `transformers` but we can agree on the API to have something consistent.\r\n\r\nAlso note that the pretty name would only be used to show users something prettier than a dataset id, but in the end the source of truth will stay the dataset id (here `squad`).",
"That works for me!",
"maybe use the hub-backed dataset_info method? (so there's only one parser of README.md metadata)?",
"What dataset_info method are you talking about @julien-c ? In `huggingface_hub` I can only see `model_info`.",
"hmm the equivalent method in `datasets` (which could go into `huggingface_hub` at some point)"
] | 1,620,742,208,000 | 1,621,589,206,000 | null | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2347/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2347/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2346/comments | https://api.github.com/repos/huggingface/datasets/issues/2346/events | https://github.com/huggingface/datasets/pull/2346 | 886,632,114 | MDExOlB1bGxSZXF1ZXN0NjM5OTAzMjk3 | 2,346 | Add Qasper Dataset | {
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"I saw that the README [template](https://github.com/huggingface/datasets/blob/master/templates/README.md) changed while I was working on this 😅 Some TOC titles may be different but I filled it to the best of my knowledge & readme quality check passes now.\r\nready for review @lhoestq "
] | 1,620,725,144,000 | 1,621,340,908,000 | 1,621,340,908,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2346",
"html_url": "https://github.com/huggingface/datasets/pull/2346",
"diff_url": "https://github.com/huggingface/datasets/pull/2346.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2346.patch",
"merged_at": 1621340907000
} | [Question Answering on Scientific Research Papers](https://allenai.org/project/qasper/home)
Doing NLP on NLP papers to do NLP ♻️ I had to add it~
- [x] Add README (just gotta fill out some more )
- [x] Dataloader code
- [x] Make dummy dataset
- [x] generate dataset infos
- [x] Tests
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2346/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2346/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2345/comments | https://api.github.com/repos/huggingface/datasets/issues/2345/events | https://github.com/huggingface/datasets/issues/2345 | 886,586,872 | MDU6SXNzdWU4ODY1ODY4NzI= | 2,345 | [Question] How to move and reuse preprocessed dataset? | {
"login": "AtmaHou",
"id": 15045402,
"node_id": "MDQ6VXNlcjE1MDQ1NDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/15045402?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AtmaHou",
"html_url": "https://github.com/AtmaHou",
"followers_url": "https://api.github.com/users/AtmaHou/followers",
"following_url": "https://api.github.com/users/AtmaHou/following{/other_user}",
"gists_url": "https://api.github.com/users/AtmaHou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AtmaHou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AtmaHou/subscriptions",
"organizations_url": "https://api.github.com/users/AtmaHou/orgs",
"repos_url": "https://api.github.com/users/AtmaHou/repos",
"events_url": "https://api.github.com/users/AtmaHou/events{/privacy}",
"received_events_url": "https://api.github.com/users/AtmaHou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@lhoestq @LysandreJik",
"<s>Hi :) Can you share with us the code you used ?</s>\r\n\r\nEDIT: from https://github.com/huggingface/transformers/issues/11665#issuecomment-838348291 I understand you're using the run_clm.py script. Can you share your logs ?\r\n",
"Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? The `preprocessing_num_workers` should also stay the same",
"> Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? The `preprocessing_num_workers` should also stay the same\r\n\r\nI only changed the `preprocessing_num_workers` maybe it is the problem~ I will try again~"
] | 1,620,724,157,000 | 1,623,386,351,000 | 1,623,386,351,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Hi, I am training a GPT-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess).
I tried to:
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_dir/"
but the program still re-preprocesses the whole dataset without loading the cache.
I also tried torch.save(lm_datasets, fw), but the saved file is only 14M.
What is the proper way to do this? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2345/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2344/comments | https://api.github.com/repos/huggingface/datasets/issues/2344/events | https://github.com/huggingface/datasets/issues/2344 | 885,331,505 | MDU6SXNzdWU4ODUzMzE1MDU= | 2,344 | Is there a way to join multiple datasets in one? | {
"login": "alexvaca0",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexvaca0",
"html_url": "https://github.com/alexvaca0",
"followers_url": "https://api.github.com/users/alexvaca0/followers",
"following_url": "https://api.github.com/users/alexvaca0/following{/other_user}",
"gists_url": "https://api.github.com/users/alexvaca0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexvaca0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexvaca0/subscriptions",
"organizations_url": "https://api.github.com/users/alexvaca0/orgs",
"repos_url": "https://api.github.com/users/alexvaca0/repos",
"events_url": "https://api.github.com/users/alexvaca0/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexvaca0/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi ! We don't have `join`/`merge` on a certain column as in pandas.\r\nMaybe you can just use the [concatenate_datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datasets) function.\r\n"
] | 1,620,688,570,000 | 1,620,721,488,000 | null | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | **Is your feature request related to a problem? Please describe.**
I need to join 2 datasets, one that is in the hub and another I've created from my files. Is there an easy way to join these 2?
**Describe the solution you'd like**
Id like to join them with a merge or join method, just like pandas dataframes.
**Additional context**
If you want to extend an existing dataset with more data, for example for training a language model, you need that functionality. I've not found it in the documentation. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2344/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2343/comments | https://api.github.com/repos/huggingface/datasets/issues/2343/events | https://github.com/huggingface/datasets/issues/2343 | 883,208,539 | MDU6SXNzdWU4ODMyMDg1Mzk= | 2,343 | Columns are removed before or after map function applied? | {
"login": "taghizad3h",
"id": 8199406,
"node_id": "MDQ6VXNlcjgxOTk0MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8199406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taghizad3h",
"html_url": "https://github.com/taghizad3h",
"followers_url": "https://api.github.com/users/taghizad3h/followers",
"following_url": "https://api.github.com/users/taghizad3h/following{/other_user}",
"gists_url": "https://api.github.com/users/taghizad3h/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taghizad3h/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taghizad3h/subscriptions",
"organizations_url": "https://api.github.com/users/taghizad3h/orgs",
"repos_url": "https://api.github.com/users/taghizad3h/repos",
"events_url": "https://api.github.com/users/taghizad3h/events{/privacy}",
"received_events_url": "https://api.github.com/users/taghizad3h/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,620,614,180,000 | 1,620,614,180,000 | null | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Describe the bug
According to the documentation, when applying the map function the [remove_columns](https://huggingface.co/docs/datasets/processing.html#removing-columns) will be removed after they are passed to the function, but in the [source code](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) it's documented that they are removed before applying the function. I think the source code doc is more accurate, right?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2343/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2342/comments | https://api.github.com/repos/huggingface/datasets/issues/2342/events | https://github.com/huggingface/datasets/pull/2342 | 882,981,420 | MDExOlB1bGxSZXF1ZXN0NjM2NDg0MzM3 | 2,342 | Docs - CER above 1 | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,620,603,660,000 | 1,620,653,640,000 | 1,620,653,640,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2342",
"html_url": "https://github.com/huggingface/datasets/pull/2342",
"diff_url": "https://github.com/huggingface/datasets/pull/2342.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2342.patch",
"merged_at": 1620653640000
} | CER can actually be greater than 1. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2342/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2341/comments | https://api.github.com/repos/huggingface/datasets/issues/2341/events | https://github.com/huggingface/datasets/pull/2341 | 882,370,933 | MDExOlB1bGxSZXF1ZXN0NjM1OTExODI2 | 2,341 | Added the Ascent KB | {
"login": "phongnt570",
"id": 6749421,
"node_id": "MDQ6VXNlcjY3NDk0MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6749421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phongnt570",
"html_url": "https://github.com/phongnt570",
"followers_url": "https://api.github.com/users/phongnt570/followers",
"following_url": "https://api.github.com/users/phongnt570/following{/other_user}",
"gists_url": "https://api.github.com/users/phongnt570/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phongnt570/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phongnt570/subscriptions",
"organizations_url": "https://api.github.com/users/phongnt570/orgs",
"repos_url": "https://api.github.com/users/phongnt570/repos",
"events_url": "https://api.github.com/users/phongnt570/events{/privacy}",
"received_events_url": "https://api.github.com/users/phongnt570/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks for approving it!"
] | 1,620,569,859,000 | 1,620,724,619,000 | 1,620,724,619,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2341",
"html_url": "https://github.com/huggingface/datasets/pull/2341",
"diff_url": "https://github.com/huggingface/datasets/pull/2341.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2341.patch",
"merged_at": 1620724618000
} | Added the Ascent Commonsense KB of 8.9M assertions.
- Paper: [Advanced Semantics for Commonsense Knowledge Extraction (WWW'21)](https://arxiv.org/abs/2011.00905)
- Website: https://ascent.mpi-inf.mpg.de/
(I am the author of the dataset) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2341/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2340/comments | https://api.github.com/repos/huggingface/datasets/issues/2340/events | https://github.com/huggingface/datasets/pull/2340 | 882,370,824 | MDExOlB1bGxSZXF1ZXN0NjM1OTExNzIx | 2,340 | More consistent copy logic | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,620,569,853,000 | 1,620,723,513,000 | 1,620,723,513,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2340",
"html_url": "https://github.com/huggingface/datasets/pull/2340",
"diff_url": "https://github.com/huggingface/datasets/pull/2340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2340.patch",
"merged_at": 1620723513000
} | Use `info.copy()` instead of `copy.deepcopy(info)`.
`Features.copy` now creates a deep copy. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2340/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2338/comments | https://api.github.com/repos/huggingface/datasets/issues/2338/events | https://github.com/huggingface/datasets/pull/2338 | 882,046,077 | MDExOlB1bGxSZXF1ZXN0NjM1NjA3NzQx | 2,338 | fixed download link for web_science | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,620,551,540,000 | 1,620,653,753,000 | 1,620,653,753,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2338",
"html_url": "https://github.com/huggingface/datasets/pull/2338",
"diff_url": "https://github.com/huggingface/datasets/pull/2338.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2338.patch",
"merged_at": 1620653753000
} | Fixes #2337. Should work with:
`dataset = load_dataset("web_of_science", "WOS11967", ignore_verifications=True)` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2338/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2337/comments | https://api.github.com/repos/huggingface/datasets/issues/2337/events | https://github.com/huggingface/datasets/issues/2337 | 881,610,567 | MDU6SXNzdWU4ODE2MTA1Njc= | 2,337 | NonMatchingChecksumError for web_of_science dataset | {
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"I've raised a PR for this. Should work with `dataset = load_dataset(\"web_of_science\", \"WOS11967\", ignore_verifications=True)`once it gets merged into the main branch. Thanks for reporting this! "
] | 1,620,525,722,000 | 1,620,653,753,000 | 1,620,653,753,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | NonMatchingChecksumError when trying to download the web_of_science dataset.
>NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zip?dl=1']
Setting `ignore_verfications=True` results in OSError.
>OSError: Cannot find data file.
Original error:
[Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/37ab2c42f50d553c1d0ea432baca3e9e11fedea4aeec63a81e6b7e25dd10d4e7/WOS5736/X.txt'
```python
dataset = load_dataset('web_of_science', 'WOS5736')
```
There are 3 data instances ('WOS5736', 'WOS11967', 'WOS46985') and none of them works.
datasets 1.6.2
python 3.7.10
Ubuntu 18.04.5 LTS | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2337/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2336/comments | https://api.github.com/repos/huggingface/datasets/issues/2336/events | https://github.com/huggingface/datasets/pull/2336 | 881,298,783 | MDExOlB1bGxSZXF1ZXN0NjM0ODk1OTU5 | 2,336 | Fix overflow issue in interpolation search | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"~~Seems like the CI failure is unrelated to this PR~~ (fixed with the merge). \r\n\r\n@lhoestq Can you please verify that everything is OK in terms of speed? Another solution is to change the offsets array dtype to np.int64 (but this doesn't scale in theory compared to Python integer which is unbound). I'm not sure why on my 64-bit machine the default numpy dtype is np.int32 tho.",
"Hi ! Thanks for the fix.\r\nUnfortunately in terms of speed this is not acceptable :/\r\nThe `get_batch_of_1024_random_rows` metric or the `benchmark_getitem_100B ` benchmark is almost at 1sec instead of a few milliseconds.\r\n\r\nWould it be possible to avoid the overflow by simply passing `dtype=np.int64` to `np.cumsum` ?\r\nOn windows machines the default is int32 unfortunately so we have to force the dtype to be int64\r\n\r\n",
"Yes, casting the array to np.int64 should work as well. Another option would be to cast the array elements (`arr[i], arr[j]`) in interpolation search to Python integers (bound only with memory) before multiplication (the error stems from this part: `(j - i) * (x - arr[i])`) when working with big values. But for now, the first option is OK for the sake of simplicity."
] | 1,620,507,096,000 | 1,620,653,347,000 | 1,620,653,172,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2336",
"html_url": "https://github.com/huggingface/datasets/pull/2336",
"diff_url": "https://github.com/huggingface/datasets/pull/2336.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2336.patch",
"merged_at": 1620653172000
} | Fixes #2335
More info about this error can be found [here](https://stackoverflow.com/questions/53239890/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc/53240100). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2336/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2335/comments | https://api.github.com/repos/huggingface/datasets/issues/2335/events | https://github.com/huggingface/datasets/issues/2335 | 881,291,887 | MDU6SXNzdWU4ODEyOTE4ODc= | 2,335 | Index error in Dataset.map | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,620,506,697,000 | 1,620,653,172,000 | 1,620,653,172,000 | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | The following code, if executed on master, raises an IndexError (due to overflow):
```python
>>> from datasets import *
>>> d = load_dataset("bookcorpus", split="train")
Reusing dataset bookcorpus (C:\Users\Mario\.cache\huggingface\datasets\bookcorpus\plain_text\1.0.0\44662c4a114441c35200992bea923b170e6f13f2f0beb7c14e43759cec498700)
2021-05-08 21:23:46.859818: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
>>> d.map(lambda ex: ex)
0%|▎ | 289430/74004228 [00:13<58:41, 20935.33ex/s]c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py:84: RuntimeWarning: overflow encountered in int_scalars
k = i + ((j - i) * (x - arr[i]) // (arr[j] - arr[i]))
0%|▎ | 290162/74004228 [00:13<59:11, 20757.23ex/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1498, in map
new_fingerprint=new_fingerprint,
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 174, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\fingerprint.py", line 340, in wrapper
out = func(self, *args, **kwargs)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1799, in _map_single
for i, example in enumerate(pbar):
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\site-packages\tqdm\std.py", line 1133, in __iter__
for obj in iterable:
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1145, in __iter__
format_kwargs=format_kwargs,
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1337, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 368, in query_table
pa_subtable = _query_table(table, key)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 79, in _query_table
return table.fast_slice(key % table.num_rows, 1)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 128, in fast_slice
i = _interpolation_search(self._offsets, offset)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 91, in _interpolation_search
raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.")
IndexError: Invalid query '290162' for size 74004228.
```
Tested on Windows, can run on Linux if needed.
EDIT:
It seems like for this to happen, the default NumPy dtype has to be np.int32. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2335/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2334/comments | https://api.github.com/repos/huggingface/datasets/issues/2334/events | https://github.com/huggingface/datasets/pull/2334 | 879,810,107 | MDExOlB1bGxSZXF1ZXN0NjMzNTAzNTEw | 2,334 | Updating the DART file checksums in GEM | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@sebastianGehrmann "
] | 1,620,424,424,000 | 1,620,425,890,000 | 1,620,425,890,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2334",
"html_url": "https://github.com/huggingface/datasets/pull/2334",
"diff_url": "https://github.com/huggingface/datasets/pull/2334.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2334.patch",
"merged_at": 1620425890000
} | The DART files were just updated on the source GitHub
https://github.com/Yale-LILY/dart/commit/34b3c872da4811523e334f1631e54ca8105dffab | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2334/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2333 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2333/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2333/comments | https://api.github.com/repos/huggingface/datasets/issues/2333/events | https://github.com/huggingface/datasets/pull/2333 | 879,214,067 | MDExOlB1bGxSZXF1ZXN0NjMyOTUwNzIy | 2,333 | Fix duplicate keys | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"- @jplu "
] | 1,620,401,288,000 | 1,620,510,451,000 | 1,620,403,028,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2333",
"html_url": "https://github.com/huggingface/datasets/pull/2333",
"diff_url": "https://github.com/huggingface/datasets/pull/2333.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2333.patch",
"merged_at": 1620403028000
} | As noticed in https://github.com/huggingface/datasets/pull/2245, many datasets yield duplicate keys.
Most of the time it was because the counter used for ids was reset at each new data file. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2333/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2333/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2332/comments | https://api.github.com/repos/huggingface/datasets/issues/2332/events | https://github.com/huggingface/datasets/pull/2332 | 879,041,608 | MDExOlB1bGxSZXF1ZXN0NjMyNzk1NDE4 | 2,332 | Add note about indices mapping in save_to_disk docstring | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,620,395,382,000 | 1,620,408,048,000 | 1,620,408,048,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2332",
"html_url": "https://github.com/huggingface/datasets/pull/2332",
"diff_url": "https://github.com/huggingface/datasets/pull/2332.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2332.patch",
"merged_at": 1620408048000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2332/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2331/comments | https://api.github.com/repos/huggingface/datasets/issues/2331/events | https://github.com/huggingface/datasets/issues/2331 | 879,031,427 | MDU6SXNzdWU4NzkwMzE0Mjc= | 2,331 | Add Topical-Chat | {
"login": "ktangri",
"id": 22266659,
"node_id": "MDQ6VXNlcjIyMjY2NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/22266659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ktangri",
"html_url": "https://github.com/ktangri",
"followers_url": "https://api.github.com/users/ktangri/followers",
"following_url": "https://api.github.com/users/ktangri/following{/other_user}",
"gists_url": "https://api.github.com/users/ktangri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ktangri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ktangri/subscriptions",
"organizations_url": "https://api.github.com/users/ktangri/orgs",
"repos_url": "https://api.github.com/users/ktangri/repos",
"events_url": "https://api.github.com/users/ktangri/events{/privacy}",
"received_events_url": "https://api.github.com/users/ktangri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,620,395,039,000 | 1,620,395,039,000 | null | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Adding a Dataset
- **Name:** Topical-Chat
- **Description:** a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don’t have explicitly defined roles
- **Paper:** https://www.isca-speech.org/archive/Interspeech_2019/pdfs/3079.pdf
- **Data:** https://github.com/alexa/Topical-Chat
- **Motivation:** Good quality, knowledge-grounded dataset that spans a broad range of topics
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2331/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2330 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2330/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2330/comments | https://api.github.com/repos/huggingface/datasets/issues/2330/events | https://github.com/huggingface/datasets/issues/2330 | 878,490,927 | MDU6SXNzdWU4Nzg0OTA5Mjc= | 2,330 | Allow passing `desc` to `tqdm` in `Dataset.map()` | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @lhoestq,\r\nShould we change `desc` in [pbar](https://github.com/huggingface/datasets/blob/81fcf88172ed5e3026ef68aed4c0ec6980372333/src/datasets/arrow_dataset.py#L1860) to something meaningful?",
"I think the user could pass the `desc` parameter to `map` so that it can be displayed in the tqdm progress bar, as suggested by @cccntu.\r\n\r\nWhen there's no multiprocessing, the `desc` of the progress bar could be the `desc` passed by the user.\r\nIn multiprocessing, we were already using a `desc` equal to `\"#\" + str(rank)`.\r\nWe can change it to be `(desc or \"\") + \"#\" + str(rank)` instead.\r\n\r\nIn the end, since both `desc` and `rank` could be None, we can have:\r\n```python\r\npbar_desc = (desc or \"\") + \"#\" + str(rank) if rank is not None else desc\r\n```\r\n\r\nFinally let's remember that if we add `desc` as a new parameter to `map`, we should add it to the `ignore_kwargs` list of the `@fingerprint_transform` decorator of `Dataset._map_single` since we don't want this parameter to affect the fingerprint of the resulting dataset."
] | 1,620,366,774,000 | 1,622,041,161,000 | 1,622,041,161,000 | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | It's normal to have many `map()` calls, and some of them can take a few minutes,
so it would be nice to have a description on the progress bar.
Alternative solution:
Print the description before/after the `map()` call. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2330/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2330/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2329/comments | https://api.github.com/repos/huggingface/datasets/issues/2329/events | https://github.com/huggingface/datasets/pull/2329 | 877,924,198 | MDExOlB1bGxSZXF1ZXN0NjMxODA3MTk0 | 2,329 | Add cache dir for in-memory datasets | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Yes, having `cache_dir` as an attribute looks cleaner.\r\n\r\n\r\n\r\n",
"Good job! Looking forward to this new feature! 🥂",
"@lhoestq Sorry for the late reply. Yes, I'll start working on tests. Thanks for the detailed explanation of the current issues with caching (like the idea of adding the `use_caching` parameter to `load_dataset`) ",
"@lhoestq Sure. I'm aware this is a high-priority issue to some extent, so feel free to take over.\r\n\r\nFew suggestions I have:\r\n* there is a slight difference between setting `use_caching` to `False` in `load_dataset` and disabling caching globally with `set_caching_enabled(False)` because the former will never execute the following code (`self._cache_dir` is always `False`): \r\nhttps://github.com/huggingface/datasets/blob/c231abdb174987419bbde3360b5b9d6a4672c736/src/datasets/arrow_dataset.py#L1807-L1824\r\n, so I'm just checking whether this is intended (if yes, maybe the docs should mention this) or not?\r\n* think we should add the `use_caching` parameter to every method that has the `keep_in_memory` (and `in_memory` 😃) parameter in its signature for better consistency, but I say let's address this in a separate PR. IMO we need one more PR that will deal exclusively with consistency in the caching logic.",
"Hi @mariosasko \r\nWe discussed internally and we think that this feature might not be the direction we're doing to take for these reasons:\r\n\r\n- it goes against our simple definition of caching: on-disk == uses file cache, and in-memory == nothing is written to disk. I think it adds too much complexity just for a minimal flexibility addition\r\n- there are a few edge cases which are really confusing:\r\n - map on an in memory dataset with a cache_file_name specified by the user -> should the result be in memory or from disk ?\r\n - it would require a special cache directory just for in memory datasets, since they don’t have a preferred directory for caching\r\n- it would break a lot of stuff and would require to rewrite significant parts of the core code and the tests\r\n\r\n\r\nSo in the end we're probably going to close this PR.\r\nLet me know what you think, and thank you anyway for your help on this !",
"Hi,\r\n\r\nI'm fine with that. I agree this adds too much complexity. Btw, I like the idea of reverting default in-memory for small datasets that led to this PR.",
"Superseded by #2460 (to close issue #2458)."
] | 1,620,329,732,000 | 1,623,181,608,000 | 1,623,179,206,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2329",
"html_url": "https://github.com/huggingface/datasets/pull/2329",
"diff_url": "https://github.com/huggingface/datasets/pull/2329.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2329.patch",
"merged_at": null
} | Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.
Should fix #2322 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2329/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2328/comments | https://api.github.com/repos/huggingface/datasets/issues/2328/events | https://github.com/huggingface/datasets/pull/2328 | 877,673,896 | MDExOlB1bGxSZXF1ZXN0NjMxNTg2MzU2 | 2,328 | Add Matthews/Pearson/Spearman correlation metrics | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,620,317,367,000 | 1,620,320,290,000 | 1,620,320,290,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2328",
"html_url": "https://github.com/huggingface/datasets/pull/2328",
"diff_url": "https://github.com/huggingface/datasets/pull/2328.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2328.patch",
"merged_at": 1620320290000
} | Added three metrics:
- The Matthews correlation coefficient (from sklearn)
- The Pearson correlation coefficient (from scipy)
- The Spearman correlation coefficient (from scipy)
cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2328/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2327/comments | https://api.github.com/repos/huggingface/datasets/issues/2327/events | https://github.com/huggingface/datasets/issues/2327 | 877,565,831 | MDU6SXNzdWU4Nzc1NjU4MzE= | 2,327 | A syntax error in example | {
"login": "mymusise",
"id": 6883957,
"node_id": "MDQ6VXNlcjY4ODM5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mymusise",
"html_url": "https://github.com/mymusise",
"followers_url": "https://api.github.com/users/mymusise/followers",
"following_url": "https://api.github.com/users/mymusise/following{/other_user}",
"gists_url": "https://api.github.com/users/mymusise/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mymusise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mymusise/subscriptions",
"organizations_url": "https://api.github.com/users/mymusise/orgs",
"repos_url": "https://api.github.com/users/mymusise/repos",
"events_url": "https://api.github.com/users/mymusise/events{/privacy}",
"received_events_url": "https://api.github.com/users/mymusise/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"cc @beurkinger but I think this has been fixed internally and will soon be updated right ?",
"This issue has been fixed."
] | 1,620,311,684,000 | 1,621,479,859,000 | 1,621,479,859,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | 
Sorry to report this with an image; I can't find the template source code of this snippet. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2327/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2326/comments | https://api.github.com/repos/huggingface/datasets/issues/2326/events | https://github.com/huggingface/datasets/pull/2326 | 876,829,254 | MDExOlB1bGxSZXF1ZXN0NjMwODk3MjI4 | 2,326 | Enable auto-download for PAN-X / Wikiann domain in XTREME | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,620,248,318,000 | 1,620,376,870,000 | 1,620,376,870,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2326",
"html_url": "https://github.com/huggingface/datasets/pull/2326",
"diff_url": "https://github.com/huggingface/datasets/pull/2326.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2326.patch",
"merged_at": 1620376870000
} | This PR replaces the manual download of the `PAN-X.lang` domains with an auto-download from a Dropbox link provided by the Wikiann author. We also add the relevant dummy data for these domains.
While re-generating `dataset_infos.json` I ran into a `KeyError` in the `udpos.Arabic` domain, so I have included a fix for this as well. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2326/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2326/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2325/comments | https://api.github.com/repos/huggingface/datasets/issues/2325/events | https://github.com/huggingface/datasets/pull/2325 | 876,653,121 | MDExOlB1bGxSZXF1ZXN0NjMwNzU1MzIx | 2,325 | Added the HLGD dataset | {
"login": "tingofurro",
"id": 2609265,
"node_id": "MDQ6VXNlcjI2MDkyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2609265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tingofurro",
"html_url": "https://github.com/tingofurro",
"followers_url": "https://api.github.com/users/tingofurro/followers",
"following_url": "https://api.github.com/users/tingofurro/following{/other_user}",
"gists_url": "https://api.github.com/users/tingofurro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tingofurro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tingofurro/subscriptions",
"organizations_url": "https://api.github.com/users/tingofurro/orgs",
"repos_url": "https://api.github.com/users/tingofurro/repos",
"events_url": "https://api.github.com/users/tingofurro/events{/privacy}",
"received_events_url": "https://api.github.com/users/tingofurro/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Is there anything else needed from my end?",
"Thanks Bhavitvya and Quentin, this was very streamlined!"
] | 1,620,233,609,000 | 1,620,831,313,000 | 1,620,828,998,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2325",
"html_url": "https://github.com/huggingface/datasets/pull/2325",
"diff_url": "https://github.com/huggingface/datasets/pull/2325.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2325.patch",
"merged_at": 1620828998000
} | Added the Headline Grouping Dataset (HLGD), from the NAACL2021 paper: News Headline Grouping as a Challenging NLU Task
Dataset Link: https://github.com/tingofurro/headline_grouping
Paper link: https://people.eecs.berkeley.edu/~phillab/pdfs/NAACL2021_HLG.pdf | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2325/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2324 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2324/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2324/comments | https://api.github.com/repos/huggingface/datasets/issues/2324/events | https://github.com/huggingface/datasets/pull/2324 | 876,602,064 | MDExOlB1bGxSZXF1ZXN0NjMwNzE1NTQz | 2,324 | Create Audio feature | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/8",
"html_url": "https://github.com/huggingface/datasets/milestone/8",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels",
"id": 6968069,
"node_id": "MI_kwDODunzps4AalMF",
"number": 8,
"title": "1.12",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 4,
"closed_issues": 2,
"state": "open",
"created_at": 1626881696000,
"updated_at": 1634120793000,
"due_on": 1630306800000,
"closed_at": null
} | [
"For optimal storage, it would be better to:\r\n- store only the audio file path in the cache Arrow file\r\n- perform decoding of the audio file (into audio array and sample rate) on the fly, while loading the dataset from cache (or by adding a convenient `load_audio` function)",
"Thanks a lot @lhoestq for your helpful insights! 🤗 ",
"Just one step before having a first running example to benchmark.\r\n\r\nDecision to make: how to call the function `dataset.features.decode_example`:\r\n- The usual approach until now in speech applications: call it in a subsequent `.map` function\r\n - Pros: multiprocessing can be used out of the box\r\n - Cons: large disk storage required for caching decoded audio files, although having it cached will enhance speed for further usage\r\n- Approach suggested by @lhoestq (see above: https://github.com/huggingface/datasets/pull/2324#discussion_r660758683): doing it in formatting\r\n - Pros: no large disk storage required, as it will be done on the fly while iterating on the dataset\r\n - Cons: it is not cached; need to implement multiprocessing for this case\r\n- Other pros/cons for the previous options?\r\n- Other options?\r\n\r\ncc: @lhoestq @patrickvonplaten @anton-l ",
"@albertvillanova I'm in two minds about this, to be honest. For example, if we consider CommonVoice, which is encoded in lossy `mp3`:\n\n- If we decompress `mp3` into raw `wav` arrays, loading a batch will speed up about 40x.\n- However, a 60gb English mp3 dataset will blow up to about 600gb raw (iirc), which is why loading on-the-fly (optionally?) could be very beneficial as well.",
"Users can do the conversion from mp3 to wav by themselves if they want to using `map`.\r\n\r\nIMO it's better if we can keep the decoding part with the minimal features to be both easy to understand and flexible, i.e. just having the on-the-fly decoding of the audio data (with the sampling rate parameter)\r\n\r\nDecompressing from mp3 to wav sounds like an optimization that depends on the problem that the user wants to solve, the constrains from its environment (disk space, IO speed), and other parameters (optimal training speed for example). Therefore I would leave this to the user to decide whether it has to do it or not.\r\n\r\nLet me know what you think about this",
"@albertvillanova, In my opinion the pros strongly outweigh the cons in the @lhoestq's suggestion which is why I think we should go forward with it. \r\n\r\nThe cons:\r\n- \"the operation won't be cached\" is not to important as the user will most likely access just a couple of audio array to see how it looks like and then for the \"full\" feature extraction she/he will make use of `.map(...)` anyways which means that the result will be cached. \r\n- Regarding the multi-processing - if I understand correctly it'll follow the same logic here -> the user will only access some audio arrays for testing playing around with the model but use `.map(...)` for larger operations where multi-processing would still work as before.\r\n\r\nThe advantages mostly solve the main poinpoints being:\r\n- exploding disk space\r\n- bad user experience since the audio is not loaded on the go\r\n\r\n=> So I'm very much in favor of the \"direct-access\" feature",
"Update: I've retaken this issue.\r\n\r\nIf the decoding logic is implemented when \"examples are accessed\", then if afterwards we use the `.map`, it tries to apply the decoding twice (as maps iterates over the examples, thus \"accessing them\", before trying to apply the map function)...\r\n\r\nI'm thinking on some other approach...",
"I have reimplemented the previous approach, so that we can discuss about it: examples are decoded when accessed.",
"What about creating a new specific formatting, just for decoding? This would be only active within a context manager.",
"Hi @lhoestq, as we discussed, I've followed your suggestion of implementing the decoding step within the formatting logic: extract-decode-format. Feel free to tell me what you think.\r\n\r\n@patrickvonplaten and @anton-l, could you have a look at the use case in the test (https://github.com/huggingface/datasets/pull/2324/files#diff-58e348f6e4deaa5f3119e420a5d48ebb82875a78c28628831748fb54f59b2c78R34-R50) and tell me if this is aligned with your needs? Thanks.",
"Hi @lhoestq, if you validate this approach, we could merge the Audio feature this (or early next) week.",
"Sure it looks nice this way :) Feel free to continue !",
"As discussed, we should pay attention when applying `map` to a dataset with `Audio` feature, in order to avoid decoding the audio data twice.\r\n\r\nOne proposed solution is to pass `input_columns` to `map`. Just, note that the field containing the Audio feature should not be passed in `input_columns` (not possible, for example, to map the audio file path to a new directory).\r\n\r\nI suggest again (3rd time, sorry, lol) using a formatting context manager (as we already use for PyTorch/TensorFlow: https://huggingface.co/docs/datasets/torch_tensorflow.html).\r\n\r\nAbove (https://github.com/huggingface/datasets/pull/2324#issuecomment-915244003), I suggested to define a formatting just for decoding: the decoding of the audio data is only performed if this specific formatting is set (`ds.set_format(\"decoding\")`) or within a context manager (`with ds.formatted_as(\"decoding\"): ...`)\r\n\r\nNow, I would like also to suggest an alternative formatting for **non-decoding** (if decoding is the default behavior), for a use case like this:\r\n```python\r\ndef change_dir(example):\r\n example[\"audio\"] = \"dir/\" + example[\"audio\"]\r\n\r\n\r\nwith ds.formatted_as(\"no_decoding\"):\r\n print(ds[0]) # {\"audio\": \"path/to/file.wav\"}\r\n ds.map(change_dir)\r\n print(ds[0]) # {\"audio\": \"dir/path/to/file.wav\"}\r\n\r\nprint(ds[0]) # {\"audio\": {\"path\": \"dir/path/to/file.wav\", \"array\": np.array([1., 2., 3...]), \"sampling_rate\": 44100}}\r\n```\r\n\r\nPlease, just tell me what you think.\r\nCC: @lhoestq @patrickvonplaten @anton-l ",
"> As discussed, we should pay attention when applying `map` to a dataset with `Audio` feature, in order to avoid decoding the audio data twice.\r\n> \r\n> One proposed solution is to pass `input_columns` to `map`. Just, note that the field containing the Audio feature should not be passed in `input_columns` (not possible, for example, to map the audio file path to a new directory).\r\n> \r\n> I suggest again (3rd time, sorry, lol) using a formatting context manager (as we already use for PyTorch/TensorFlow: https://huggingface.co/docs/datasets/torch_tensorflow.html).\r\n> \r\n> Above ([#2324 (comment)](https://github.com/huggingface/datasets/pull/2324#issuecomment-915244003)), I suggested to define a formatting just for decoding: the decoding of the audio data is only performed if this specific formatting is set (`ds.set_format(\"decoding\")`) or within a context manager (`with ds.formatted_as(\"decoding\"): ...`)\r\n> \r\n> Now, I would like also to suggest an alternative formatting for **non-decoding** (if decoding is the default behavior), for a use case like this:\r\n> \r\n> ```python\r\n> def change_dir(example):\r\n> example[\"audio\"] = \"dir/\" + example[\"audio\"]\r\n> \r\n> \r\n> with ds.formatted_as(\"no_decoding\"):\r\n> print(ds[0]) # {\"audio\": \"path/to/file.wav\"}\r\n> ds.map(change_dir)\r\n> print(ds[0]) # {\"audio\": \"dir/path/to/file.wav\"}\r\n> \r\n> print(ds[0]) # {\"audio\": {\"path\": \"dir/path/to/file.wav\", \"array\": np.array([1., 2., 3...]), \"sampling_rate\": 44100}}\r\n> ```\r\n> \r\n> Please, just tell me what you think.\r\n> CC: @lhoestq @patrickvonplaten @anton-l\r\n\r\nI'm fine with a context manager! There is no way to **not** decode the audio if its key is not accessed no?\r\n\r\nE.g.\r\n\r\n```python\r\ndef load(batch):\r\n batch[\"speech_array\"] = torchaudio.load(batch[\"file\"])\r\n return batch\r\n\r\nds.map(load)\r\n```\r\n\r\ndoes *e.g.* not access the \"audio\" key `batch[\"audio\"}` but there is no way to not decode it without major changes no? \r\n\r\n=> I'm happy with both the context manager and using `input_colmuns`. Both of those solutions are equally good to me if a \"not-access-key-no-decoding\" solution is just not feasible. I let you guys decide :-)",
"> \r\n> There is no way to **not** decode the audio if its key is not accessed no?\r\n> \r\n> E.g...\r\n> \r\n> does _e.g._ not access the \"audio\" key `batch[\"audio\"}` but there is no way to not decode it without major changes no?\r\n\r\n@patrickvonplaten I think therefore we should rethink the implementation of the Audio feature: its goal is to enrich/simplify the user experience when working with audio files. If on the other hand, you see that the current implementation may be problematic/unsatisfying/not-optimal, then we miss the point of creating this feature. This feature should be useful to users, not inconvenient.",
"> > There is no way to **not** decode the audio if its key is not accessed no?\r\n> > E.g...\r\n> > does _e.g._ not access the \"audio\" key `batch[\"audio\"}` but there is no way to not decode it without major changes no?\r\n> \r\n> @patrickvonplaten I think therefore we should rethink the implementation of the Audio feature: its goal is to enrich/simplify the user experience when working with audio files. If on the other hand, you see that the current implementation may be problematic/unsatisfying/not-optimal, then we miss the point of creating this feature. This feature should be useful to users, not inconvenient.\r\n\r\nThanks a lot for the message! I'm discussing a bit with @anton-l at the moment - will share our results as soon as possible",
"Current implementation: see use cases in file https://github.com/huggingface/datasets/blob/0f80e6eaa6f596ff6287eb33587e2d9c69af0e73/tests/features/test_audio.py\r\n\r\nAutomatic decoding:\r\n- when directly accessing an example or a batch\r\n ```python\r\n dset[0]\r\n dset[:2]\r\n ```\r\n- during map, only if audio field is accessed:\r\n ```python\r\n def process_audio_sampling_rate(example):\r\n example[\"double_sampling_rate\"] = 2 * example[\"audio\"][\"sampling_rate\"]\r\n return example\r\n\r\n decoded_dset = dset.map(process_audio_sampling_rate)\r\n ```\r\n\r\nNo automatic decoding:\r\n- during map if audio field is not accessed:\r\n ```python\r\n def process_text(example):\r\n example[\"text\"] = example[\"text\"] + \" World!\"\r\n return example\r\n\r\n decoded_dset = dset.map(process_text)\r\n ```\r\n\r\nThe types of example and batch are kept as usual, `dict[str, Any]` and `dict[str, list[Any]]` respectively.\r\n\r\nCC: @patrickvonplaten @anton-l @lhoestq ",
"That's awesome! Thanks so much for your work on this @albertvillanova!",
"Oh and maybe have a test to make sure that casting the Audio feature to change the sampling rate works as expected ?",
"@lhoestq the test for the resampling is already in place in `test_audio_resampling`: \r\nhttps://github.com/huggingface/datasets/pull/2324/files#diff-58e348f6e4deaa5f3119e420a5d48ebb82875a78c28628831748fb54f59b2c78R48-R56",
"Please note that we should agree in the API: see 53d6d73\r\n\r\nThis is just a proposal implementation:\r\n- Create a new method named `cast_column`, which performs a shallow kind of cast (without using `map()` or caching)\r\n\r\nWe should agree in the name, because as it is, it might be confused with `cast` (and users might think `cast_column` caches the result as `cast`)\r\n\r\nCC: @lhoestq @patrickvonplaten @anton-l ",
"IMO cast and cast_column should have the exact same behavior, to make the experience simple for the user (no distinction between shallow or deep cast).\r\n\r\nMaybe we should change `cast` to use `cast_column` on every column and make `cast_column` use `map` if and only if it's necessary. For Audio for example `map` is not needed.\r\n\r\nWe just need to do some tests to know which casts always need map and which ones don't. This implies either looking at the PyArrow source code (the documentation doesn't mention all these details) or playing with PyArrow to figure it out.\r\n\r\nI guess for now we can just have the simplest `cast_column` which always uses map unless it's an Audio feature type.\r\n\r\nLet me know what you think !",
"@lhoestq I totally agree: `cast` and `cast_column` should be analog to each other.\r\n\r\nFor the implementation, let me try something simpler than the one suggested by you...",
"@lhoestq what do you think of an approach like this 633ef09?\r\n\r\nIf it's OK, then we should implement passing parameters to `cast`.",
"@lhoestq maybe for now we could make a simple implementation and finish this PR. Then we could make a follow-up PR to deal specifically with the optimal implementation of `cast_column` and `cast`, as this issue is not specific to the Audio feature.",
"> @lhoestq what do you think of an approach like this 633ef09?\r\n\r\nYea that's good enough for the time being :)\r\n\r\nI think the last thing we need to do is make sure that `cast_column` changes the fingerprint of the dataset. Feel free to use the `fingerprint_transform` decorator, as for `remove_columns`.\r\n\r\n(note that cast currently doesn't use the decorator since it's based on `map` that already updates the fingerprint).",
"> \r\n> I think the last thing we need to do is make sure that `cast_column` changes the fingerprint of the dataset. Feel free to use the `fingerprint_transform` decorator, as for `remove_columns`.\r\n> \r\n> (note that cast currently doesn't use the decorator since it's based on `map` that already updates the fingerprint).\r\n\r\n@lhoestq note that `cast_column` may call `cast` in some cases, and the decorator would not be necessary for these cases...\r\n- I did it by setting `inplace=False`, and updating fingerprint explicitly only when `cast` is not called.",
"I think current state of this PR could be included in our next release, as experimental feature, for stress testing it and try to find all potential issues. What do you think?\r\n\r\nCC: @lhoestq @patrickvonplaten @anton-l ",
"Looks great! Ready to try it out on the transformers examples after the release :)",
"Think we are good to merge here no? :-)"
] | 1,620,230,122,000 | 1,634,120,793,000 | 1,634,120,793,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2324",
"html_url": "https://github.com/huggingface/datasets/pull/2324",
"diff_url": "https://github.com/huggingface/datasets/pull/2324.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2324.patch",
"merged_at": 1634120793000
} | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require more advanced functionality later, we could eventually switch libraries.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them.
- For tests, I require the audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, which require an additional library to be installed with the distribution package manager.
- I also require `pytest-datadir`, which allows having (audio) data files for tests.
- The audio data contain: array and sample_rate.
- The array is reshaped as a 1D array (the expected input for `Wav2Vec2`); a minimal decoding sketch is given below.
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`.
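For illustration, a minimal sketch of what the decoding step could look like with `soundfile` (the file path, variable names and reshaping details here are assumptions, not the final implementation):
```python
import soundfile as sf

# Hypothetical audio file, for illustration only
path = "example.wav"

array, sampling_rate = sf.read(path)  # array may be 2D for multi-channel audio
array = array.reshape(-1)             # flatten to a 1D array, the shape expected by Wav2Vec2

decoded = {"array": array, "sampling_rate": sampling_rate}
```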
## Requirements Specification
- Access example with audio loading and resampling:
```python
ds[0]["audio"]
```
- Map with audio loading & resampling:
```python
def preprocess(batch):
batch["input_values"] = processor(batch["audio"]).input_values
return batch
ds = ds.map(preprocess)
```
- Map without audio loading and resampling:
```python
def preprocess(batch):
batch["labels"] = processor(batch["target_text"]).input_values
return batch
ds = ds.map(preprocess)
```
- Additional requirement specification (see https://github.com/huggingface/datasets/pull/2324#pullrequestreview-768864998): Cast audio column to change sampling rate:
```python
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2324/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2324/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2323/comments | https://api.github.com/repos/huggingface/datasets/issues/2323/events | https://github.com/huggingface/datasets/issues/2323 | 876,438,507 | MDU6SXNzdWU4NzY0Mzg1MDc= | 2,323 | load_dataset("timit_asr") gives back duplicates of just one sample text | {
"login": "ekeleshian",
"id": 33647474,
"node_id": "MDQ6VXNlcjMzNjQ3NDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/33647474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekeleshian",
"html_url": "https://github.com/ekeleshian",
"followers_url": "https://api.github.com/users/ekeleshian/followers",
"following_url": "https://api.github.com/users/ekeleshian/following{/other_user}",
"gists_url": "https://api.github.com/users/ekeleshian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekeleshian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekeleshian/subscriptions",
"organizations_url": "https://api.github.com/users/ekeleshian/orgs",
"repos_url": "https://api.github.com/users/ekeleshian/repos",
"events_url": "https://api.github.com/users/ekeleshian/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekeleshian/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Upgrading datasets to version 1.6 fixes the issue",
"This bug was fixed in #1995. Upgrading the `datasets` should work! ",
"Thanks @ekeleshian for having reported.\r\n\r\nI am closing this issue once that you updated `datasets`. Feel free to reopen it if the problem persists."
] | 1,620,220,488,000 | 1,620,383,550,000 | 1,620,383,550,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Describe the bug
When you look up the key ["train"] and then ['text'], you get back a list with just one sentence, "Would such an act of refusal be useful?", duplicated 4620 times. Similarly, when you look up ['test'] and then ['text'], the list is the single sentence "The bungalow was pleasantly situated near the shore." repeated 1680 times.
I tried to work around the issue by downgrading to datasets version 1.3.0, inspired by [this post](https://www.gitmemory.com/issue/huggingface/datasets/2052/798904836) and removing the entire huggingface directory from ~/.cache, but I still get the same issue.
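For reference, the comments on this issue indicate the duplication was fixed in #1995 and shipped with `datasets` 1.6, so a hedged sketch of the fix is to upgrade and force a fresh download (whether `download_mode` accepts this string depends on the installed version):
```python
from datasets import load_dataset

# After upgrading datasets (>= 1.6), force a re-download so the fixed loader regenerates the cache
timit = load_dataset("timit_asr", download_mode="force_redownload")
print(timit["train"][0]["text"])
```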
## Steps to reproduce the bug
```python
from datasets import load_dataset
timit = load_dataset("timit_asr")
print(timit['train']['text'])
print(timit['test']['text'])
```
## Expected Result
Rows of diverse text, as shown in the [wav2vec2.0 tutorial](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb)
<img width="485" alt="Screen Shot 2021-05-05 at 9 09 57 AM" src="https://user-images.githubusercontent.com/33647474/117146094-d9b77f00-ad81-11eb-8306-f281850c127a.png">
## Actual results
Rows of repeated text.
<img width="319" alt="Screen Shot 2021-05-05 at 9 11 53 AM" src="https://user-images.githubusercontent.com/33647474/117146231-f8b61100-ad81-11eb-834a-fc10410b0c9c.png">
## Versions
- Datasets: 1.3.0
- Python: 3.9.1
- Platform: macOS-11.2.1-x86_64-i386-64bit
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2323/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2322/comments | https://api.github.com/repos/huggingface/datasets/issues/2322/events | https://github.com/huggingface/datasets/issues/2322 | 876,383,853 | MDU6SXNzdWU4NzYzODM4NTM= | 2,322 | Calls to map are not cached. | {
"login": "villmow",
"id": 2743060,
"node_id": "MDQ6VXNlcjI3NDMwNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/villmow",
"html_url": "https://github.com/villmow",
"followers_url": "https://api.github.com/users/villmow/followers",
"following_url": "https://api.github.com/users/villmow/following{/other_user}",
"gists_url": "https://api.github.com/users/villmow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/villmow/subscriptions",
"organizations_url": "https://api.github.com/users/villmow/orgs",
"repos_url": "https://api.github.com/users/villmow/repos",
"events_url": "https://api.github.com/users/villmow/events{/privacy}",
"received_events_url": "https://api.github.com/users/villmow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"I tried upgrading to `datasets==1.6.2` and downgrading to `1.6.0`. Both versions produce the same output.\r\n\r\nDowngrading to `1.5.0` works and produces the following output for me:\r\n\r\n```bash\r\nDownloading: 9.20kB [00:00, 3.94MB/s] \r\nDownloading: 5.99kB [00:00, 3.29MB/s] \r\nNo config specified, defaulting to: sst/default\r\nDownloading and preparing dataset sst/default (download: 6.83 MiB, generated: 3.73 MiB, post-processed: Unknown size, total: 10.56 MiB) to /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b...\r\n Dataset sst downloaded and prepared to /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b. Subsequent calls will reuse this data.\r\nexecuted [0, 1]\r\n#0: 0%| | 0/5 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/5 [00:00<?, ?ba/s]\r\nexecuted [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\nexecuted [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281]\r\nexecuted [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]\r\nexecuted [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281]\r\nexecuted [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]\r\nexecuted [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281]\r\nexecuted [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009]\r\nexecuted [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281]\r\nexecuted [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009]\r\n#0: 100%|██████████| 5/5 [00:00<00:00, 94.83ba/s]\r\nexecuted [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281]\r\n#1: 100%|██████████| 5/5 [00:00<00:00, 92.75ba/s]\r\nexecuted [0, 1]\r\n#0: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/1 [00:00<?, ?ba/s]\r\nexecuted [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\nexecuted [551, 552, 553, 554, 555, 556, 557, 558, 559, 560]\r\n#0: 100%|██████████| 1/1 [00:00<00:00, 118.81ba/s]\r\n#1: 100%|██████████| 1/1 [00:00<00:00, 123.06ba/s]\r\nexecuted [0, 1]\r\n#0: 0%| | 0/2 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/2 [00:00<?, ?ba/s]\r\nexecuted [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\nexecuted [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114]\r\nexecuted [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]\r\n#0: 100%|██████████| 2/2 [00:00<00:00, 119.42ba/s]\r\nexecuted [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114]\r\n#1: 100%|██████████| 2/2 [00:00<00:00, 123.33ba/s]\r\n\r\n\r\n\r\n ############################## \r\n\r\n\r\n\r\nexecuted [0, 1]\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-6079777aa097c8f8.arrow\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-2dc05c46f68eda6e.arrow\r\nexecuted [0, 1]\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-1ca347e7430b98f1.arrow\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-c0f1a73ce3ba40cd.arrow\r\nexecuted [0, 1]\r\nLoading cached processed dataset at 
/home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-832a1407bf1ac5b7.arrow\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-036316a259b773c4.arrow\r\n- Datasets: 1.5.0\r\n- Python: 3.8.3 (default, May 19 2020, 18:47:26) \r\n[GCC 7.3.0]\r\n- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10\r\n```",
"Hi,\r\n\r\nset `keep_in_memory` to False when loading a dataset (`sst = load_dataset(\"sst\", keep_in_memory=False)`) to prevent it from loading in-memory. Currently, in-memory datasets fail to find cached files due to this check (always False for them):\r\n\r\nhttps://github.com/huggingface/datasets/blob/241a0b4a3a868778ee91e767ad406f9da7610df2/src/datasets/arrow_dataset.py#L1718\r\n\r\n@albertvillanova It seems like this behavior was overlooked in #2182.\r\n\r\n",
"Hi @villmow, thanks for reporting. \r\n\r\nAs @mariosasko has pointed out, we did not consider this case when introducing the feature of automatic in-memory for small datasets. This needs to be fixed.",
"Hi ! Currently a dataset that is in memory doesn't know doesn't know in which directory it has to read/write cache files.\r\nOn the other hand, a dataset that loaded from the disk (via memory mapping) uses the directory from which the dataset is located to read/write cache files.\r\n\r\nBecause of that, currently in-memory datasets simply don't use caching.\r\n\r\nMaybe a Dataset object could have a `cache_dir` that is set to the directory where the arrow files are created during `load_dataset` ?",
"Fixed once reverted the default in-memory feature:\r\nClosed by #2460 (to close issue #2458).",
"Please @villmow, feel free to update to `Datasets` latest version (1.8)."
] | 1,620,216,687,000 | 1,623,179,402,000 | 1,623,179,301,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])
return samples
# first call
x = sst.map(foo, batched=True, with_indices=True, num_proc=2)
print('\n'*3, "#" * 30, '\n'*3)
# second call
y = sst.map(foo, batched=True, with_indices=True, num_proc=2)
# print version
import sys
import platform
print(f"""
- Datasets: {datasets.__version__}
- Python: {sys.version}
- Platform: {platform.platform()}
""")
```
## Actual results
This code prints the following output for me:
```bash
No config specified, defaulting to: sst/default
Reusing dataset sst (/home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/b8a7889ef01c5d3ae8c379b84cc4080f8aad3ac2bc538701cbe0ac6416fb76ff)
#0: 0%| | 0/5 [00:00<?, ?ba/s]
#1: 0%| | 0/5 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281]
executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]
executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281]
executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009]
executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281]
executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009]
#0: 100%|██████████| 5/5 [00:00<00:00, 59.85ba/s]
executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281]
#1: 100%|██████████| 5/5 [00:00<00:00, 60.85ba/s]
#0: 0%| | 0/1 [00:00<?, ?ba/s]
#1: 0%| | 0/1 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
#0: 100%|██████████| 1/1 [00:00<00:00, 69.32ba/s]
executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560]
#1: 100%|██████████| 1/1 [00:00<00:00, 70.93ba/s]
#0: 0%| | 0/2 [00:00<?, ?ba/s]
#1: 0%| | 0/2 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
#0: 100%|██████████| 2/2 [00:00<00:00, 63.25ba/s]
executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114]
executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114]
#1: 100%|██████████| 2/2 [00:00<00:00, 57.69ba/s]
##############################
#0: 0%| | 0/5 [00:00<?, ?ba/s]
#1: 0%| | 0/5 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281]
executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]
executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281]
executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009]
executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009]
#0: 100%|██████████| 5/5 [00:00<00:00, 58.10ba/s]
executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281]
executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281]
#1: 100%|██████████| 5/5 [00:00<00:00, 57.19ba/s]
#0: 0%| | 0/1 [00:00<?, ?ba/s]
#1: 0%| | 0/1 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
#0: 100%|██████████| 1/1 [00:00<00:00, 60.10ba/s]
executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560]
#1: 100%|██████████| 1/1 [00:00<00:00, 53.82ba/s]
#0: 0%| | 0/2 [00:00<?, ?ba/s]
#1: 0%| | 0/2 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114]
#0: 100%|██████████| 2/2 [00:00<00:00, 72.76ba/s]
executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114]
#1: 100%|██████████| 2/2 [00:00<00:00, 71.55ba/s]
- Datasets: 1.6.1
- Python: 3.8.3 (default, May 19 2020, 18:47:26)
[GCC 7.3.0]
- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10
```
## Expected results
Caching should work.
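A hedged sketch of the workaround suggested in the comments: load the dataset without keeping it in memory, so the cache lookup for `map` results is not skipped in this version of `datasets`:
```python
import datasets

# Workaround from the discussion: in-memory datasets skip the cache check in this version,
# so keep the dataset memory-mapped from disk instead
sst = datasets.load_dataset("sst", keep_in_memory=False)
```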
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2322/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2321/comments | https://api.github.com/repos/huggingface/datasets/issues/2321/events | https://github.com/huggingface/datasets/pull/2321 | 876,304,364 | MDExOlB1bGxSZXF1ZXN0NjMwNDc3NDUy | 2,321 | Set encoding in OSCAR dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,620,210,423,000 | 1,620,211,855,000 | 1,620,211,855,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2321",
"html_url": "https://github.com/huggingface/datasets/pull/2321",
"diff_url": "https://github.com/huggingface/datasets/pull/2321.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2321.patch",
"merged_at": 1620211854000
} | Set explicit `utf-8` encoding in OSCAR dataset, to avoid using the system default `cp1252` on Windows platforms.
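A minimal sketch of the kind of change (the exact call site inside the loading script is an assumption):
```python
# Hypothetical file path, for illustration only
filepath = "oscar_shard.txt"

# Explicit encoding avoids the platform-dependent default (cp1252 on Windows)
with open(filepath, encoding="utf-8") as f:
    for line in f:
        text = line.rstrip("\n")
```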
Fix #2319. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2321/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2320/comments | https://api.github.com/repos/huggingface/datasets/issues/2320/events | https://github.com/huggingface/datasets/pull/2320 | 876,257,026 | MDExOlB1bGxSZXF1ZXN0NjMwNDM5NjI5 | 2,320 | Set default name in init_dynamic_modules | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,620,207,003,000 | 1,620,287,874,000 | 1,620,287,874,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2320",
"html_url": "https://github.com/huggingface/datasets/pull/2320",
"diff_url": "https://github.com/huggingface/datasets/pull/2320.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2320.patch",
"merged_at": 1620287874000
} | Set default value for the name of dynamic modules.
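With a default name in place, the dynamic-modules path can be obtained without passing a module name, as sketched in the discussion of #2318 (assumes a `datasets` version that includes this change):
```python
import datasets

# No explicit module name is needed once a default is set
dynamic_modules_path = datasets.load.init_dynamic_modules()
print(dynamic_modules_path)
```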
Close #2318. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2320/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2319/comments | https://api.github.com/repos/huggingface/datasets/issues/2319/events | https://github.com/huggingface/datasets/issues/2319 | 876,251,376 | MDU6SXNzdWU4NzYyNTEzNzY= | 2,319 | UnicodeDecodeError for OSCAR (Afrikaans) | {
"login": "sgraaf",
"id": 8904453,
"node_id": "MDQ6VXNlcjg5MDQ0NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8904453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgraaf",
"html_url": "https://github.com/sgraaf",
"followers_url": "https://api.github.com/users/sgraaf/followers",
"following_url": "https://api.github.com/users/sgraaf/following{/other_user}",
"gists_url": "https://api.github.com/users/sgraaf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgraaf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgraaf/subscriptions",
"organizations_url": "https://api.github.com/users/sgraaf/orgs",
"repos_url": "https://api.github.com/users/sgraaf/repos",
"events_url": "https://api.github.com/users/sgraaf/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgraaf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks for reporting, @sgraaf.\r\n\r\nI am going to have a look at it. \r\n\r\nI guess the expected codec is \"UTF-8\". Normally, when no explicitly codec is passed, Python uses one which is platform-dependent. For Linux machines, the default codec is `utf_8`, which is OK. However for Windows machine, the default codec is `cp1252`, which causes the problem.",
"Awesome, thank you. 😃 ",
"@sgraaf, I have just merged the fix in the master branch.\r\n\r\nYou can either:\r\n- install `datasets` from source code\r\n- wait until we make the next release of `datasets`\r\n- set the `utf-8` codec as your default instead of `cp1252`. This can be done by activating the Python [UTF-8 mode](https://www.python.org/dev/peps/pep-0540) either by passing the command-line option `-X utf8` or by setting the environment variable `PYTHONUTF8=1`."
] | 1,620,206,572,000 | 1,620,212,251,000 | 1,620,211,855,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
```
## Expected results
Anything but an error, really.
## Actual results
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
Downloading: 14.7kB [00:00, 4.91MB/s]
Downloading: 3.07MB [00:00, 32.6MB/s]
Downloading and preparing dataset oscar/unshuffled_deduplicated_af (download: 62.93 MiB, generated: 163.38 MiB, post-processed: Unknown size, total: 226.32 MiB) to C:\Users\sgraaf\.cache\huggingface\datasets\oscar\unshuffled_deduplicated_af\1.0.0\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464...
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81.0/81.0 [00:00<00:00, 40.5kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 66.0M/66.0M [00:18<00:00, 3.50MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\load.py", line 745, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 574, in download_and_prepare
self._download_and_prepare(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 652, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 979, in _prepare_split
for key, record in utils.tqdm(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\tqdm\std.py", line 1133, in __iter__
for obj in iterable:
File "C:\Users\sgraaf\.cache\huggingface\modules\datasets_modules\datasets\oscar\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464\oscar.py", line 359, in _generate_examples
for line in f:
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 7454: character maps to <undefined>
```
## Versions
Paste the output of the following code:
```python
import datasets
import sys
import platform
print(f"""
- Datasets: {datasets.__version__}
- Python: {sys.version}
- Platform: {platform.platform()}
""")
```
- Datasets: 1.6.2
- Python: 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)]
- Platform: Windows-10-10.0.19041-SP0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2319/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2318/comments | https://api.github.com/repos/huggingface/datasets/issues/2318/events | https://github.com/huggingface/datasets/issues/2318 | 876,212,460 | MDU6SXNzdWU4NzYyMTI0NjA= | 2,318 | [api request] API to obtain "dataset_module" dynamic path? | {
"login": "richardliaw",
"id": 4529381,
"node_id": "MDQ6VXNlcjQ1MjkzODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4529381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richardliaw",
"html_url": "https://github.com/richardliaw",
"followers_url": "https://api.github.com/users/richardliaw/followers",
"following_url": "https://api.github.com/users/richardliaw/following{/other_user}",
"gists_url": "https://api.github.com/users/richardliaw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richardliaw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richardliaw/subscriptions",
"organizations_url": "https://api.github.com/users/richardliaw/orgs",
"repos_url": "https://api.github.com/users/richardliaw/repos",
"events_url": "https://api.github.com/users/richardliaw/events{/privacy}",
"received_events_url": "https://api.github.com/users/richardliaw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @richardliaw, \r\n\r\nFirst, thanks for the compliments.\r\n\r\nIn relation with your request, currently, the dynamic modules path is obtained this way:\r\n```python\r\nfrom datasets.load import init_dynamic_modules, MODULE_NAME_FOR_DYNAMIC_MODULES\r\n\r\ndynamic_modules_path = init_dynamic_modules(MODULE_NAME_FOR_DYNAMIC_MODULES)\r\n```\r\n\r\nLet me know if it is OK for you this way. \r\n\r\nI could set `MODULE_NAME_FOR_DYNAMIC_MODULES` as default value, so that you could instead obtain the path with:\r\n```\r\ndynamic_modules_path = datasets.load.init_dynamic_modules()\r\n```",
"Hi @albertvillanova, the default value proposal seems great :) Looking forward to this!",
"I like the idea as well ! thanks @albertvillanova ",
"Hi @richardliaw, the feature is on the master branch and will be included in the next release in a couple of weeks.",
"awesome work @albertvillanova !"
] | 1,620,204,048,000 | 1,620,290,745,000 | 1,620,287,874,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | **Is your feature request related to a problem? Please describe.**
This is an awesome library.
It seems like the dynamic module path in this library has broken some of the hyperparameter tuning functionality: https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34
This is because Ray will spawn new processes, and each process will load modules by path. However, we need to explicitly inform Ray to load the right modules, or else it will error upon import.
I'd like an API to obtain the dynamic paths. This will allow us to support this functionality in this awesome library while being future-proof.
**Describe the solution you'd like**
`datasets.get_dynamic_paths -> List[str]` will be sufficient for my use case.
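For reference, the maintainers' reply in the comments shows the path can already be obtained today; a sketch based on that reply (these names are version-dependent internals of `datasets`):
```python
from datasets.load import init_dynamic_modules, MODULE_NAME_FOR_DYNAMIC_MODULES

# Current way to resolve the dynamic modules path, per the discussion on this issue
dynamic_modules_path = init_dynamic_modules(MODULE_NAME_FOR_DYNAMIC_MODULES)
```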
By offering this API, we will be able to address the following issues (by patching the ray integration sufficiently):
https://github.com/huggingface/blog/issues/106
https://github.com/huggingface/transformers/issues/11565
https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34
https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/35
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2318/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2317/comments | https://api.github.com/repos/huggingface/datasets/issues/2317/events | https://github.com/huggingface/datasets/pull/2317 | 875,767,318 | MDExOlB1bGxSZXF1ZXN0NjMwMDQxNzc4 | 2,317 | Fix incorrect version specification for the pyarrow package | {
"login": "cemilcengiz",
"id": 32267027,
"node_id": "MDQ6VXNlcjMyMjY3MDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/32267027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cemilcengiz",
"html_url": "https://github.com/cemilcengiz",
"followers_url": "https://api.github.com/users/cemilcengiz/followers",
"following_url": "https://api.github.com/users/cemilcengiz/following{/other_user}",
"gists_url": "https://api.github.com/users/cemilcengiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cemilcengiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cemilcengiz/subscriptions",
"organizations_url": "https://api.github.com/users/cemilcengiz/orgs",
"repos_url": "https://api.github.com/users/cemilcengiz/repos",
"events_url": "https://api.github.com/users/cemilcengiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/cemilcengiz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,620,156,620,000 | 1,620,209,356,000 | 1,620,206,518,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2317",
"html_url": "https://github.com/huggingface/datasets/pull/2317",
"diff_url": "https://github.com/huggingface/datasets/pull/2317.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2317.patch",
"merged_at": 1620206518000
} | This PR addresses the bug in the pyarrow version specification, which is detailed in #2316 .
The fix is simple: I put a comma between the version bounds.
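A minimal illustration of the change, using the requirement strings from the linked issue:
```python
# Requirement string in setup.py before the fix: the missing comma makes pip ignore the intended bounds
broken = "pyarrow>=1.0.0<4.0.0"

# After the fix: the comma separates the lower and upper limits
fixed = "pyarrow>=1.0.0,<4.0.0"
```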
Fix #2316. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2317/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2317/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2316/comments | https://api.github.com/repos/huggingface/datasets/issues/2316/events | https://github.com/huggingface/datasets/issues/2316 | 875,756,353 | MDU6SXNzdWU4NzU3NTYzNTM= | 2,316 | Incorrect version specification for pyarrow | {
"login": "cemilcengiz",
"id": 32267027,
"node_id": "MDQ6VXNlcjMyMjY3MDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/32267027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cemilcengiz",
"html_url": "https://github.com/cemilcengiz",
"followers_url": "https://api.github.com/users/cemilcengiz/followers",
"following_url": "https://api.github.com/users/cemilcengiz/following{/other_user}",
"gists_url": "https://api.github.com/users/cemilcengiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cemilcengiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cemilcengiz/subscriptions",
"organizations_url": "https://api.github.com/users/cemilcengiz/orgs",
"repos_url": "https://api.github.com/users/cemilcengiz/repos",
"events_url": "https://api.github.com/users/cemilcengiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/cemilcengiz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Fixed by #2317."
] | 1,620,155,711,000 | 1,620,209,403,000 | 1,620,209,403,000 | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Describe the bug
The pyarrow dependency is incorrectly specified in the setup.py file, in [this line](https://github.com/huggingface/datasets/blob/3a3e5a4da20bfcd75f8b6a6869b240af8feccc12/setup.py#L77).
Also as a snippet:
```python
"pyarrow>=1.0.0<4.0.0",
```
## Steps to reproduce the bug
```bash
pip install "pyarrow>=1.0.0<4.0.0"
```
## Expected results
It is expected to get a pyarrow version between 1.0.0 (inclusive) and 4.0.0 (exclusive).
## Actual results
pip ignores the specified versions since there is a missing comma between the lower and upper limits. Therefore, pip installs the latest pyarrow version from PyPI, which is 4.0.0.
This is especially problematic since "conda env export" fails due to incorrect version specification. Here is the conda error as well:
```bash
conda env export
InvalidVersionSpec: Invalid version '1.0.0<4.0.0': invalid character(s)
```
## Fix suggestion
Put a comma between the version limits which means replacing the line in setup.py file with the following:
```python
"pyarrow>=1.0.0,<4.0.0",
```
## Versions
Paste the output of the following code:
```python
- Datasets: 1.6.2
- Python: 3.7.10 (default, Feb 26 2021, 18:47:35)
[GCC 7.3.0]
- Platform: Linux-5.4.0-42-generic-x86_64-with-debian-buster-sid
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2316/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2316/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2315/comments | https://api.github.com/repos/huggingface/datasets/issues/2315/events | https://github.com/huggingface/datasets/pull/2315 | 875,742,200 | MDExOlB1bGxSZXF1ZXN0NjMwMDIyMDYy | 2,315 | Datasets cli improvements | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Additionally, I've deleted the points that are not very relevant for this repo (I guess the deleted points originate from the transformers repo). With this change, running `datasets-cli` is identical to copy-pasting the code from `bug_report.md`, but is more elegant because it doesn't require launching the REPL and copy-pasting the code. "
] | 1,620,154,511,000 | 1,620,664,611,000 | 1,620,664,610,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2315",
"html_url": "https://github.com/huggingface/datasets/pull/2315",
"diff_url": "https://github.com/huggingface/datasets/pull/2315.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2315.patch",
"merged_at": 1620664610000
} | This PR:
* replaces the code from the `bug_report.md` that was used to get relevant system info with a dedicated command (a more elegant approach than copy-pasting the code IMO)
* removes the `download` command (copied from the transformers repo?)
* adds missing help messages to the cli commands
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2315/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2314/comments | https://api.github.com/repos/huggingface/datasets/issues/2314/events | https://github.com/huggingface/datasets/pull/2314 | 875,729,271 | MDExOlB1bGxSZXF1ZXN0NjMwMDExODc4 | 2,314 | Minor refactor prepare_module | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@lhoestq this is the PR that I mentioned to you, which can be considered as a first step in refactoring `prepare_module`.",
"closing in favor of #2986 "
] | 1,620,153,446,000 | 1,634,116,054,000 | 1,634,116,054,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2314",
"html_url": "https://github.com/huggingface/datasets/pull/2314",
"diff_url": "https://github.com/huggingface/datasets/pull/2314.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2314.patch",
"merged_at": null
} | Start to refactor `prepare_module` to try to decouple functionality.
This PR does:
- extract function `_initialize_dynamic_modules_namespace_package`
- extract function `_find_module_in_github_or_s3`
- some renaming of variables
- use of f-strings | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2314/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2313/comments | https://api.github.com/repos/huggingface/datasets/issues/2313/events | https://github.com/huggingface/datasets/pull/2313 | 875,475,367 | MDExOlB1bGxSZXF1ZXN0NjI5ODEwNTc4 | 2,313 | Remove unused head_hf_s3 function | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,620,135,726,000 | 1,620,379,902,000 | 1,620,379,902,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2313",
"html_url": "https://github.com/huggingface/datasets/pull/2313",
"diff_url": "https://github.com/huggingface/datasets/pull/2313.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2313.patch",
"merged_at": null
} | Currently, the function `head_hf_s3` is not used:
- neither its returned result is used
- nor does it raise any exception, as exceptions are caught and returned (not raised)
This PR removes it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2313/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2312/comments | https://api.github.com/repos/huggingface/datasets/issues/2312/events | https://github.com/huggingface/datasets/pull/2312 | 875,435,726 | MDExOlB1bGxSZXF1ZXN0NjI5Nzc4NjUz | 2,312 | Add rename_columnS method | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Merging then 😄 "
] | 1,620,133,073,000 | 1,620,135,793,000 | 1,620,135,792,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2312",
"html_url": "https://github.com/huggingface/datasets/pull/2312",
"diff_url": "https://github.com/huggingface/datasets/pull/2312.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2312.patch",
"merged_at": 1620135792000
} | Cherry-picked from #2255 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2312/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2311/comments | https://api.github.com/repos/huggingface/datasets/issues/2311/events | https://github.com/huggingface/datasets/pull/2311 | 875,262,208 | MDExOlB1bGxSZXF1ZXN0NjI5NjQwNTMx | 2,311 | Add SLR52, SLR53 and SLR54 to OpenSLR | {
"login": "cahya-wirawan",
"id": 7669893,
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cahya-wirawan",
"html_url": "https://github.com/cahya-wirawan",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @lhoestq , I am not sure about the error message:\r\n```\r\n#!/bin/bash -eo pipefail\r\n./scripts/datasets_metadata_validator.py\r\nWARNING:root:❌ Failed to validate 'datasets/openslr/README.md':\r\n__init__() got an unexpected keyword argument 'SLR32'\r\nINFO:root:❌ Failed on 1 files.\r\n\r\nExited with code exit status 1\r\nCircleCI received exit code 1 \r\n```\r\nCould you have a look please? Thanks.",
"Hi ! The error is unrelated to your PR and has been fixed on master\r\nNext time feel free to merge master into your branch to fix the CI error ;)"
] | 1,620,119,283,000 | 1,620,381,055,000 | 1,620,381,055,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2311",
"html_url": "https://github.com/huggingface/datasets/pull/2311",
"diff_url": "https://github.com/huggingface/datasets/pull/2311.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2311.patch",
"merged_at": 1620381055000
} | Add large speech datasets for Sinhala, Bengali and Nepali. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2311/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2310/comments | https://api.github.com/repos/huggingface/datasets/issues/2310/events | https://github.com/huggingface/datasets/pull/2310 | 875,096,051 | MDExOlB1bGxSZXF1ZXN0NjI5NTEwNTg5 | 2,310 | Update README.md | {
"login": "cryoff",
"id": 15029054,
"node_id": "MDQ6VXNlcjE1MDI5MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/15029054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cryoff",
"html_url": "https://github.com/cryoff",
"followers_url": "https://api.github.com/users/cryoff/followers",
"following_url": "https://api.github.com/users/cryoff/following{/other_user}",
"gists_url": "https://api.github.com/users/cryoff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cryoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cryoff/subscriptions",
"organizations_url": "https://api.github.com/users/cryoff/orgs",
"repos_url": "https://api.github.com/users/cryoff/repos",
"events_url": "https://api.github.com/users/cryoff/events{/privacy}",
"received_events_url": "https://api.github.com/users/cryoff/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @cryoff, thanks for completing the dataset card.\r\n\r\nNow there is an automatic validation tool to assure that all dataset cards contain all the relevant information. This is the cause of the non-passing test on your Pull Request:\r\n```\r\nFound fields that are not non-empty list of strings: {'annotations_creators': [], 'language_creators': []}\r\n```"
] | 1,620,103,081,000 | 1,620,110,159,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2310",
"html_url": "https://github.com/huggingface/datasets/pull/2310",
"diff_url": "https://github.com/huggingface/datasets/pull/2310.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2310.patch",
"merged_at": null
} | Provides description of data instances and dataset features | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2310/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2309/comments | https://api.github.com/repos/huggingface/datasets/issues/2309/events | https://github.com/huggingface/datasets/pull/2309 | 874,644,990 | MDExOlB1bGxSZXF1ZXN0NjI5MTU4NjQx | 2,309 | Fix conda release | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,620,053,579,000 | 1,620,057,677,000 | 1,620,057,677,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2309",
"html_url": "https://github.com/huggingface/datasets/pull/2309",
"diff_url": "https://github.com/huggingface/datasets/pull/2309.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2309.patch",
"merged_at": 1620057677000
} | There were a few issues with conda releases (they've been failing for a while now).
To fix this I had to:
- add the --single-version-externally-managed tag to the build stage (suggestion from [here](https://stackoverflow.com/a/64825075))
- set the python version of the conda build stage to 3.8 since 3.9 isn't supported
- sync the version requirement of `huggingface_hub`
With these changes I'm working on uploading all missing versions until 1.6.2 to conda
EDIT: I managed to build and upload all missing versions until 1.6.2 to conda :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2309/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2308 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2308/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2308/comments | https://api.github.com/repos/huggingface/datasets/issues/2308/events | https://github.com/huggingface/datasets/issues/2308 | 874,559,846 | MDU6SXNzdWU4NzQ1NTk4NDY= | 2,308 | Add COCO evaluation metrics | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @NielsRogge, \r\nI'd like to contribute these metrics to datasets. Let's start with `CocoEvaluator` first? Currently how are are you sending the ground truths and predictions in coco_evaluator?\r\n",
"Great!\r\n\r\nHere's a notebook that illustrates how I'm using `CocoEvaluator`: https://drive.google.com/file/d/1VV92IlaUiuPOORXULIuAdtNbBWCTCnaj/view?usp=sharing\r\n\r\nThe evaluation is near the end of the notebook.\r\n\r\n",
"I went through the code you've [mentioned](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/datasets/coco_eval.py) and I think there are 2 options on how we can go ahead:\r\n\r\n1) Implement how DETR people have done this (they're relying very heavily on the official implementation and they're focussing on torch dataset here. I feel ours should be something generic instead of pytorch specific.\r\n2) Do this [implementation](https://github.com/cocodataset/cocoapi/blob/ed842bffd41f6ff38707c4f0968d2cfd91088688/PythonAPI/pycocoEvalDemo.ipynb) where user can convert its output and ground truth annotation to pre-defined format and then feed it into our function to calculate metrics (looks very similar to you wanted above)\r\n\r\nIn my opinion, 2nd option looks very clean but I'm still figuring out how's it transforming the box co-ordinates of `coco_gt` which you've passed to `CocoEvaluator` (ground truth for evaluation). Since your model output was already converted to COCO api, I faced little problems there.",
"Ok, thanks for the update.\r\n\r\nIndeed, the metrics API of Datasets is framework agnostic, so we can't rely on a PyTorch-only implementation.\r\n\r\n[This file](https://github.com/cocodataset/cocoapi/blob/ed842bffd41f6ff38707c4f0968d2cfd91088688/PythonAPI/pycocotools/cocoeval.py) is probably want we need to implement.\r\n\r\n"
] | 1,620,047,285,000 | 1,622,790,687,000 | null | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in the original repository ([here](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/datasets/coco_eval.py#L22) and [here](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/datasets/panoptic_eval.py#L13) respectively).
Running these in a notebook gives you nice summaries like this:

It would be great if we could import these metrics from the Datasets library, something like this:
```
import datasets

metric = datasets.load_metric('coco')

for model_inputs, gold_references in evaluation_dataset:
    model_predictions = model(model_inputs)
    metric.add_batch(predictions=model_predictions, references=gold_references)

final_score = metric.compute()
```
I think this would be great for object detection and semantic/panoptic segmentation in general, not just for DETR. Reproducing results of object detection papers would be way easier.
However, object detection and panoptic segmentation evaluation is a bit more complex than accuracy (it's more like a summary of metrics at different thresholds rather than a single one). I'm not sure how to proceed here, but I'm happy to help make this possible.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2308/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2308/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2302 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2302/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2302/comments | https://api.github.com/repos/huggingface/datasets/issues/2302/events | https://github.com/huggingface/datasets/pull/2302 | 873,961,435 | MDExOlB1bGxSZXF1ZXN0NjI4NjIzMDQ3 | 2,302 | Add SubjQA dataset | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"I'm not sure why the windows test fails, but looking at the logs it looks like some caching issue on one of the metrics ... maybe re-run and 🤞 ?",
"Hi @lewtun, thanks for adding this dataset!\r\n\r\nIf the dataset is going to be referenced heavily, I think it's worth spending some time to make the dataset card really great :) To start, the information that is currently in the `Data collection` paragraph should probably be organized in the `Dataset Creation` section.\r\n\r\nHere's a link to the [relevant section of the guide](https://github.com/huggingface/datasets/blob/master/templates/README_guide.md#dataset-creation), let me know if you have any questions!",
"> If the dataset is going to be referenced heavily, I think it's worth spending some time to make the dataset card really great :) To start, the information that is currently in the `Data collection` paragraph should probably be organized in the `Dataset Creation` section.\r\n\r\ngreat idea @yjernite! i've added some extra information / moved things as you suggest and will wrap up the rest tomorrow :)",
"hi @yjernite and @lhoestq, i've fleshed out the dataset card and think this is now ready for another round of review!"
] | 1,619,967,080,000 | 1,620,638,479,000 | 1,620,638,479,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2302",
"html_url": "https://github.com/huggingface/datasets/pull/2302",
"diff_url": "https://github.com/huggingface/datasets/pull/2302.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2302.patch",
"merged_at": 1620638479000
} | Hello datasetters 🙂!
Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance).
I found a bug in the start/end indices that I've proposed a fix for here: https://github.com/megagonlabs/SubjQA/pull/2
Unfortunately, the dataset creators are unresponsive, so for now I am using my fork as the source. Will update the URL if/when the creators respond. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2302/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2302/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2301/comments | https://api.github.com/repos/huggingface/datasets/issues/2301/events | https://github.com/huggingface/datasets/issues/2301 | 873,941,266 | MDU6SXNzdWU4NzM5NDEyNjY= | 2,301 | Unable to setup dev env on Windows | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @gchhablani, \r\n\r\nThere are some 3rd-party dependencies that require to build code in C. In this case, it is the library `python-Levenshtein`.\r\n\r\nOn Windows, in order to be able to build C code, you need to install at least `Microsoft C++ Build Tools` version 14. You can find more info here: https://visualstudio.microsoft.com/visual-cpp-build-tools/",
"Hi @albertvillanova \r\n\r\nSorry for such a trivial issue ;-; \r\n\r\nThanks a lot."
] | 1,619,961,642,000 | 1,620,055,081,000 | 1,620,055,054,000 | CONTRIBUTOR | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Hi
I tried installing the `".[dev]"` version on Windows 10 after cloning.
Here is the error I'm facing:
```bat
(env) C:\testing\datasets>pip install -e ".[dev]"
Obtaining file:///C:/testing/datasets
Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.19.5)
Collecting pyarrow>=0.17.1
Using cached pyarrow-4.0.0-cp37-cp37m-win_amd64.whl (13.3 MB)
Requirement already satisfied: dill in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.3.1.1)
Collecting pandas
Using cached pandas-1.2.4-cp37-cp37m-win_amd64.whl (9.1 MB)
Requirement already satisfied: requests>=2.19.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2.25.1)
Requirement already satisfied: tqdm<4.50.0,>=4.27 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.49.0)
Requirement already satisfied: xxhash in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2.0.2)
Collecting multiprocess
Using cached multiprocess-0.70.11.1-py37-none-any.whl (108 kB)
Requirement already satisfied: fsspec in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2021.4.0)
Collecting huggingface_hub<0.1.0
Using cached huggingface_hub-0.0.8-py3-none-any.whl (34 kB)
Requirement already satisfied: importlib_metadata in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.0.1)
Requirement already satisfied: absl-py in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.12.0)
Requirement already satisfied: pytest in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (6.2.3)
Collecting pytest-xdist
Using cached pytest_xdist-2.2.1-py3-none-any.whl (37 kB)
Collecting apache-beam>=2.24.0
Using cached apache_beam-2.29.0-cp37-cp37m-win_amd64.whl (3.7 MB)
Collecting elasticsearch
Using cached elasticsearch-7.12.1-py2.py3-none-any.whl (339 kB)
Requirement already satisfied: boto3==1.16.43 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.16.43)
Requirement already satisfied: botocore==1.19.43 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.19.43)
Collecting moto[s3]==1.3.16
Using cached moto-1.3.16-py2.py3-none-any.whl (879 kB)
Collecting rarfile>=4.0
Using cached rarfile-4.0-py3-none-any.whl (28 kB)
Collecting tensorflow>=2.3
Using cached tensorflow-2.4.1-cp37-cp37m-win_amd64.whl (370.7 MB)
Requirement already satisfied: torch in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.8.1)
Requirement already satisfied: transformers in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.5.1)
Collecting bs4
Using cached bs4-0.0.1-py3-none-any.whl
Collecting conllu
Using cached conllu-4.4-py2.py3-none-any.whl (15 kB)
Collecting langdetect
Using cached langdetect-1.0.8-py3-none-any.whl
Collecting lxml
Using cached lxml-4.6.3-cp37-cp37m-win_amd64.whl (3.5 MB)
Collecting mwparserfromhell
Using cached mwparserfromhell-0.6-cp37-cp37m-win_amd64.whl (101 kB)
Collecting nltk
Using cached nltk-3.6.2-py3-none-any.whl (1.5 MB)
Collecting openpyxl
Using cached openpyxl-3.0.7-py2.py3-none-any.whl (243 kB)
Collecting py7zr
Using cached py7zr-0.15.2-py3-none-any.whl (66 kB)
Collecting tldextract
Using cached tldextract-3.1.0-py2.py3-none-any.whl (87 kB)
Collecting zstandard
Using cached zstandard-0.15.2-cp37-cp37m-win_amd64.whl (582 kB)
Collecting bert_score>=0.3.6
Using cached bert_score-0.3.9-py3-none-any.whl (59 kB)
Collecting rouge_score
Using cached rouge_score-0.0.4-py2.py3-none-any.whl (22 kB)
Collecting sacrebleu
Using cached sacrebleu-1.5.1-py3-none-any.whl (54 kB)
Requirement already satisfied: scipy in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.6.3)
Collecting seqeval
Using cached seqeval-1.2.2-py3-none-any.whl
Collecting sklearn
Using cached sklearn-0.0-py2.py3-none-any.whl
Collecting jiwer
Using cached jiwer-2.2.0-py3-none-any.whl (13 kB)
Requirement already satisfied: toml>=0.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.10.2)
Requirement already satisfied: requests_file>=1.5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.5.1)
Requirement already satisfied: texttable>=1.6.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.6.3)
Requirement already satisfied: s3fs>=0.4.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.4.2)
Requirement already satisfied: Werkzeug>=1.0.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.0.1)
Collecting black
Using cached black-21.4b2-py3-none-any.whl (130 kB)
Collecting isort
Using cached isort-5.8.0-py3-none-any.whl (103 kB)
Collecting flake8==3.7.9
Using cached flake8-3.7.9-py2.py3-none-any.whl (69 kB)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.10.0)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.3.7)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (1.26.4)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (2.8.1)
Collecting entrypoints<0.4.0,>=0.3.0
Using cached entrypoints-0.3-py2.py3-none-any.whl (11 kB)
Collecting pyflakes<2.2.0,>=2.1.0
Using cached pyflakes-2.1.1-py2.py3-none-any.whl (59 kB)
Collecting pycodestyle<2.6.0,>=2.5.0
Using cached pycodestyle-2.5.0-py2.py3-none-any.whl (51 kB)
Collecting mccabe<0.7.0,>=0.6.0
Using cached mccabe-0.6.1-py2.py3-none-any.whl (8.6 kB)
Requirement already satisfied: jsondiff>=1.1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.3.0)
Requirement already satisfied: pytz in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2021.1)
Requirement already satisfied: mock in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.0.3)
Requirement already satisfied: MarkupSafe<2.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.1.1)
Requirement already satisfied: python-jose[cryptography]<4.0.0,>=3.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0)
Requirement already satisfied: aws-xray-sdk!=0.96,>=0.93 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.8.0)
Requirement already satisfied: cryptography>=2.3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.7)
Requirement already satisfied: more-itertools in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (8.7.0)
Requirement already satisfied: PyYAML>=5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.4.1)
Requirement already satisfied: boto>=2.36.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.49.0)
Requirement already satisfied: idna<3,>=2.5 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.10)
Requirement already satisfied: sshpubkeys>=3.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.3.1)
Requirement already satisfied: responses>=0.9.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.13.3)
Requirement already satisfied: xmltodict in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.12.0)
Requirement already satisfied: setuptools in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (52.0.0.post20210125)
Requirement already satisfied: Jinja2>=2.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.11.3)
Requirement already satisfied: zipp in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.1)
Requirement already satisfied: six>1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.15.0)
Requirement already satisfied: ecdsa<0.15 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.14.1)
Requirement already satisfied: docker>=2.5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.0.0)
Requirement already satisfied: cfn-lint>=0.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.49.0)
Requirement already satisfied: grpcio<2,>=1.29.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (1.32.0)
Collecting hdfs<3.0.0,>=2.1.0
Using cached hdfs-2.6.0-py3-none-any.whl (33 kB)
Collecting pyarrow>=0.17.1
Using cached pyarrow-3.0.0-cp37-cp37m-win_amd64.whl (12.6 MB)
Collecting fastavro<2,>=0.21.4
Using cached fastavro-1.4.0-cp37-cp37m-win_amd64.whl (394 kB)
Requirement already satisfied: httplib2<0.18.0,>=0.8 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.17.4)
Collecting pymongo<4.0.0,>=3.8.0
Using cached pymongo-3.11.3-cp37-cp37m-win_amd64.whl (382 kB)
Collecting crcmod<2.0,>=1.7
Using cached crcmod-1.7-py3-none-any.whl
Collecting avro-python3!=1.9.2,<1.10.0,>=1.8.1
Using cached avro_python3-1.9.2.1-py3-none-any.whl
Requirement already satisfied: typing-extensions<3.8.0,>=3.7.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.7.4.3)
Requirement already satisfied: future<1.0.0,>=0.18.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.18.2)
Collecting oauth2client<5,>=2.0.1
Using cached oauth2client-4.1.3-py2.py3-none-any.whl (98 kB)
Collecting pydot<2,>=1.2.0
Using cached pydot-1.4.2-py2.py3-none-any.whl (21 kB)
Requirement already satisfied: protobuf<4,>=3.12.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.15.8)
Requirement already satisfied: wrapt in c:\programdata\anaconda3\envs\env\lib\site-packages (from aws-xray-sdk!=0.96,>=0.93->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.12.1)
Collecting matplotlib
Using cached matplotlib-3.4.1-cp37-cp37m-win_amd64.whl (7.1 MB)
Requirement already satisfied: junit-xml~=1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.9)
Requirement already satisfied: jsonpatch in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.32)
Requirement already satisfied: jsonschema~=3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0)
Requirement already satisfied: networkx~=2.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.5.1)
Requirement already satisfied: aws-sam-translator>=1.35.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.35.0)
Requirement already satisfied: cffi>=1.12 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.14.5)
Requirement already satisfied: pycparser in c:\programdata\anaconda3\envs\env\lib\site-packages (from cffi>=1.12->cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.20)
Requirement already satisfied: pywin32==227 in c:\programdata\anaconda3\envs\env\lib\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (227)
Requirement already satisfied: websocket-client>=0.32.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.58.0)
Requirement already satisfied: docopt in c:\programdata\anaconda3\envs\env\lib\site-packages (from hdfs<3.0.0,>=2.1.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.6.2)
Requirement already satisfied: filelock in c:\programdata\anaconda3\envs\env\lib\site-packages (from huggingface_hub<0.1.0->datasets==1.5.0.dev0) (3.0.12)
Requirement already satisfied: pyrsistent>=0.14.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.17.3)
Requirement already satisfied: attrs>=17.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (20.3.0)
Requirement already satisfied: decorator<5,>=4.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from networkx~=2.4->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.4.2)
Requirement already satisfied: rsa>=3.1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (4.7.2)
Requirement already satisfied: pyasn1-modules>=0.0.5 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.2.8)
Requirement already satisfied: pyasn1>=0.1.7 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.4.8)
Requirement already satisfied: pyparsing>=2.1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pydot<2,>=1.2.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (2.4.7)
Requirement already satisfied: certifi>=2017.4.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (2020.12.5)
Requirement already satisfied: chardet<5,>=3.0.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (4.0.0)
Collecting keras-preprocessing~=1.1.2
Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
Requirement already satisfied: termcolor~=1.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (1.1.0)
Requirement already satisfied: tensorboard~=2.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.5.0)
Requirement already satisfied: wheel~=0.35 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (0.36.2)
Collecting opt-einsum~=3.3.0
Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB)
Collecting gast==0.3.3
Using cached gast-0.3.3-py2.py3-none-any.whl (9.7 kB)
Collecting google-pasta~=0.2
Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)
Requirement already satisfied: tensorflow-estimator<2.5.0,>=2.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.4.0)
Collecting astunparse~=1.6.3
Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting flatbuffers~=1.12.0
Using cached flatbuffers-1.12-py2.py3-none-any.whl (15 kB)
Collecting h5py~=2.10.0
Using cached h5py-2.10.0-cp37-cp37m-win_amd64.whl (2.5 MB)
Requirement already satisfied: markdown>=2.6.8 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.3.4)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.8.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.4.4)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.6.0)
Requirement already satisfied: google-auth<2,>=1.6.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.30.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (4.2.2)
Requirement already satisfied: requests-oauthlib>=0.7.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.3.0)
Requirement already satisfied: oauthlib>=3.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.1.0)
Requirement already satisfied: regex!=2019.12.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (2021.4.4)
Requirement already satisfied: tokenizers<0.11,>=0.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (0.10.2)
Requirement already satisfied: sacremoses in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (0.0.45)
Requirement already satisfied: packaging in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (20.9)
Collecting pathspec<1,>=0.8.1
Using cached pathspec-0.8.1-py2.py3-none-any.whl (28 kB)
Requirement already satisfied: click>=7.1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from black->datasets==1.5.0.dev0) (7.1.2)
Collecting appdirs
Using cached appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB)
Collecting mypy-extensions>=0.4.3
Using cached mypy_extensions-0.4.3-py2.py3-none-any.whl (4.5 kB)
Requirement already satisfied: typed-ast>=1.4.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from black->datasets==1.5.0.dev0) (1.4.3)
Collecting beautifulsoup4
Using cached beautifulsoup4-4.9.3-py3-none-any.whl (115 kB)
Requirement already satisfied: soupsieve>1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from beautifulsoup4->bs4->datasets==1.5.0.dev0) (2.2.1)
Collecting python-Levenshtein
Using cached python-Levenshtein-0.12.2.tar.gz (50 kB)
Requirement already satisfied: jsonpointer>=1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonpatch->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.1)
Requirement already satisfied: pillow>=6.2.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (8.2.0)
Requirement already satisfied: cycler>=0.10 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (1.3.1)
Collecting multiprocess
Using cached multiprocess-0.70.11-py3-none-any.whl (98 kB)
Using cached multiprocess-0.70.10.zip (2.4 MB)
Using cached multiprocess-0.70.9-py3-none-any.whl
Requirement already satisfied: joblib in c:\programdata\anaconda3\envs\env\lib\site-packages (from nltk->datasets==1.5.0.dev0) (1.0.1)
Collecting et-xmlfile
Using cached et_xmlfile-1.1.0-py3-none-any.whl (4.7 kB)
Requirement already satisfied: pyzstd<0.15.0,>=0.14.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from py7zr->datasets==1.5.0.dev0) (0.14.4)
Collecting pyppmd<0.13.0,>=0.12.1
Using cached pyppmd-0.12.1-cp37-cp37m-win_amd64.whl (32 kB)
Collecting pycryptodome>=3.6.6
Using cached pycryptodome-3.10.1-cp35-abi3-win_amd64.whl (1.6 MB)
Collecting bcj-cffi<0.6.0,>=0.5.1
Using cached bcj_cffi-0.5.1-cp37-cp37m-win_amd64.whl (21 kB)
Collecting multivolumefile<0.3.0,>=0.2.0
Using cached multivolumefile-0.2.3-py3-none-any.whl (17 kB)
Requirement already satisfied: iniconfig in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.1.1)
Requirement already satisfied: py>=1.8.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.10.0)
Requirement already satisfied: pluggy<1.0.0a1,>=0.12 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (0.13.1)
Requirement already satisfied: atomicwrites>=1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.4.0)
Requirement already satisfied: colorama in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (0.4.4)
Collecting pytest-forked
Using cached pytest_forked-1.3.0-py2.py3-none-any.whl (4.7 kB)
Collecting execnet>=1.1
Using cached execnet-1.8.0-py2.py3-none-any.whl (39 kB)
Requirement already satisfied: apipkg>=1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from execnet>=1.1->pytest-xdist->datasets==1.5.0.dev0) (1.5)
Collecting portalocker==2.0.0
Using cached portalocker-2.0.0-py2.py3-none-any.whl (11 kB)
Requirement already satisfied: scikit-learn>=0.21.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from seqeval->datasets==1.5.0.dev0) (0.24.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from scikit-learn>=0.21.3->seqeval->datasets==1.5.0.dev0) (2.1.0)
Building wheels for collected packages: python-Levenshtein
Building wheel for python-Levenshtein (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\VKC~1\AppData\Local\Temp\pip-wheel-8jh7fm18'
cwd: C:\Users\VKC~1\AppData\Local\Temp\pip-install-ynt_dbm4\python-levenshtein_c02e7e6f9def4629a475349654670ae9\
Complete output (27 lines):
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\StringMatcher.py -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\__init__.py -> build\lib.win-amd64-3.7\Levenshtein
running egg_info
writing python_Levenshtein.egg-info\PKG-INFO
writing dependency_links to python_Levenshtein.egg-info\dependency_links.txt
writing entry points to python_Levenshtein.egg-info\entry_points.txt
writing namespace_packages to python_Levenshtein.egg-info\namespace_packages.txt
writing requirements to python_Levenshtein.egg-info\requires.txt
writing top-level names to python_Levenshtein.egg-info\top_level.txt
reading manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '*pyc' found anywhere in distribution
warning: no previously-included files matching '*so' found anywhere in distribution
warning: no previously-included files matching '.project' found anywhere in distribution
warning: no previously-included files matching '.pydevproject' found anywhere in distribution
writing manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
copying Levenshtein\_levenshtein.c -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\_levenshtein.h -> build\lib.win-amd64-3.7\Levenshtein
running build_ext
building 'Levenshtein._levenshtein' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Failed building wheel for python-Levenshtein
Running setup.py clean for python-Levenshtein
Failed to build python-Levenshtein
Installing collected packages: python-Levenshtein, pytest-forked, pyppmd, pymongo, pyflakes, pydot, pycryptodome, pycodestyle, pyarrow, portalocker, pathspec, pandas, opt-einsum, oauth2client, nltk, mypy-extensions, multivolumefile, multiprocess, moto, mccabe, matplotlib, keras-preprocessing, huggingface-hub, hdfs, h5py, google-pasta, gast, flatbuffers, fastavro, execnet, et-xmlfile, entrypoints, crcmod, beautifulsoup4, bcj-cffi, avro-python3, astunparse, appdirs, zstandard, tldextract, tensorflow, sklearn, seqeval, sacrebleu, rouge-score, rarfile, pytest-xdist, py7zr, openpyxl, mwparserfromhell, lxml, langdetect, jiwer, isort, flake8, elasticsearch, datasets, conllu, bs4, black, bert-score, apache-beam
Running setup.py install for python-Levenshtein ... error
ERROR: Command errored out with exit status 1:
command: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\VKC~1\AppData\Local\Temp\pip-record-v7l7zitb\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\envs\env\Include\python-Levenshtein'
cwd: C:\Users\VKC~1\AppData\Local\Temp\pip-install-ynt_dbm4\python-levenshtein_c02e7e6f9def4629a475349654670ae9\
Complete output (27 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\StringMatcher.py -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\__init__.py -> build\lib.win-amd64-3.7\Levenshtein
running egg_info
writing python_Levenshtein.egg-info\PKG-INFO
writing dependency_links to python_Levenshtein.egg-info\dependency_links.txt
writing entry points to python_Levenshtein.egg-info\entry_points.txt
writing namespace_packages to python_Levenshtein.egg-info\namespace_packages.txt
writing requirements to python_Levenshtein.egg-info\requires.txt
writing top-level names to python_Levenshtein.egg-info\top_level.txt
reading manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '*pyc' found anywhere in distribution
warning: no previously-included files matching '*so' found anywhere in distribution
warning: no previously-included files matching '.project' found anywhere in distribution
warning: no previously-included files matching '.pydevproject' found anywhere in distribution
writing manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
copying Levenshtein\_levenshtein.c -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\_levenshtein.h -> build\lib.win-amd64-3.7\Levenshtein
running build_ext
building 'Levenshtein._levenshtein' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\VKC~1\AppData\Local\Temp\pip-record-v7l7zitb\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\envs\env\Include\python-Levenshtein' Check the logs for full command output.
```
Here are conda and python versions:
```bat
(env) C:\testing\datasets>conda --version
conda 4.9.2
(env) C:\testing\datasets>python --version
Python 3.7.10
```
Please help me out. Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2301/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2300 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2300/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2300/comments | https://api.github.com/repos/huggingface/datasets/issues/2300/events | https://github.com/huggingface/datasets/issues/2300 | 873,928,169 | MDU6SXNzdWU4NzM5MjgxNjk= | 2,300 | Add VoxPopuli | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | open | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"I'm happy to take this on:) One question: The original unlabelled data is stored unsegmented (see e.g. https://github.com/facebookresearch/voxpopuli/blob/main/voxpopuli/get_unlabelled_data.py#L30), but segmenting the audio in the dataset would require a dependency on something like soundfile or torchaudio. An alternative could be to provide the segments start and end times as a Sequence and then it's up to the user to perform the segmentation on-the-fly if they wish?",
"Hey @jfainberg,\r\n\r\nThis sounds great! I think adding a dependency would not be a big problem, however automatically segmenting the data probably means that it would take a very long time to do:\r\n\r\n```python\r\ndataset = load_dataset(\"voxpopuli\", \"french\")\r\n```\r\n\r\n=> so as a start I think your option 2 is the way to go!"
] | 1,619,957,860,000 | 1,645,038,817,000 | null | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Adding a Dataset
- **Name:** VoxPopuli
- **Description:** VoxPopuli is a large-scale multilingual speech corpus; its raw data is collected from 2009-2020 European Parliament event recordings
- **Paper:** https://arxiv.org/abs/2101.00390
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** biggest unlabeled speech dataset
**Note**: Since the dataset is so huge, we should only add the config `10k` in the beginning.
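Once a loading script exists, using the suggested `10k` config would presumably follow the usual `load_dataset` pattern (purely illustrative — the loader name and config are assumptions based on the note above):
```python
from datasets import load_dataset

# Hypothetical call: assumes a "voxpopuli" loading script exposing a "10k" config, as proposed above.
voxpopuli_10k = load_dataset("voxpopuli", "10k", split="train")
print(voxpopuli_10k)
```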
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2300/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2300/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2299/comments | https://api.github.com/repos/huggingface/datasets/issues/2299/events | https://github.com/huggingface/datasets/issues/2299 | 873,914,717 | MDU6SXNzdWU4NzM5MTQ3MTc= | 2,299 | My iPhone | {
"login": "Jasonbuchanan1983",
"id": 82856229,
"node_id": "MDQ6VXNlcjgyODU2MjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/82856229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jasonbuchanan1983",
"html_url": "https://github.com/Jasonbuchanan1983",
"followers_url": "https://api.github.com/users/Jasonbuchanan1983/followers",
"following_url": "https://api.github.com/users/Jasonbuchanan1983/following{/other_user}",
"gists_url": "https://api.github.com/users/Jasonbuchanan1983/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jasonbuchanan1983/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jasonbuchanan1983/subscriptions",
"organizations_url": "https://api.github.com/users/Jasonbuchanan1983/orgs",
"repos_url": "https://api.github.com/users/Jasonbuchanan1983/repos",
"events_url": "https://api.github.com/users/Jasonbuchanan1983/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jasonbuchanan1983/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,953,871,000 | 1,627,032,256,000 | 1,620,029,858,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2299/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2298 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2298/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2298/comments | https://api.github.com/repos/huggingface/datasets/issues/2298/events | https://github.com/huggingface/datasets/pull/2298 | 873,771,942 | MDExOlB1bGxSZXF1ZXN0NjI4NDk2NjM2 | 2,298 | Mapping in the distributed setting | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,904,185,000 | 1,620,050,093,000 | 1,620,050,093,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2298",
"html_url": "https://github.com/huggingface/datasets/pull/2298",
"diff_url": "https://github.com/huggingface/datasets/pull/2298.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2298.patch",
"merged_at": 1620050093000
} | The barrier trick for distributed mapping as discussed on Thursday with @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2298/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2296 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2296/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2296/comments | https://api.github.com/repos/huggingface/datasets/issues/2296/events | https://github.com/huggingface/datasets/issues/2296 | 872,974,907 | MDU6SXNzdWU4NzI5NzQ5MDc= | 2,296 | 1 | {
"login": "zinnyi",
"id": 82880142,
"node_id": "MDQ6VXNlcjgyODgwMTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/82880142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zinnyi",
"html_url": "https://github.com/zinnyi",
"followers_url": "https://api.github.com/users/zinnyi/followers",
"following_url": "https://api.github.com/users/zinnyi/following{/other_user}",
"gists_url": "https://api.github.com/users/zinnyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zinnyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zinnyi/subscriptions",
"organizations_url": "https://api.github.com/users/zinnyi/orgs",
"repos_url": "https://api.github.com/users/zinnyi/repos",
"events_url": "https://api.github.com/users/zinnyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/zinnyi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,805,229,000 | 1,620,029,851,000 | 1,620,029,851,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2296/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2295/comments | https://api.github.com/repos/huggingface/datasets/issues/2295/events | https://github.com/huggingface/datasets/pull/2295 | 872,902,867 | MDExOlB1bGxSZXF1ZXN0NjI3NzY0NDk3 | 2,295 | Create ExtractManager | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2851292821,
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring",
"name": "refactoring",
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"id": 6836458,
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"title": "1.10",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 29,
"state": "closed",
"created_at": 1623178113000,
"updated_at": 1626881809000,
"due_on": 1628146800000,
"closed_at": 1626881809000
} | [
"Hi @lhoestq,\r\n\r\nOnce that #2578 has been merged, I would like to ask you to have a look at this PR: it implements the same logic as the one in #2578 but for all the other file compression formats.\r\n\r\nThanks.",
"I think all is done @lhoestq ;)"
] | 1,619,802,814,000 | 1,626,099,123,000 | 1,625,731,909,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2295",
"html_url": "https://github.com/huggingface/datasets/pull/2295",
"diff_url": "https://github.com/huggingface/datasets/pull/2295.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2295.patch",
"merged_at": 1625731909000
} | Perform refactoring to decouple extract functionality. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2295/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2295/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2294/comments | https://api.github.com/repos/huggingface/datasets/issues/2294/events | https://github.com/huggingface/datasets/issues/2294 | 872,136,075 | MDU6SXNzdWU4NzIxMzYwNzU= | 2,294 | Slow #0 when using map to tokenize. | {
"login": "VerdureChen",
"id": 31714566,
"node_id": "MDQ6VXNlcjMxNzE0NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/31714566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VerdureChen",
"html_url": "https://github.com/VerdureChen",
"followers_url": "https://api.github.com/users/VerdureChen/followers",
"following_url": "https://api.github.com/users/VerdureChen/following{/other_user}",
"gists_url": "https://api.github.com/users/VerdureChen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VerdureChen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VerdureChen/subscriptions",
"organizations_url": "https://api.github.com/users/VerdureChen/orgs",
"repos_url": "https://api.github.com/users/VerdureChen/repos",
"events_url": "https://api.github.com/users/VerdureChen/events{/privacy}",
"received_events_url": "https://api.github.com/users/VerdureChen/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi ! Have you tried other values for `preprocessing_num_workers` ? Is it always process 0 that is slower ?\r\nThere are no difference between process 0 and the others except that it processes the first shard of the dataset.",
"Hi, I have found the reason of it. Before using the map function to tokenize the data, I concatenate the wikipedia and bookcorpus first, like this:\r\n```if args.dataset_name1 is not None:\r\n dataset1 = load_dataset(args.dataset_name1, args.dataset_config_name1, split=\"train\")\r\n dataset1 = dataset1.remove_columns('title')\r\n if args.dataset_name2 is not None:\r\n dataset2 = load_dataset(args.dataset_name2, args.dataset_config_name2,split=\"train\")\r\n assert dataset1.features.type == dataset2.features.type, str(dataset1.features.type)+';'+str(dataset2.features.type)\r\n datasets12 = concatenate_datasets([dataset1, dataset2], split='train')\r\n```\r\nWhen I just use one datasets, e.g. wikipedia, the problem seems no longer exist:\r\n\r\n\r\nBookcorpus has more row numbers than Wikipedia, however, it takes much more time to process each batch of wiki than that of bookcorpus. When we first concatenate two datasets and then use _map_ to process the concatenated datasets, e.g. `num_proc=5`, process 0 has to process all of the wikipedia data, causing the problem that #0 takes a longer time to finish the job. \r\n\r\nThe problem is caused by the different characteristic of different datasets. One solution might be using _map_ first to process two datasets seperately, then concatenate the tokenized and processed datasets before input to the `Dataloader`.\r\n\r\n",
"That makes sense ! You can indeed use `map` on both datasets separately and then concatenate.\r\nAnother option is to concatenate, then shuffle, and then `map`."
] | 1,619,769,633,000 | 1,620,126,011,000 | null | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not args.overwrite_cache,
)` to tokenize by multiprocessing. However, I have found that when `num_proc` > 1, the process _#0_ is much slower than the others.
It looks like this:

It takes more than 12 hours for #0, while the others finish in about half an hour. Could anyone tell me whether this is normal, and are there any methods to speed it up?
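For reference, a sketch of the workaround discussed in the comments — tokenizing each corpus separately and concatenating only afterwards, so that no single worker ends up with all of the slower-to-process Wikipedia examples — might look like this (illustrative only; the exact corpus configs and tokenizer are assumptions, not taken from the original script):
```python
from datasets import load_dataset, concatenate_datasets
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_function(examples):
    return tokenizer(examples["text"], truncation=True)

# Load the two corpora separately (the Wikipedia config is an assumption for illustration).
wiki = load_dataset("wikipedia", "20200501.en", split="train").remove_columns("title")
books = load_dataset("bookcorpus", split="train")

# Tokenize each corpus on its own so every worker processes shards with a similar per-example cost.
wiki_tok = wiki.map(tokenize_function, batched=True, num_proc=5, remove_columns=wiki.column_names)
books_tok = books.map(tokenize_function, batched=True, num_proc=5, remove_columns=books.column_names)

# Only concatenate after tokenization (an alternative is: concatenate, shuffle, then map).
tokenized_datasets = concatenate_datasets([wiki_tok, books_tok])
```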
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2294/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2293 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2293/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2293/comments | https://api.github.com/repos/huggingface/datasets/issues/2293/events | https://github.com/huggingface/datasets/pull/2293 | 872,079,385 | MDExOlB1bGxSZXF1ZXN0NjI3MDQzNzQ3 | 2,293 | imdb dataset from Don't Stop Pretraining Paper | {
"login": "BobbyManion",
"id": 52530809,
"node_id": "MDQ6VXNlcjUyNTMwODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BobbyManion",
"html_url": "https://github.com/BobbyManion",
"followers_url": "https://api.github.com/users/BobbyManion/followers",
"following_url": "https://api.github.com/users/BobbyManion/following{/other_user}",
"gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions",
"organizations_url": "https://api.github.com/users/BobbyManion/orgs",
"repos_url": "https://api.github.com/users/BobbyManion/repos",
"events_url": "https://api.github.com/users/BobbyManion/events{/privacy}",
"received_events_url": "https://api.github.com/users/BobbyManion/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,764,848,000 | 1,619,765,665,000 | 1,619,765,665,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2293",
"html_url": "https://github.com/huggingface/datasets/pull/2293",
"diff_url": "https://github.com/huggingface/datasets/pull/2293.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2293.patch",
"merged_at": null
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2293/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2292/comments | https://api.github.com/repos/huggingface/datasets/issues/2292/events | https://github.com/huggingface/datasets/pull/2292 | 871,230,183 | MDExOlB1bGxSZXF1ZXN0NjI2MjgzNTYy | 2,292 | Fixed typo seperate->separate | {
"login": "laksh9950",
"id": 32505743,
"node_id": "MDQ6VXNlcjMyNTA1NzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/32505743?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laksh9950",
"html_url": "https://github.com/laksh9950",
"followers_url": "https://api.github.com/users/laksh9950/followers",
"following_url": "https://api.github.com/users/laksh9950/following{/other_user}",
"gists_url": "https://api.github.com/users/laksh9950/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laksh9950/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laksh9950/subscriptions",
"organizations_url": "https://api.github.com/users/laksh9950/orgs",
"repos_url": "https://api.github.com/users/laksh9950/repos",
"events_url": "https://api.github.com/users/laksh9950/events{/privacy}",
"received_events_url": "https://api.github.com/users/laksh9950/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,714,453,000 | 1,619,789,358,000 | 1,619,787,792,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2292",
"html_url": "https://github.com/huggingface/datasets/pull/2292",
"diff_url": "https://github.com/huggingface/datasets/pull/2292.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2292.patch",
"merged_at": 1619787792000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2292/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2291/comments | https://api.github.com/repos/huggingface/datasets/issues/2291/events | https://github.com/huggingface/datasets/pull/2291 | 871,216,757 | MDExOlB1bGxSZXF1ZXN0NjI2MjcyNzE5 | 2,291 | Don't copy recordbatches in memory during a table deepcopy | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,713,565,000 | 1,619,714,075,000 | 1,619,714,074,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2291",
"html_url": "https://github.com/huggingface/datasets/pull/2291",
"diff_url": "https://github.com/huggingface/datasets/pull/2291.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2291.patch",
"merged_at": 1619714073000
} | Fix issue #2276 and hopefully #2134
The recordbatches of the `IndexedTableMixin` used to speed up queries to the table were copied in memory during a table deepcopy.
This resulted in `concatenate_datasets`, `load_from_disk` and other methods always bringing the data into memory.
I fixed the copy similarly to #2287 and updated the test to make sure it doesn't happen again (added a test for deepcopy + make sure that the immutable arrow objects are passed to the copied table without being copied).
The issue was not caught by our tests because the total allocated bytes value in PyArrow isn't updated when deepcopying record batches, so the copy in memory wasn't detected. This behavior looks like a bug in PyArrow; I'll open a ticket on JIRA.
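For context, a rough way to check this kind of regression by hand (a sketch, not the test added in this PR; the dataset paths are placeholders) is to watch the process RSS around `concatenate_datasets` on memory-mapped datasets:
```python
import os
import psutil
from datasets import load_from_disk, concatenate_datasets

def rss_mb() -> float:
    # Resident set size of the current process, in megabytes.
    return psutil.Process(os.getpid()).memory_info().rss / 1024 ** 2

# Placeholder paths: assumes two datasets previously written with Dataset.save_to_disk().
part1 = load_from_disk("path/to/part1")
part2 = load_from_disk("path/to/part2")

before = rss_mb()
combined = concatenate_datasets([part1, part2])
after = rss_mb()

# With this fix the memory-mapped record batches are no longer copied, so RSS should stay roughly flat.
print(f"RSS before: {before:.1f} MB, after: {after:.1f} MB")
```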
Thanks @samsontmr , @TaskManager91 and @mariosasko for the help
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2291/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2291/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2290/comments | https://api.github.com/repos/huggingface/datasets/issues/2290/events | https://github.com/huggingface/datasets/pull/2290 | 871,145,817 | MDExOlB1bGxSZXF1ZXN0NjI2MjEyNTIz | 2,290 | Bbaw egyptian | {
"login": "phiwi",
"id": 54144149,
"node_id": "MDQ6VXNlcjU0MTQ0MTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/54144149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phiwi",
"html_url": "https://github.com/phiwi",
"followers_url": "https://api.github.com/users/phiwi/followers",
"following_url": "https://api.github.com/users/phiwi/following{/other_user}",
"gists_url": "https://api.github.com/users/phiwi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phiwi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phiwi/subscriptions",
"organizations_url": "https://api.github.com/users/phiwi/orgs",
"repos_url": "https://api.github.com/users/phiwi/repos",
"events_url": "https://api.github.com/users/phiwi/events{/privacy}",
"received_events_url": "https://api.github.com/users/phiwi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @phiwi,\r\n\r\nThanks for contributing this nice dataset. If you have any blocking problem or question, do not hesitate to ask here. We are pleased to help you.\r\n\r\nCould you please first synchronize with our master branch? From your branch `bbaw_egyptian`, type:\r\n```\r\ngit fetch upstream master\r\ngit merge upstream/master\r\n```",
"Thanks ! Can you check that you have `black==21.4b0` and run `make style` again ? This should fix the \"check_code_quality\" CI issue",
"Reformatted with black.",
"Hi @phiwi, there are still some minor problems in relation with the tags you used in the dataset card (README.md).\r\n\r\nHere you can find the output of the metadata validator:\r\n```\r\nWARNING:root:❌ Failed to validate 'datasets/bbaw_egyptian/README.md':\r\nCould not validate the metada, found the following errors:\r\n* field 'size_categories':\r\n\t['100K<n<1000K'] are not registered tags for 'size_categories', reference at https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources/size_categories.json\r\n* field 'task_ids':\r\n\t['machine translation'] are not registered tags for 'task_ids', reference at https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources/tasks.json\r\n* field 'languages':\r\n\t['eg'] are not registered tags for 'languages', reference at https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources/languages.json\r\n\r\n``` ",
"@albertvillanova corrected :-)",
"Thanks, @phiwi. Now all tests should pass green.\r\n\r\nHowever, I think there is still an issue with the language code:\r\n- the code for the Ancient Egyptian is not `ar-EG`\r\n- there is no ISO 639-1 code for the Ancient Egyptian\r\n- there is an ISO 639-2 code: `egy`; but this code will not pass the validation test because it is not in the list of valid codes\r\n\r\nI am not sure what to do in this case... Maybe @lhoestq has an idea? Maybe adding the code to the list? https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/languages.json",
"I have just checked that in the [list of valid codes](https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/languages.json) there are already ISO 639-2 codes. Therefore, I would suggest you to add it to the list:\r\n```\r\n\"egy\": \"Egyptian (Ancient)\",\r\n```\r\nand change it in the dataset card.",
"Done.",
"Hope, everything is okay right now."
] | 1,619,710,078,000 | 1,620,321,925,000 | 1,620,321,925,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2290",
"html_url": "https://github.com/huggingface/datasets/pull/2290",
"diff_url": "https://github.com/huggingface/datasets/pull/2290.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2290.patch",
"merged_at": 1620321925000
} | This is the "hieroglyph corpus" that I unfortunately could not contribute during the marathon. I have now re-extracted it, so that it is in the state used in my paper (see the documentation). I hope it satisfies your requirements, and I wish every scientist out there loads of fun deciphering a 5,000-year-old language :-) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2290/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2290/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2289 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2289/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2289/comments | https://api.github.com/repos/huggingface/datasets/issues/2289/events | https://github.com/huggingface/datasets/pull/2289 | 871,118,573 | MDExOlB1bGxSZXF1ZXN0NjI2MTg5MDU3 | 2,289 | Allow collaborators to self-assign issues | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"What do you think, @lhoestq? 😉 \r\n\r\nI think this could be another step to facilitate community contributions.",
"@lhoestq, it doesn't exist in `transformers`... I picked the idea from `scikit-learn`, where I have previously collaborated...\r\n\r\nAnd sure, this must be documented! I just wanted first to know your opinion..."
] | 1,619,708,826,000 | 1,619,807,296,000 | 1,619,807,296,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2289",
"html_url": "https://github.com/huggingface/datasets/pull/2289",
"diff_url": "https://github.com/huggingface/datasets/pull/2289.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2289.patch",
"merged_at": 1619807296000
} | Allow collaborators (without write access to the repository) to self-assign issues.
In order to self-assign an issue, they have to comment it with the word: `#take` or `#self-assign`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2289/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2288/comments | https://api.github.com/repos/huggingface/datasets/issues/2288/events | https://github.com/huggingface/datasets/issues/2288 | 871,111,235 | MDU6SXNzdWU4NzExMTEyMzU= | 2,288 | Load_dataset for local CSV files | {
"login": "sstojanoska",
"id": 17052700,
"node_id": "MDQ6VXNlcjE3MDUyNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/17052700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sstojanoska",
"html_url": "https://github.com/sstojanoska",
"followers_url": "https://api.github.com/users/sstojanoska/followers",
"following_url": "https://api.github.com/users/sstojanoska/following{/other_user}",
"gists_url": "https://api.github.com/users/sstojanoska/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sstojanoska/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sstojanoska/subscriptions",
"organizations_url": "https://api.github.com/users/sstojanoska/orgs",
"repos_url": "https://api.github.com/users/sstojanoska/repos",
"events_url": "https://api.github.com/users/sstojanoska/events{/privacy}",
"received_events_url": "https://api.github.com/users/sstojanoska/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi,\r\n\r\nthis is not a standard CSV file (requires additional preprocessing) so I wouldn't label this as s bug. You could parse the examples with the regex module or the string API to extract the data, but the following approach is probably the easiest (once you load the data):\r\n```python\r\nimport ast\r\n# load the dataset and copy the features\r\ndef process(ex):\r\n return {\"tokens\": ast.literal_eval(ex[\"tokens\"]), \"labels\": ast.literal_eval(ex[\"labels\"])}\r\ndataset = dataset.map(process, features=new_features)\r\n```\r\n",
"Hi,\r\n\r\nThanks for the reply.\r\nI have already used ```ast.literal_eval``` to evaluate the string into list, but I was getting another error:\r\n```\r\nArrowInvalid: Could not convert X with type str: tried to convert to int\r\n```\r\nWhy this happens ? Should labels be mapped to their ids and use int instead of str ?",
"Yes, just map the labels to their ids."
] | 1,619,708,470,000 | 1,623,764,966,000 | 1,623,764,966,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | The method load_dataset fails to correctly load a dataset from csv.
Moreover, I am working on a token-classification task (POS tagging), where each row in my CSV contains two columns, each of them holding a list of strings.
row example:
```
tokens | labels
['I' , 'am', 'John'] | ['PRON', 'AUX', 'PROPN' ]
```
The method loads each list as a string (e.g. "['I' , 'am', 'John']").
To solve this issue, I copied the dataset's `Features`, created `Sequence` types (instead of `Value`) and tried to cast the features type:
```
new_features['tokens'] = Sequence(feature=Value(dtype='string', id=None))
new_features['labels'] = Sequence(feature=ClassLabel(num_classes=len(tag2idx), names=list(unique_tags)))
dataset = dataset.cast(new_features)
```
but I got the following error
```
ArrowNotImplementedError: Unsupported cast from string to list using function cast_list
```
Moreover, I tried to set the `features` parameter in the `load_dataset` method to my `new_features`, but this fails as well. (A workaround along the lines suggested in the comments is sketched below.)
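One workaround, following the suggestions in the comments, is to keep the columns as plain strings at load time and parse them in a `map` call while passing the target features (a sketch; the CSV path is a placeholder, and `unique_tags` is assumed to exist as in the snippet above):
```python
import ast
from datasets import load_dataset, Features, Sequence, Value, ClassLabel

# Placeholder path: load the CSV with the list-valued columns still stored as strings.
dataset = load_dataset("csv", data_files="pos_data.csv", split="train")

label_feature = ClassLabel(num_classes=len(unique_tags), names=list(unique_tags))
new_features = Features(
    {"tokens": Sequence(Value("string")), "labels": Sequence(label_feature)}
)

def parse_row(example):
    return {
        "tokens": ast.literal_eval(example["tokens"]),
        # Map the string tags to their integer ids so they fit the ClassLabel feature.
        "labels": [label_feature.str2int(tag) for tag in ast.literal_eval(example["labels"])],
    }

# Passing features= lets map() write the parsed columns with the desired Sequence/ClassLabel types.
dataset = dataset.map(parse_row, features=new_features)
```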
How can this be solved? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2288/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2287/comments | https://api.github.com/repos/huggingface/datasets/issues/2287/events | https://github.com/huggingface/datasets/pull/2287 | 871,063,374 | MDExOlB1bGxSZXF1ZXN0NjI2MTQ0MTQ3 | 2,287 | Avoid copying table's record batches | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks for fixing it. I actually included a similar fix in #2291 along with some updates in tests\r\nI'm closing this one in favor of #2291 if you don't mind.\r\n\r\nThanks again !"
] | 1,619,705,701,000 | 1,619,714,063,000 | 1,619,714,062,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2287",
"html_url": "https://github.com/huggingface/datasets/pull/2287",
"diff_url": "https://github.com/huggingface/datasets/pull/2287.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2287.patch",
"merged_at": null
} | Fixes #2276 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2287/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2286 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2286/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2286/comments | https://api.github.com/repos/huggingface/datasets/issues/2286/events | https://github.com/huggingface/datasets/pull/2286 | 871,032,393 | MDExOlB1bGxSZXF1ZXN0NjI2MTE5MTE2 | 2,286 | Fix metadata validation with config names | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,703,872,000 | 1,619,705,249,000 | 1,619,705,248,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2286",
"html_url": "https://github.com/huggingface/datasets/pull/2286",
"diff_url": "https://github.com/huggingface/datasets/pull/2286.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2286.patch",
"merged_at": 1619705248000
} | I noticed in https://github.com/huggingface/datasets/pull/2280 that the metadata validator doesn't parse the tags in the readme properly when they contain the tags per config. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2286/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2285/comments | https://api.github.com/repos/huggingface/datasets/issues/2285/events | https://github.com/huggingface/datasets/issues/2285 | 871,005,236 | MDU6SXNzdWU4NzEwMDUyMzY= | 2,285 | Help understanding how to build a dataset for language modeling as with the old TextDataset | {
"login": "danieldiezmallo",
"id": 46021411,
"node_id": "MDQ6VXNlcjQ2MDIxNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/46021411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danieldiezmallo",
"html_url": "https://github.com/danieldiezmallo",
"followers_url": "https://api.github.com/users/danieldiezmallo/followers",
"following_url": "https://api.github.com/users/danieldiezmallo/following{/other_user}",
"gists_url": "https://api.github.com/users/danieldiezmallo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danieldiezmallo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danieldiezmallo/subscriptions",
"organizations_url": "https://api.github.com/users/danieldiezmallo/orgs",
"repos_url": "https://api.github.com/users/danieldiezmallo/repos",
"events_url": "https://api.github.com/users/danieldiezmallo/events{/privacy}",
"received_events_url": "https://api.github.com/users/danieldiezmallo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"\r\nI received an answer for this question on the HuggingFace Datasets forum by @lhoestq\r\n\r\nHi !\r\n\r\nIf you want to tokenize line by line, you can use this:\r\n\r\n```\r\nmax_seq_length = 512\r\nnum_proc = 4\r\n\r\ndef tokenize_function(examples):\r\n# Remove empty lines\r\nexamples[\"text\"] = [line for line in examples[\"text\"] if len(line) > 0 and not line.isspace()]\r\nreturn tokenizer(\r\n examples[\"text\"],\r\n truncation=True,\r\n max_length=max_seq_length,\r\n)\r\n\r\ntokenized_dataset = dataset.map(\r\ntokenize_function,\r\nbatched=True,\r\nnum_proc=num_proc,\r\nremove_columns=[\"text\"],\r\n)\r\n```\r\n\r\nThough the TextDataset was doing a different processing by concatenating all the texts and building blocks of size 512. If you need this behavior, then you must apply an additional map function after the tokenization:\r\n\r\n```\r\n# Main data processing function that will concatenate all texts from\r\n# our dataset and generate chunks of max_seq_length.\r\ndef group_texts(examples):\r\n# Concatenate all texts.\r\nconcatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\ntotal_length = len(concatenated_examples[list(examples.keys())[0]])\r\n# We drop the small remainder, we could add padding if the model supported it instead of this drop,\r\n# you can customize this part to your needs.\r\ntotal_length = (total_length // max_seq_length) * max_seq_length\r\n# Split by chunks of max_len.\r\nresult = {\r\n k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]\r\n for k, t in concatenated_examples.items()\r\n}\r\nreturn result\r\n\r\n# Note that with `batched=True`, this map processes 1,000 texts together,\r\n# so group_texts throws away a remainder for each of those groups of 1,000 texts.\r\n# You can adjust that batch_size here but a higher value might be slower to preprocess.\r\n\r\ntokenized_dataset = tokenized_dataset.map(\r\ngroup_texts,\r\nbatched=True,\r\nnum_proc=num_proc,\r\n)\r\n```\r\n\r\nThis code comes from the processing of the run_mlm.py example script of transformers\r\n\r\n",
"Resolved"
] | 1,619,702,205,000 | 1,621,408,965,000 | 1,621,408,959,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Hello,
I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line exceeds the normal 512-token limit of most tokenizers.
I would like to understand how to build a text dataset that tokenizes each line, after first splitting the documents into chunks of a "tokenizable" size, as the old TextDataset class used to do. With TextDataset you only had to do the following, and a tokenized dataset without text loss was ready to pass to a DataCollator:
```
model_checkpoint = 'distilbert-base-uncased'
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
from transformers import TextDataset
dataset = TextDataset(
tokenizer=tokenizer,
file_path="path/to/text_file.txt",
block_size=512,
)
```
For now, what I have is the following, which, of course, throws an error because each line is longer than the maximum block size in the tokenizer:
```
import datasets
from transformers import AutoTokenizer

# Load the raw text file with the generic "text" loader (one example per line)
dataset = datasets.load_dataset('text', data_files='path/to/text_file.txt')
model_checkpoint = 'distilbert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
def tokenize_function(examples):
return tokenizer(examples["text"])
tokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
tokenized_datasets
```
So what would be the "standard" way of creating a dataset in the way it was done before?
Thank you very much for the help :)) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2285/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2284 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2284/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2284/comments | https://api.github.com/repos/huggingface/datasets/issues/2284/events | https://github.com/huggingface/datasets/pull/2284 | 870,932,710 | MDExOlB1bGxSZXF1ZXN0NjI2MDM5MDc5 | 2,284 | Initialize Imdb dataset as used in Don't Stop Pretraining Paper | {
"login": "BobbyManion",
"id": 52530809,
"node_id": "MDQ6VXNlcjUyNTMwODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BobbyManion",
"html_url": "https://github.com/BobbyManion",
"followers_url": "https://api.github.com/users/BobbyManion/followers",
"following_url": "https://api.github.com/users/BobbyManion/following{/other_user}",
"gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions",
"organizations_url": "https://api.github.com/users/BobbyManion/orgs",
"repos_url": "https://api.github.com/users/BobbyManion/repos",
"events_url": "https://api.github.com/users/BobbyManion/events{/privacy}",
"received_events_url": "https://api.github.com/users/BobbyManion/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,697,158,000 | 1,619,700,874,000 | 1,619,700,874,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2284",
"html_url": "https://github.com/huggingface/datasets/pull/2284",
"diff_url": "https://github.com/huggingface/datasets/pull/2284.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2284.patch",
"merged_at": null
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2284/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2283/comments | https://api.github.com/repos/huggingface/datasets/issues/2283/events | https://github.com/huggingface/datasets/pull/2283 | 870,926,475 | MDExOlB1bGxSZXF1ZXN0NjI2MDM0MDk5 | 2,283 | Initialize imdb dataset from don't stop pretraining paper | {
"login": "BobbyManion",
"id": 52530809,
"node_id": "MDQ6VXNlcjUyNTMwODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BobbyManion",
"html_url": "https://github.com/BobbyManion",
"followers_url": "https://api.github.com/users/BobbyManion/followers",
"following_url": "https://api.github.com/users/BobbyManion/following{/other_user}",
"gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions",
"organizations_url": "https://api.github.com/users/BobbyManion/orgs",
"repos_url": "https://api.github.com/users/BobbyManion/repos",
"events_url": "https://api.github.com/users/BobbyManion/events{/privacy}",
"received_events_url": "https://api.github.com/users/BobbyManion/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,696,694,000 | 1,619,697,024,000 | 1,619,697,024,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2283",
"html_url": "https://github.com/huggingface/datasets/pull/2283",
"diff_url": "https://github.com/huggingface/datasets/pull/2283.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2283.patch",
"merged_at": null
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2283/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2282/comments | https://api.github.com/repos/huggingface/datasets/issues/2282/events | https://github.com/huggingface/datasets/pull/2282 | 870,900,332 | MDExOlB1bGxSZXF1ZXN0NjI2MDEyMzM3 | 2,282 | Initialize imdb dataset from don't stop pretraining paper | {
"login": "BobbyManion",
"id": 52530809,
"node_id": "MDQ6VXNlcjUyNTMwODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BobbyManion",
"html_url": "https://github.com/BobbyManion",
"followers_url": "https://api.github.com/users/BobbyManion/followers",
"following_url": "https://api.github.com/users/BobbyManion/following{/other_user}",
"gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions",
"organizations_url": "https://api.github.com/users/BobbyManion/orgs",
"repos_url": "https://api.github.com/users/BobbyManion/repos",
"events_url": "https://api.github.com/users/BobbyManion/events{/privacy}",
"received_events_url": "https://api.github.com/users/BobbyManion/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,695,076,000 | 1,619,696,631,000 | 1,619,696,631,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2282",
"html_url": "https://github.com/huggingface/datasets/pull/2282",
"diff_url": "https://github.com/huggingface/datasets/pull/2282.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2282.patch",
"merged_at": null
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2282/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2281/comments | https://api.github.com/repos/huggingface/datasets/issues/2281/events | https://github.com/huggingface/datasets/pull/2281 | 870,792,784 | MDExOlB1bGxSZXF1ZXN0NjI1OTI2MjAw | 2,281 | Update multi_woz_v22 checksum | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,687,351,000 | 1,619,703,695,000 | 1,619,703,694,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2281",
"html_url": "https://github.com/huggingface/datasets/pull/2281",
"diff_url": "https://github.com/huggingface/datasets/pull/2281.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2281.patch",
"merged_at": 1619703694000
} | Fix issue https://github.com/huggingface/datasets/issues/1876
The files were changed in https://github.com/budzianowski/multiwoz/pull/72 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2281/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2280/comments | https://api.github.com/repos/huggingface/datasets/issues/2280/events | https://github.com/huggingface/datasets/pull/2280 | 870,780,431 | MDExOlB1bGxSZXF1ZXN0NjI1OTE2Mzcy | 2,280 | Fixed typo seperate->separate | {
"login": "laksh9950",
"id": 32505743,
"node_id": "MDQ6VXNlcjMyNTA1NzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/32505743?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laksh9950",
"html_url": "https://github.com/laksh9950",
"followers_url": "https://api.github.com/users/laksh9950/followers",
"following_url": "https://api.github.com/users/laksh9950/following{/other_user}",
"gists_url": "https://api.github.com/users/laksh9950/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laksh9950/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laksh9950/subscriptions",
"organizations_url": "https://api.github.com/users/laksh9950/orgs",
"repos_url": "https://api.github.com/users/laksh9950/repos",
"events_url": "https://api.github.com/users/laksh9950/events{/privacy}",
"received_events_url": "https://api.github.com/users/laksh9950/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi ! Thanks for the fix :)\r\nThe CI fail isn't related to your PR. I opened a PR #2286 to fix the CI.\r\nWe'll wait for #2286 to be merged to master first if you don't mind",
"The PR has been merged ! Feel free to merge master into your branch to fix the CI"
] | 1,619,686,546,000 | 1,619,714,482,000 | 1,619,714,476,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2280",
"html_url": "https://github.com/huggingface/datasets/pull/2280",
"diff_url": "https://github.com/huggingface/datasets/pull/2280.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2280.patch",
"merged_at": null
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2280/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2279/comments | https://api.github.com/repos/huggingface/datasets/issues/2279/events | https://github.com/huggingface/datasets/issues/2279 | 870,431,662 | MDU6SXNzdWU4NzA0MzE2NjI= | 2,279 | Compatibility with Ubuntu 18 and GLIBC 2.27? | {
"login": "tginart",
"id": 11379648,
"node_id": "MDQ6VXNlcjExMzc5NjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11379648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tginart",
"html_url": "https://github.com/tginart",
"followers_url": "https://api.github.com/users/tginart/followers",
"following_url": "https://api.github.com/users/tginart/following{/other_user}",
"gists_url": "https://api.github.com/users/tginart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tginart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tginart/subscriptions",
"organizations_url": "https://api.github.com/users/tginart/orgs",
"repos_url": "https://api.github.com/users/tginart/repos",
"events_url": "https://api.github.com/users/tginart/events{/privacy}",
"received_events_url": "https://api.github.com/users/tginart/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"From the trace this seems like an error in the tokenizer library instead.\r\n\r\nDo you mind opening an issue at https://github.com/huggingface/tokenizers instead?",
"Hi @tginart, thanks for reporting.\r\n\r\nI think this issue is already open at `tokenizers` library: https://github.com/huggingface/tokenizers/issues/685"
] | 1,619,647,687,000 | 1,619,682,162,000 | 1,619,682,162,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Describe the bug
For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-29-or-higher-in-ubuntu-18-04).
I'm not sure if there is anything that can be done about this, but I'd like to confirm that using huggingface/datasets requires either an upgrade to Ubuntu 19/20 or a hand-rolled install of a higher version of GLIBC.
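For reference, this is how I am checking which GLIBC my Python interpreter sees (standard library only, nothing specific to `datasets`):
```python
import platform

# e.g. ('glibc', '2.27') on Ubuntu 18.04
print(platform.libc_ver())
```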
## Steps to reproduce the bug
1. clone the transformers repo
2. move to examples/pytorch/language-modeling
3. run example command:
```
python run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm
```
## Expected results
As described in the transformers repo.
## Actual results
```
Traceback (most recent call last):
File "run_clm.py", line 34, in <module>
from transformers import (
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/__init__.py", line 2487, in __getattr__
return super().__getattr__(name)
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/file_utils.py", line 1699, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/__init__.py", line 2481, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/__init__.py", line 19, in <module>
from . import (
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/layoutlm/__init__.py", line 23, in <module>
from .tokenization_layoutlm import LayoutLMTokenizer
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/layoutlm/tokenization_layoutlm.py", line 19, in <module>
from ..bert.tokenization_bert import BertTokenizer
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/bert/tokenization_bert.py", line 23, in <module>
from ...tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 26, in <module>
from .tokenization_utils_base import (
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 68, in <module>
from tokenizers import AddedToken
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/tokenizers/__init__.py", line 79, in <module>
from .tokenizers import (
ImportError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/tokenizers/tokenizers.cpython-37m-x86_64-linux-gnu.so)
```
## Versions
Paste the output of the following code:
```
- Datasets: 1.6.1
- Python: 3.7.10 (default, Feb 26 2021, 18:47:35)
[GCC 7.3.0]
- Platform: Linux-4.15.0-128-generic-x86_64-with-debian-buster-sid
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2279/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2278/comments | https://api.github.com/repos/huggingface/datasets/issues/2278/events | https://github.com/huggingface/datasets/issues/2278 | 870,088,059 | MDU6SXNzdWU4NzAwODgwNTk= | 2,278 | Loss result inGptNeoForCasual | {
"login": "Yossillamm",
"id": 51174606,
"node_id": "MDQ6VXNlcjUxMTc0NjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/51174606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yossillamm",
"html_url": "https://github.com/Yossillamm",
"followers_url": "https://api.github.com/users/Yossillamm/followers",
"following_url": "https://api.github.com/users/Yossillamm/following{/other_user}",
"gists_url": "https://api.github.com/users/Yossillamm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yossillamm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yossillamm/subscriptions",
"organizations_url": "https://api.github.com/users/Yossillamm/orgs",
"repos_url": "https://api.github.com/users/Yossillamm/repos",
"events_url": "https://api.github.com/users/Yossillamm/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yossillamm/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi ! I think you might have to ask on the `transformers` repo on or the forum at https://discuss.huggingface.co/\r\n\r\nClosing since it's not related to this library"
] | 1,619,624,392,000 | 1,620,317,663,000 | 1,620,317,663,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Is there any way to get the "loss" and "logits" results from the GPT-Neo API? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2278/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2277 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2277/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2277/comments | https://api.github.com/repos/huggingface/datasets/issues/2277/events | https://github.com/huggingface/datasets/pull/2277 | 870,071,994 | MDExOlB1bGxSZXF1ZXN0NjI1MzI5NjIz | 2,277 | Create CacheManager | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2851292821,
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring",
"name": "refactoring",
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/8",
"html_url": "https://github.com/huggingface/datasets/milestone/8",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels",
"id": 6968069,
"node_id": "MI_kwDODunzps4AalMF",
"number": 8,
"title": "1.12",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 4,
"closed_issues": 2,
"state": "open",
"created_at": 1626881696000,
"updated_at": 1634120793000,
"due_on": 1630306800000,
"closed_at": null
} | [] | 1,619,623,422,000 | 1,630,560,811,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2277",
"html_url": "https://github.com/huggingface/datasets/pull/2277",
"diff_url": "https://github.com/huggingface/datasets/pull/2277.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2277.patch",
"merged_at": null
} | Perform refactoring to decouple cache functionality (method `as_dataset`). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2277/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2276/comments | https://api.github.com/repos/huggingface/datasets/issues/2276/events | https://github.com/huggingface/datasets/issues/2276 | 870,010,511 | MDU6SXNzdWU4NzAwMTA1MTE= | 2,276 | concatenate_datasets loads all the data into memory | {
"login": "TaskManager91",
"id": 7063207,
"node_id": "MDQ6VXNlcjcwNjMyMDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7063207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TaskManager91",
"html_url": "https://github.com/TaskManager91",
"followers_url": "https://api.github.com/users/TaskManager91/followers",
"following_url": "https://api.github.com/users/TaskManager91/following{/other_user}",
"gists_url": "https://api.github.com/users/TaskManager91/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TaskManager91/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TaskManager91/subscriptions",
"organizations_url": "https://api.github.com/users/TaskManager91/orgs",
"repos_url": "https://api.github.com/users/TaskManager91/repos",
"events_url": "https://api.github.com/users/TaskManager91/events{/privacy}",
"received_events_url": "https://api.github.com/users/TaskManager91/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Therefore, when I try to concatenate larger datasets (5x 35GB data sets) I also get an out of memory error, since over 90GB of swap space was used at the time of the crash:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nMemoryError Traceback (most recent call last)\r\n<ipython-input-6-9766d77530b9> in <module>\r\n 20 print(file_name)\r\n 21 cv_batch = load_from_disk(file_name)\r\n---> 22 cv_sampled_train = concatenate_datasets([cv_sampled_train, cv_batch])\r\n 23 \r\n 24 print(\"Saving to disk!\")\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\arrow_dataset.py in concatenate_datasets(dsets, info, split, axis)\r\n 2891 \r\n 2892 # Concatenate tables\r\n-> 2893 table = concat_tables([dset._data for dset in dsets if len(dset._data) > 0], axis=axis)\r\n 2894 table = update_metadata_with_features(table, None)\r\n 2895 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in concat_tables(tables, axis)\r\n 837 if len(tables) == 1:\r\n 838 return tables[0]\r\n--> 839 return ConcatenationTable.from_tables(tables, axis=axis)\r\n 840 \r\n 841 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in from_tables(cls, tables, axis)\r\n 697 return result\r\n 698 \r\n--> 699 blocks = to_blocks(tables[0])\r\n 700 for table in tables[1:]:\r\n 701 table_blocks = to_blocks(table)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in to_blocks(table)\r\n 669 return [[InMemoryTable(table)]]\r\n 670 elif isinstance(table, ConcatenationTable):\r\n--> 671 return copy.deepcopy(table.blocks)\r\n 672 else:\r\n 673 return [[table]]\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 151 copier = getattr(x, \"__deepcopy__\", None)\r\n 152 if copier is not None:\r\n--> 153 y = copier(memo)\r\n 154 else:\r\n 155 reductor = dispatch_table.get(cls)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in __deepcopy__(self, memo)\r\n 143 # by adding it to the memo, self.table won't be copied\r\n 144 memo[id(self.table)] = self.table\r\n--> 145 return _deepcopy(self, memo)\r\n 146 \r\n 147 def __getstate__(self):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in _deepcopy(x, memo)\r\n 62 memo[id(x)] = result\r\n 63 for k, v in x.__dict__.items():\r\n---> 64 setattr(result, k, copy.deepcopy(v, memo))\r\n 65 return result\r\n 66 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if 
issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 170 y = x\r\n 171 else:\r\n--> 172 y = _reconstruct(x, memo, *rv)\r\n 173 \r\n 174 # If is its own copy, don't memoize.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 262 if deep and args:\r\n 263 args = (deepcopy(arg, memo) for arg in args)\r\n--> 264 y = func(*args)\r\n 265 if deep:\r\n 266 memo[id(x)] = y\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <genexpr>(.0)\r\n 261 deep = memo is not None\r\n 262 if deep and args:\r\n--> 263 args = (deepcopy(arg, memo) for arg in args)\r\n 264 y = func(*args)\r\n 265 if deep:\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 170 y = x\r\n 171 else:\r\n--> 172 y = _reconstruct(x, memo, *rv)\r\n 173 \r\n 174 # If is its own copy, don't memoize.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 262 if deep and args:\r\n 263 args = (deepcopy(arg, memo) for arg in args)\r\n--> 264 y = func(*args)\r\n 265 if deep:\r\n 266 memo[id(x)] = y\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <genexpr>(.0)\r\n 261 deep = memo is not None\r\n 262 if deep and args:\r\n--> 263 args = (deepcopy(arg, memo) for arg in args)\r\n 264 y = func(*args)\r\n 265 if deep:\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_tuple(x, memo, deepcopy)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <listcomp>(.0)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = 
_deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_tuple(x, memo, deepcopy)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in <listcomp>(.0)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 159 reductor = getattr(x, \"__reduce_ex__\", None)\r\n 160 if reductor is not None:\r\n--> 161 rv = reductor(4)\r\n 162 else:\r\n 163 reductor = getattr(x, \"__reduce__\", None)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\pyarrow\\io.pxi in pyarrow.lib.Buffer.__reduce_ex__()\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\pyarrow\\io.pxi in pyarrow.lib.Buffer.to_pybytes()\r\n\r\nMemoryError: \r\n\r\n```",
"Hi ! this looks like an important issue. Let me try to reproduce this.\r\nCc @samsontmr this might be related to the memory issue you have in #2134 ",
"@lhoestq Just went to open a similar issue.\r\n\r\nIt seems like deep copying (tested on master) the dataset object writes the table's record batches (`dset._data._batches`) into RAM.\r\n\r\nTo find the bug, I modified the `_deepcopy` function in `table.py` as follows:\r\n```python\r\ndef _deepcopy(x, memo: dict):\r\n \"\"\"deepcopy a regular class instance\"\"\"\r\n import psutil # pip install this package\r\n import time\r\n cls = x.__class__\r\n result = cls.__new__(cls)\r\n memo[id(x)] = result\r\n for k, v in x.__dict__.items():\r\n print(\"=\"* 50)\r\n print(\"Current memory:\", psutil.virtual_memory().percent)\r\n print(f\"Saving object {k} with value {v}\")\r\n setattr(result, k, copy.deepcopy(v, memo))\r\n time.sleep(5)\r\n print(\"Memory after copy:\", psutil.virtual_memory().percent)\r\n return result\r\n```\r\nTest script:\r\n```python\r\nimport copy\r\nfrom datasets import load_dataset\r\nbk = load_dataset(\"bookcorpus\", split=\"train\")\r\nbk_copy = copy.deepcopy(bk)\r\n```",
"Thanks for the insights @mariosasko ! I'm working on a fix.\r\nSince this is a big issue I'll make a patch release as soon as this is fixed",
"Hi @samsontmr @TaskManager91 the fix is on the master branch, feel free to install `datasets` from source and let us know if you still have issues",
"We just released `datasets` 1.6.2 that includes the fix :)",
"thanks it works like a charm! :)"
] | 1,619,620,041,000 | 1,620,031,315,000 | 1,620,031,315,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Describe the bug
When I try to concatenate 2 datasets (10 GB each), all the data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.

## Steps to reproduce the bug
```python
from datasets import concatenate_datasets, load_from_disk
test_sampled_pro = load_from_disk("test_sampled_pro")
val_sampled_pro = load_from_disk("val_sampled_pro")
big_set = concatenate_datasets([test_sampled_pro, val_sampled_pro])
# Loaded to memory
big_set.save_to_disk("big_set")
# Loaded to memory
big_set = concatenate_datasets([big_set, val_sampled_pro])
```
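To quantify this I also watched the resident memory of the process around those calls (a rough sketch that reuses the variables from the snippet above and needs `psutil` installed):
```python
import psutil

proc = psutil.Process()
print("RSS before:", proc.memory_info().rss // 2**20, "MiB")
big_set = concatenate_datasets([test_sampled_pro, val_sampled_pro])
big_set.save_to_disk("big_set")
print("RSS after:", proc.memory_info().rss // 2**20, "MiB")
```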
## Expected results
The data should be loaded into memory in batches and then saved directly to disk.
## Actual results
The entire dataset is loaded into memory and then saved to the hard disk.
## Versions
Paste the output of the following code:
```python
- Datasets: 1.6.1
- Python: 3.8.8 (default, Apr 13 2021, 19:58:26)
[GCC 7.3.0]
- Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2276/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2275/comments | https://api.github.com/repos/huggingface/datasets/issues/2275/events | https://github.com/huggingface/datasets/issues/2275 | 869,378,311 | MDU6SXNzdWU4NjkzNzgzMTE= | 2,275 | SNLI dataset has labels of -1 | {
"login": "puzzler10",
"id": 17426779,
"node_id": "MDQ6VXNlcjE3NDI2Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/17426779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/puzzler10",
"html_url": "https://github.com/puzzler10",
"followers_url": "https://api.github.com/users/puzzler10/followers",
"following_url": "https://api.github.com/users/puzzler10/following{/other_user}",
"gists_url": "https://api.github.com/users/puzzler10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/puzzler10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/puzzler10/subscriptions",
"organizations_url": "https://api.github.com/users/puzzler10/orgs",
"repos_url": "https://api.github.com/users/puzzler10/repos",
"events_url": "https://api.github.com/users/puzzler10/events{/privacy}",
"received_events_url": "https://api.github.com/users/puzzler10/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi @puzzler10, \r\nThose examples where `gold_label` field was empty, -1 label was alloted to it. In order to remove it you can filter the samples from train/val/test splits. Here's how you can drop those rows from the dataset:\r\n`dataset = load_dataset(\"snli\")`\r\n`dataset_test_filter = dataset['test'].filter(lambda example: example['label'] != -1)`\r\n\r\nI agree it should have been mentioned in the documentation. I'll raise a PR regarding the same. Thanks for pointing out!"
] | 1,619,569,945,000 | 1,621,258,458,000 | 1,621,258,458,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https://nlp.stanford.edu/projects/snli/) and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset viewer. As examples, see index 107 or 124 of the test set.
It isn't clear what these labels mean. I found a [line of code](https://github.com/huggingface/datasets/blob/80e59ef178d3bb2090d091bc32315c655eb0633d/datasets/snli/snli.py#L94) that seems to introduce them, but it is still unclear why they are there. The current workaround is to just drop those rows before training any model.
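For example, following the workaround suggested in the comments, the unlabeled rows can be filtered out per split:
```python
# Sketch of the workaround: drop examples whose gold label is missing (encoded as -1)
from datasets import load_dataset

snli = load_dataset("snli")
for split in snli:
    snli[split] = snli[split].filter(lambda example: example["label"] != -1)
```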
Perhaps the documentation should be updated. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2275/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2274/comments | https://api.github.com/repos/huggingface/datasets/issues/2274/events | https://github.com/huggingface/datasets/pull/2274 | 869,186,276 | MDExOlB1bGxSZXF1ZXN0NjI0NTkyMjQx | 2,274 | Always update metadata in arrow schema | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,551,317,000 | 1,619,690,271,000 | 1,619,690,270,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2274",
"html_url": "https://github.com/huggingface/datasets/pull/2274",
"diff_url": "https://github.com/huggingface/datasets/pull/2274.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2274.patch",
"merged_at": 1619690270000
} | We store a redundant copy of the features in the metadata of the schema of the arrow table. This is used to recover the features when doing `Dataset.from_file`. These metadata are updated after each transform that changes the feature types.
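As a rough pyarrow-only illustration of the idea (the `"huggingface"` metadata key and the JSON layout in this sketch are assumptions, not necessarily the library's exact serialization format):
```python
# Illustrative sketch: keep a serialized copy of the feature types in the schema metadata
import json
import pyarrow as pa

table = pa.table({"text": ["a", "b"], "label": [0, 1]})
feature_info = {"text": {"_type": "Value", "dtype": "string"},
                "label": {"_type": "Value", "dtype": "int64"}}
table = table.replace_schema_metadata({"huggingface": json.dumps({"info": {"features": feature_info}})})

# after reloading the table from a file, the feature types can be recovered from the metadata alone
recovered = json.loads(table.schema.metadata[b"huggingface"])["info"]["features"]
```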
For each function that transforms the feature types of the dataset, I added a step in the tests to make sure the metadata in the arrow schema are up to date.
I also added a line to update the metadata directly in the Dataset.__init__ method.
This way even a dataset instantiated with __init__ will have a table with the right metadata.
cc @mariosasko | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2274/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2273/comments | https://api.github.com/repos/huggingface/datasets/issues/2273/events | https://github.com/huggingface/datasets/pull/2273 | 869,046,290 | MDExOlB1bGxSZXF1ZXN0NjI0NDcxODc1 | 2,273 | Added CUAD metrics | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,542,152,000 | 1,619,704,787,000 | 1,619,704,787,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2273",
"html_url": "https://github.com/huggingface/datasets/pull/2273",
"diff_url": "https://github.com/huggingface/datasets/pull/2273.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2273.patch",
"merged_at": 1619704787000
} | `EM`, `F1`, `AUPR`, `Precision@80%Recall`, and `Precision@90%Recall` metrics supported for CUAD | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2273/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2273/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2272/comments | https://api.github.com/repos/huggingface/datasets/issues/2272/events | https://github.com/huggingface/datasets/issues/2272 | 869,017,977 | MDU6SXNzdWU4NjkwMTc5Nzc= | 2,272 | Bug in Dataset.class_encode_column | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"This has been fixed in this commit: https://github.com/huggingface/datasets/pull/2254/commits/88676c930216cd4cc31741b99827b477d2b46cb6\r\n\r\nIt was introduced in #2246 : using map with `input_columns` doesn't return the other columns anymore"
] | 1,619,539,998,000 | 1,619,787,267,000 | 1,619,787,267,000 | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Describe the bug
All the columns except the one passed to `Dataset.class_encode_column` are discarded.
## Expected results
All the original columns should be kept.
This needs regression tests.
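A minimal regression sketch (toy data, illustrative column names) could look like:
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": ["pos", "neg"]})
ds = ds.class_encode_column("label")
print(ds.column_names)  # expected: ['text', 'label'] -- 'text' must not be dropped
```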
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2272/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2271/comments | https://api.github.com/repos/huggingface/datasets/issues/2271/events | https://github.com/huggingface/datasets/issues/2271 | 869,002,141 | MDU6SXNzdWU4NjkwMDIxNDE= | 2,271 | Synchronize table metadata with features | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"See PR #2274 "
] | 1,619,538,913,000 | 1,619,614,105,000 | null | MEMBER | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | **Is your feature request related to a problem? Please describe.**
As pointed out in this [comment](https://github.com/huggingface/datasets/pull/2145#discussion_r621326767):
> Metadata stored in the schema is just a redundant information regarding the feature types.
It is used when calling Dataset.from_file to know which feature types to use.
These metadata are stored in the schema of the pyarrow table by using `update_metadata_with_features`.
However, this is something that's almost never tested properly.
**Describe the solution you'd like**
We should find a way to always make sure that the metadata (in `self.data.schema.metadata`) are synced with the actual feature types (in `self.info.features`). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2271/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2270 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2270/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2270/comments | https://api.github.com/repos/huggingface/datasets/issues/2270/events | https://github.com/huggingface/datasets/pull/2270 | 868,913,660 | MDExOlB1bGxSZXF1ZXN0NjI0MzU5Njky | 2,270 | Fix iterable interface expected by numpy | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"It's been fixed in this commit: https://github.com/huggingface/datasets/commit/549110e08238b3716a5904667095fb003acda54e\r\n\r\nBasically #2246 broke querying an index with a simple iterable.\r\nWith the fix, it's again possible to use iterables and we can keep RandIter as it is.\r\n\r\nClosing since the fix is already on master"
] | 1,619,534,156,000 | 1,619,631,567,000 | 1,619,631,567,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2270",
"html_url": "https://github.com/huggingface/datasets/pull/2270",
"diff_url": "https://github.com/huggingface/datasets/pull/2270.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2270.patch",
"merged_at": null
} | Numpy expects the old iterable interface with `__getitem__` instead of `__iter__`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2270/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2269 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2269/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2269/comments | https://api.github.com/repos/huggingface/datasets/issues/2269/events | https://github.com/huggingface/datasets/pull/2269 | 868,878,468 | MDExOlB1bGxSZXF1ZXN0NjI0MzMwNDA3 | 2,269 | Fix query table with iterable | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,531,978,000 | 1,619,533,317,000 | 1,619,533,316,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2269",
"html_url": "https://github.com/huggingface/datasets/pull/2269",
"diff_url": "https://github.com/huggingface/datasets/pull/2269.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2269.patch",
"merged_at": 1619533316000
} | The benchmark runs are failing on master because it tries to use an iterable to query the dataset.
However there's currently an issue caused by the use of `np.array` instead of `np.fromiter` on the iterable.
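To illustrate the difference on a plain iterable (a standalone sketch, not the library code itself):
```python
import numpy as np

gen = (i for i in range(5))
np.array(gen)  # 0-d object array wrapping the generator -- unusable as indices
np.fromiter((i for i in range(5)), dtype=np.int64)  # array([0, 1, 2, 3, 4])
```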
This PR fixes it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2269/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2268/comments | https://api.github.com/repos/huggingface/datasets/issues/2268/events | https://github.com/huggingface/datasets/pull/2268 | 868,773,380 | MDExOlB1bGxSZXF1ZXN0NjI0MjQyODg1 | 2,268 | Don't use pyarrow 4.0.0 since it segfaults when casting a sliced ListArray of integers | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@lhoestq note that the segfault also occurs on Linux.",
"Created the ticket at\r\nhttps://issues.apache.org/jira/browse/ARROW-12568",
"@lhoestq the ticket you mentioned is now in state resolved. Pyarrow supports AArch64 after version 4.0.0. Because of this restriction `datasets` is not installing in AArch64 systems."
] | 1,619,524,708,000 | 1,623,501,889,000 | 1,619,531,000,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2268",
"html_url": "https://github.com/huggingface/datasets/pull/2268",
"diff_url": "https://github.com/huggingface/datasets/pull/2268.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2268.patch",
"merged_at": 1619531000000
} | This test `tests/test_table.py::test_concatenation_table_cast` segfaults with the latest update of pyarrow 4.0.0.
Setting `pyarrow<4.0.0` for now. I'll open an issue on JIRA once I know more about the origin of the issue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2268/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2268/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2267/comments | https://api.github.com/repos/huggingface/datasets/issues/2267/events | https://github.com/huggingface/datasets/issues/2267 | 868,291,129 | MDU6SXNzdWU4NjgyOTExMjk= | 2,267 | DatasetDict save load Failing test in 1.6 not in 1.5 | {
"login": "timothyjlaurent",
"id": 2000204,
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timothyjlaurent",
"html_url": "https://github.com/timothyjlaurent",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks for reporting ! We're looking into it",
"I'm not able to reproduce this, do you think you can provide a code that creates a DatasetDict that has this issue when saving and reloading ?",
"Hi, I just ran into a similar error. Here is the minimal code to reproduce:\r\n```python\r\nfrom datasets import load_dataset, DatasetDict\r\nds = load_dataset('super_glue', 'multirc')\r\n\r\nds.save_to_disk('tempds')\r\n\r\nds = DatasetDict.load_from_disk('tempds')\r\n\r\n```\r\n\r\n```bash\r\nReusing dataset super_glue (/home/idahl/.cache/huggingface/datasets/super_glue/multirc/1.0.2/2fb163bca9085c1deb906aff20f00c242227ff704a4e8c9cfdfe820be3abfc83)\r\nTraceback (most recent call last):\r\n File \"/home/idahl/eval-util-expl/multirc/tmp.py\", line 7, in <module>\r\n ds = DatasetDict.load_from_disk('tempds')\r\n File \"/home/idahl/miniconda3/envs/eval-util-expl/lib/python3.9/site-packages/datasets/dataset_dict.py\", line 710, in load_from_disk\r\n dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)\r\n File \"/home/idahl/miniconda3/envs/eval-util-expl/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 687, in load_from_disk\r\n return Dataset(\r\n File \"/home/idahl/miniconda3/envs/eval-util-expl/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 274, in __init__\r\n raise ValueError(\r\nValueError: External features info don't match the dataset:\r\nGot\r\n{'answer': Value(dtype='string', id=None), 'idx': {'answer': Value(dtype='int32', id=None), 'paragraph': Value(dtype='int32', id=None), 'question': Value(dtype='int32', id=None)}, 'label': ClassLabel(num_classes=2, names=['False', 'True'], names_file=None, id=None), 'paragraph': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None)}\r\nwith type\r\nstruct<answer: string, idx: struct<answer: int32, paragraph: int32, question: int32>, label: int64, paragraph: string, question: string>\r\n\r\nbut expected something like\r\n{'answer': Value(dtype='string', id=None), 'idx': {'paragraph': Value(dtype='int32', id=None), 'question': Value(dtype='int32', id=None), 'answer': Value(dtype='int32', id=None)}, 'label': Value(dtype='int64', id=None), 'paragraph': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None)}\r\nwith type\r\nstruct<answer: string, idx: struct<paragraph: int32, question: int32, answer: int32>, label: int64, paragraph: string, question: string>\r\n\r\n```\r\n\r\nThe non-matching part seems to be\r\n`'label': ClassLabel(num_classes=2, names=['False', 'True'], names_file=None, id=None),`\r\nvs \r\n`'label': Value(dtype='int64', id=None),`\r\n\r\nAnd the order in the `<struct...` being different, which might cause the [features.type != inferred_features.type](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L274) condition to become true and raise this ValueError.\r\n\r\n\r\nI am using datasets version 1.6.2.\r\n\r\nEdit: can confirm, this works without error in version 1.5.0",
"My current workaround is to remove the idx feature:\r\n\r\n```\r\n\r\nfrom datasets import load_dataset, DatasetDict, Value\r\nds = load_dataset('super_glue', 'multirc')\r\nds = ds.remove_columns('idx')\r\n\r\nds.save_to_disk('tempds')\r\n\r\nds = DatasetDict.load_from_disk('tempds')\r\n\r\n```\r\n\r\nworks.",
"It looks like this issue comes from the order of the fields in the 'idx' struct that is different for some reason.\r\nI'm looking into it. Note that as a workaround you can also flatten the nested features with `ds = ds.flatten()`",
"I just pushed a fix on `master`. We'll do a new release soon !\r\n\r\nThanks for reporting"
] | 1,619,481,805,000 | 1,622,215,654,000 | null | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` fixes the problem.
## Steps to reproduce the bug
```python
### Load a dataset dict from jsonl into `ds_dict` (loading code omitted from the report)
path = '/test/foo'
ds_dict.save_to_disk(path)
ds_from_disk = DatasetDict.load_from_disk(path)  # <-- this is where I see the error on 1.6
```
## Expected results
Upgrading to 1.6 shouldn't break that test. We should be able to serialize to and from disk.
## Actual results
```
# Infer features if None
inferred_features = Features.from_arrow_schema(arrow_table.schema)
if self.info.features is None:
self.info.features = inferred_features
# Infer fingerprint if None
if self._fingerprint is None:
self._fingerprint = generate_fingerprint(self)
# Sanity checks
assert self.features is not None, "Features can't be None in a Dataset object"
assert self._fingerprint is not None, "Fingerprint can't be None in a Dataset object"
if self.info.features.type != inferred_features.type:
> raise ValueError(
"External features info don't match the dataset:\nGot\n{}\nwith type\n{}\n\nbut expected something like\n{}\nwith type\n{}".format(
self.info.features, self.info.features.type, inferred_features, inferred_features.type
)
)
E ValueError: External features info don't match the dataset:
E Got
E {'_input_hash': Value(dtype='int64', id=None), '_task_hash': Value(dtype='int64', id=None), '_view_id': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'encoding__ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'encoding__offsets': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'encoding__overflowing': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'encoding__tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'encoding__words': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'relations': [{'child': Value(dtype='int64', id=None), 'child_span': {'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None)}, 'color': Value(dtype='string', id=None), 'head': Value(dtype='int64', id=None), 'head_span': {'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None)}, 'label': Value(dtype='string', id=None)}], 'spans': [{'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None)}], 'text': Value(dtype='string', id=None), 'tokens': [{'disabled': Value(dtype='bool', id=None), 'end': Value(dtype='int64', id=None), 'id': Value(dtype='int64', id=None), 'start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'ws': Value(dtype='bool', id=None)}]}
E with type
E struct<_input_hash: int64, _task_hash: int64, _view_id: string, answer: string, encoding__ids: list<item: int64>, encoding__offsets: list<item: list<item: int64>>, encoding__overflowing: list<item: null>, encoding__tokens: list<item: string>, encoding__words: list<item: int64>, ner_ids: list<item: int64>, ner_labels: list<item: string>, relations: list<item: struct<child: int64, child_span: struct<end: int64, label: string, start: int64, token_end: int64, token_start: int64>, color: string, head: int64, head_span: struct<end: int64, label: string, start: int64, token_end: int64, token_start: int64>, label: string>>, spans: list<item: struct<end: int64, label: string, start: int64, text: string, token_end: int64, token_start: int64, type: string>>, text: string, tokens: list<item: struct<disabled: bool, end: int64, id: int64, start: int64, text: string, ws: bool>>>
E
E but expected something like
E {'_input_hash': Value(dtype='int64', id=None), '_task_hash': Value(dtype='int64', id=None), '_view_id': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'encoding__ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'encoding__offsets': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'encoding__overflowing': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'encoding__tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'encoding__words': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'relations': [{'head': Value(dtype='int64', id=None), 'child': Value(dtype='int64', id=None), 'head_span': {'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}, 'child_span': {'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}, 'color': Value(dtype='string', id=None), 'label': Value(dtype='string', id=None)}], 'spans': [{'text': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None), 'label': Value(dtype='string', id=None)}], 'text': Value(dtype='string', id=None), 'tokens': [{'text': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'id': Value(dtype='int64', id=None), 'ws': Value(dtype='bool', id=None), 'disabled': Value(dtype='bool', id=None)}]}
E with type
E struct<_input_hash: int64, _task_hash: int64, _view_id: string, answer: string, encoding__ids: list<item: int64>, encoding__offsets: list<item: list<item: int64>>, encoding__overflowing: list<item: null>, encoding__tokens: list<item: string>, encoding__words: list<item: int64>, ner_ids: list<item: int64>, ner_labels: list<item: string>, relations: list<item: struct<head: int64, child: int64, head_span: struct<start: int64, end: int64, token_start: int64, token_end: int64, label: string>, child_span: struct<start: int64, end: int64, token_start: int64, token_end: int64, label: string>, color: string, label: string>>, spans: list<item: struct<text: string, start: int64, token_start: int64, token_end: int64, end: int64, type: string, label: string>>, text: string, tokens: list<item: struct<text: string, start: int64, end: int64, id: int64, ws: bool, disabled: bool>>>
../../../../../.virtualenvs/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:274: ValueError
```
## Versions
```
- Datasets: 1.6.1
- Python: 3.8.5 (default, Jan 26 2021, 10:01:04)
[Clang 12.0.0 (clang-1200.0.32.2)]
- Platform: macOS-10.15.7-x86_64-i386-64bit
```
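For completeness, a sketch of the workarounds mentioned in the comments (using the public dataset from the minimal repro there):
```python
from datasets import load_dataset, DatasetDict

ds = load_dataset("super_glue", "multirc")
ds = ds.remove_columns("idx")  # or flatten the nested features instead: ds = ds.flatten()
ds.save_to_disk("tempds")
ds = DatasetDict.load_from_disk("tempds")
```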
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2267/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2266 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2266/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2266/comments | https://api.github.com/repos/huggingface/datasets/issues/2266/events | https://github.com/huggingface/datasets/pull/2266 | 867,864,353 | MDExOlB1bGxSZXF1ZXN0NjIzNDY1OTI5 | 2,266 | Make tests run faster | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"LOL, I was also working on something similar 😅. I'm gonna have a look!!!",
"Sorry I didn't know you were also working on it ^^'\r\nAnd yes I 100% agree with you on the points you mentioned. We should definitely improve the coverage. It would be nice to have a clearer separation to know which tests in the suite are unit tests and which ones are integration tests\r\n",
"Never mind: we both noticed tests can be improved. More PRs to come... 😉 \r\n\r\nAccording to the literature, unit tests are those that test a behavior unit, isolated from the other components and must be very fast: for me, this last requirement implies that they must be performed completely _in memory_.\r\n\r\nAs opposed, integration tests are those which also test interactions with _external_ components, like web services, databases, file system, etc.\r\n\r\nThe problem I see is that our code is still too coupled and it is difficult to isolate components for testing. Therefore, I would suggest acting iteratively, by refactoring to decouple components and then implement unit tests for each component in isolation."
] | 1,619,452,540,000 | 1,619,690,413,000 | 1,619,690,404,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2266",
"html_url": "https://github.com/huggingface/datasets/pull/2266",
"diff_url": "https://github.com/huggingface/datasets/pull/2266.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2266.patch",
"merged_at": 1619690404000
} | From 7min to 2min to run pytest.
Ideally we should keep the whole CI run time below 10min.
In this PR I removed the remote tests that were never used.
I also replaced nested parametrized tests with unit tests.
This makes me think that we could still add more high level tests to check for a few combinations of parameters (but not all of them since there are too many of them).
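As a generic illustration of the trade-off (made-up parameter names, not the actual test suite): stacked parametrizations multiply the number of test runs, while targeted unit tests keep the count small.
```python
import pytest

@pytest.mark.parametrize("in_memory", [False, True])
@pytest.mark.parametrize("keep_in_memory", [False, True])
def test_load_dataset_combinations(in_memory, keep_in_memory):
    ...  # every combination runs, which adds up quickly

def test_load_dataset_keep_in_memory():
    ...  # one targeted unit test per behaviour
```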
Let me know what you think
Finally, in another PR, we can also split the CI into two circleci jobs:
- the tests of the core code of the lib
- the tests of all the dataset/metric scripts. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2266/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2266/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2265 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2265/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2265/comments | https://api.github.com/repos/huggingface/datasets/issues/2265/events | https://github.com/huggingface/datasets/pull/2265 | 867,490,646 | MDExOlB1bGxSZXF1ZXN0NjIzMTUyOTg5 | 2,265 | Update black | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,429,709,000 | 1,619,430,468,000 | 1,619,430,467,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2265",
"html_url": "https://github.com/huggingface/datasets/pull/2265",
"diff_url": "https://github.com/huggingface/datasets/pull/2265.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2265.patch",
"merged_at": 1619430467000
} | The latest black version 21.4b0 requires reformatting most dataset scripts and also the core code of the lib.
This currently makes the CI fail on master. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2265/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2264 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2264/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2264/comments | https://api.github.com/repos/huggingface/datasets/issues/2264/events | https://github.com/huggingface/datasets/pull/2264 | 867,476,228 | MDExOlB1bGxSZXF1ZXN0NjIzMTQwODA1 | 2,264 | Fix memory issue in multiprocessing: Don't pickle table index | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"The code quality check is going to be fixed by #2265 ",
"The memory issue didn't come from `self.__dict__.copy()` but from the fact that this dict contains `_batches` which has all the batches of the table in it.\r\nTherefore for a MemoryMappedTable all the data in `_batches` were copied in memory when pickling and this is the issue.",
"I'm still investigating why we didn't catch this issue in the tests.\r\nThis test should have caught it but didn't:\r\n\r\nhttps://github.com/huggingface/datasets/blob/3db67f5ff6cbf807b129d2b4d1107af27623b608/tests/test_table.py#L350-L353",
"I'll focus on the patch release and fix the test in another PR after the release",
"Yes, I think it is better that way..."
] | 1,619,428,895,000 | 1,619,433,028,000 | 1,619,431,694,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2264",
"html_url": "https://github.com/huggingface/datasets/pull/2264",
"diff_url": "https://github.com/huggingface/datasets/pull/2264.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2264.patch",
"merged_at": 1619431694000
} | The table index is currently being pickled when doing multiprocessing, which brings all the record batches of the dataset into memory.
I fixed that by not pickling the index attributes. Therefore each process has to rebuild the index when unpickling the table.
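In spirit, the change looks something like this generic sketch (illustrative class and attribute names, not the actual library code): the derived index is excluded from the pickled state and rebuilt when unpickling.
```python
class TableWithIndex:
    def __init__(self, path):
        self.path = path
        self._batches = self._build_index()  # large, derived from the file on disk

    def _build_index(self):
        return []  # placeholder for reading the record batches

    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop("_batches", None)  # don't ship the index to child processes
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self._batches = self._build_index()  # each process rebuilds the index locally
```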
Fix issue #2256
We'll do a patch release asap! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2264/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2264/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2263/comments | https://api.github.com/repos/huggingface/datasets/issues/2263/events | https://github.com/huggingface/datasets/pull/2263 | 867,420,912 | MDExOlB1bGxSZXF1ZXN0NjIzMDk0NTcy | 2,263 | test data added, dataset_infos updated | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,425,638,000 | 1,619,688,621,000 | 1,619,688,620,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2263",
"html_url": "https://github.com/huggingface/datasets/pull/2263",
"diff_url": "https://github.com/huggingface/datasets/pull/2263.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2263.patch",
"merged_at": 1619688620000
} | Fixes #2262. Thanks for pointing out the issue with the dataset, @jinmang2! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2263/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2263/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2262/comments | https://api.github.com/repos/huggingface/datasets/issues/2262/events | https://github.com/huggingface/datasets/issues/2262 | 867,325,351 | MDU6SXNzdWU4NjczMjUzNTE= | 2,262 | NewsPH NLI dataset script fails to access test data. | {
"login": "jinmang2",
"id": 37775784,
"node_id": "MDQ6VXNlcjM3Nzc1Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/37775784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinmang2",
"html_url": "https://github.com/jinmang2",
"followers_url": "https://api.github.com/users/jinmang2/followers",
"following_url": "https://api.github.com/users/jinmang2/following{/other_user}",
"gists_url": "https://api.github.com/users/jinmang2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jinmang2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jinmang2/subscriptions",
"organizations_url": "https://api.github.com/users/jinmang2/orgs",
"repos_url": "https://api.github.com/users/jinmang2/repos",
"events_url": "https://api.github.com/users/jinmang2/events{/privacy}",
"received_events_url": "https://api.github.com/users/jinmang2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks @bhavitvyamalik for the fix !\r\nThe fix will be available in the next release.\r\nIt's already available on the `master` branch. For now you can either install `datasets` from source or use `script_version=\"master\"` in `load_dataset` to use the fixed version of this dataset."
] | 1,619,419,481,000 | 1,619,688,723,000 | 1,619,688,620,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | The NewsPH-NLI dataset script (#1192) fails to access the test data.
In the script linked below, the download manager downloads the train data again when it is asked for the test data.
https://github.com/huggingface/datasets/blob/2a2dd6316af2cc7fdf24e4779312e8ee0c7ed98b/datasets/newsph_nli/newsph_nli.py#L71
If you download it according to the script above, you can see that train and test receive the same data as shown below.
```python
>>> from datasets import load_dataset
>>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py")
>>> newsph_nli
DatasetDict({
train: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 420000
})
test: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 420000
})
validation: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 90000
})
})
>>> newsph_nli["train"][0]
{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',
'label': 1,
'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'}
>>> newsph_nli["test"][0]
{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',
'label': 1,
'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'}
```
Locally, I modified the source code as shown below and got the correct result.
```python
71 test_path = os.path.join(download_path, "test.csv")
```
```python
>>> from datasets import load_dataset
>>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py")
>>> newsph_nli
DatasetDict({
train: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 420000
})
test: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 9000
})
validation: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 90000
})
})
>>> newsph_nli["train"][0]
{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',
'label': 1,
'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'}
>>> newsph_nli["test"][0]
{'hypothesis': '-- JAI (@JaiPaller) September 13, 2019',
'label': 1,
'premise': 'Pinag-iingat ng Konsulado ng Pilipinas sa Dubai ang publiko, partikular ang mga donor, laban sa mga scam na gumagamit ng mga charitable organization.'}
```
I don't have experience with open-source pull requests, so I suggest that you apply this change to the source yourselves.
Thank you for reading :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2262/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2261/comments | https://api.github.com/repos/huggingface/datasets/issues/2261/events | https://github.com/huggingface/datasets/pull/2261 | 867,088,818 | MDExOlB1bGxSZXF1ZXN0NjIyODIxNzQw | 2,261 | Improve ReadInstruction logic and update docs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Ready for the final review"
] | 1,619,377,646,000 | 1,621,275,884,000 | 1,621,270,137,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2261",
"html_url": "https://github.com/huggingface/datasets/pull/2261",
"diff_url": "https://github.com/huggingface/datasets/pull/2261.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2261.patch",
"merged_at": 1621270137000
} | Improve ReadInstruction logic and docs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2261/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2260 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2260/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2260/comments | https://api.github.com/repos/huggingface/datasets/issues/2260/events | https://github.com/huggingface/datasets/pull/2260 | 866,961,697 | MDExOlB1bGxSZXF1ZXN0NjIyNzMwODYx | 2,260 | GooAQ dataset added | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks for adding this one !\r\nThe download manager does support downloading files on git lfs via their github url. No need for a manual download option ;)"
] | 1,619,342,808,000 | 1,620,376,577,000 | 1,620,376,577,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2260",
"html_url": "https://github.com/huggingface/datasets/pull/2260",
"diff_url": "https://github.com/huggingface/datasets/pull/2260.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2260.patch",
"merged_at": 1620376577000
} | @lhoestq here the dataset is stored with Git LFS. Should I add an option for manually downloading the dataset with `git lfs pull` after cloning the repo, or can we accommodate this in the current `download_and_extract`? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2260/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2259/comments | https://api.github.com/repos/huggingface/datasets/issues/2259/events | https://github.com/huggingface/datasets/pull/2259 | 866,880,092 | MDExOlB1bGxSZXF1ZXN0NjIyNjc2ODA0 | 2,259 | Add support for Split.ALL | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Honestly, I think we should fix some other issues in Split API before this change. E. g. currently the following will not work, even though it should:\r\n```python\r\nimport datasets\r\ndatasets.load_dataset(\"sst\", split=datasets.Split.TRAIN+datasets.Split.TEST) # AssertionError\r\n```\r\n\r\nEDIT:\r\nActually, think it's OK to merge this PR because the fix will not touch this PR's code."
] | 1,619,315,142,000 | 1,624,868,487,000 | 1,624,868,487,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2259",
"html_url": "https://github.com/huggingface/datasets/pull/2259",
"diff_url": "https://github.com/huggingface/datasets/pull/2259.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2259.patch",
"merged_at": 1624868487000
} | The title says it all. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2259/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2258/comments | https://api.github.com/repos/huggingface/datasets/issues/2258/events | https://github.com/huggingface/datasets/pull/2258 | 866,870,588 | MDExOlB1bGxSZXF1ZXN0NjIyNjcxNTQy | 2,258 | Fix incorrect update_metadata_with_features calls in ArrowDataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@lhoestq Maybe a test that runs the functions that call `update_metadata_with_features` and checks if metadata was updated would be nice to prevent this from happening in the future."
] | 1,619,311,718,000 | 1,619,457,390,000 | 1,619,456,044,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2258",
"html_url": "https://github.com/huggingface/datasets/pull/2258",
"diff_url": "https://github.com/huggingface/datasets/pull/2258.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2258.patch",
"merged_at": 1619456044000
} | Fixes bugs in the `update_metadata_with_features` calls (caused by changes in #2151) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2258/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2257 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2257/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2257/comments | https://api.github.com/repos/huggingface/datasets/issues/2257/events | https://github.com/huggingface/datasets/pull/2257 | 866,755,203 | MDExOlB1bGxSZXF1ZXN0NjIyNTkwMDQw | 2,257 | added metrics for CUAD | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"> For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require `exact_match` metric too here\r\n\r\n@bhavitvyamalik I guess the mentioned metrics are enough but it would be better if exact match is also added since the standard SQUAD dataset also has it.",
"I would like to quote it from the website that I am following to learn\nthese things.\nExact Match:\nThis metric is as simple as it sounds. For each question+answer pair, if\nthe characters of the model's prediction exactly match the characters of\n*(one\nof) the True Answer(s)*, EM = 1, otherwise EM = 0. This is a strict\nall-or-nothing metric; being off by a single character results in a score\nof 0. When assessing against a negative example, if the model predicts any\ntext at all, it automatically receives a 0 for that example.\n\nSo, I guess you need to ensure at least 1 predicted answer matches for EM\nto be 1.\nSource:\nhttps://qa.fastforwardlabs.com/no%20answer/null%20threshold/bert/distilbert/exact%20match/f1/robust%20predictions/2020/06/09/Evaluating_BERT_on_SQuAD.html\n\nYou can go to their homepage and read the other links. They have detailed\nexplanations on evaluation metrics. You can also have a look at the\nsquad_v2 metric file for further clarification.\n\nRegards,\nMohammed Rakib\n\nOn Sun, 25 Apr 2021 at 15:20, Bhavitvya Malik ***@***.***>\nwrote:\n\n> I'm a little confused when it comes to 2 ground truths which can be a\n> possible answer. Like here for eg.\n>\n> predictions = [{'prediction_text': ['The seller:', 'The buyer/End-User:\n> Shenzhen LOHAS Supply Chain Management Co., Ltd.'], 'id':\n> 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply\n> Agreement__Parties'}]\n>\n> references = [{'answers': {'answer_start': [143, 49], 'text': ['The\n> seller:', 'The buyer/End-User: Shenzhen LOHAS Supply Chain Management Co.,\n> Ltd.']}, 'id':\n> 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply\n> Agreement__Parties'}]\n>\n> Should I ensure at least 1 predicted answer matches or both predicted\n> answers should match (like in this case) for EM to be 1?\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/2257#issuecomment-826289753>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AHMYZAZSAEZNFWEMVAPK6M3TKPNHLANCNFSM43QFZVPQ>\n> .\n>\n",
"Updated the same @MohammedRakib! Even if a single answer matches I'm returning 1 in that case for EM (not traversing all predictions once we have one `exact_match` from prediction)"
] | 1,619,273,394,000 | 1,619,690,018,000 | 1,619,540,192,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2257",
"html_url": "https://github.com/huggingface/datasets/pull/2257",
"diff_url": "https://github.com/huggingface/datasets/pull/2257.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2257.patch",
"merged_at": null
} | For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90% recall. The last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we also need the `exact_match` metric here | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2257/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2257/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2256 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2256/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2256/comments | https://api.github.com/repos/huggingface/datasets/issues/2256/events | https://github.com/huggingface/datasets/issues/2256 | 866,708,609 | MDU6SXNzdWU4NjY3MDg2MDk= | 2,256 | Running `dataset.map` with `num_proc > 1` uses a lot of memory | {
"login": "roskoN",
"id": 8143425,
"node_id": "MDQ6VXNlcjgxNDM0MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8143425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roskoN",
"html_url": "https://github.com/roskoN",
"followers_url": "https://api.github.com/users/roskoN/followers",
"following_url": "https://api.github.com/users/roskoN/following{/other_user}",
"gists_url": "https://api.github.com/users/roskoN/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roskoN/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roskoN/subscriptions",
"organizations_url": "https://api.github.com/users/roskoN/orgs",
"repos_url": "https://api.github.com/users/roskoN/repos",
"events_url": "https://api.github.com/users/roskoN/events{/privacy}",
"received_events_url": "https://api.github.com/users/roskoN/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Thanks for reporting ! We are working on this and we'll do a patch release very soon.",
"We did a patch release to fix this issue.\r\nIt should be fixed in the new version 1.6.1\r\n\r\nThanks again for reporting and for the details :)"
] | 1,619,258,180,000 | 1,619,457,135,000 | 1,619,457,135,000 | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | ## Describe the bug
Running `dataset.map` with `num_proc > 1` leads to tremendous memory usage that requires swapping to disk, and it becomes very slow.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dstc8_dataset = load_dataset("roskoN/dstc8-reddit-corpus", keep_in_memory=False)
def _prepare_sample(batch):
return {"input_ids": list(), "attention_mask": list()}
for split_name, dataset_split in list(dstc8_dataset.items()):
print(f"Processing {split_name}")
encoded_dataset_split = dataset_split.map(
function=_prepare_sample,
batched=True,
num_proc=4,
remove_columns=dataset_split.column_names,
batch_size=10,
writer_batch_size=10,
keep_in_memory=False,
)
print(encoded_dataset_split)
path = f"./data/encoded_{split_name}"
encoded_dataset_split.save_to_disk(path)
```
## Expected results
Memory usage should stay within reasonable boundaries.
## Actual results
This is the htop output from running the provided script.

## Versions
```
- Datasets: 1.6.0
- Python: 3.8.8 (default, Apr 13 2021, 19:58:26)
[GCC 7.3.0]
- Platform: Linux-4.19.128-microsoft-standard-x86_64-with-glibc2.10
```
Running on WSL2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2256/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2255 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2255/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2255/comments | https://api.github.com/repos/huggingface/datasets/issues/2255/events | https://github.com/huggingface/datasets/pull/2255 | 866,242,892 | MDExOlB1bGxSZXF1ZXN0NjIyMTc0Njg4 | 2,255 | Task casting for text classification & question answering | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"cc @abhi1thakur ",
"Looks really nice so far, thanks !\r\nMaybe if a dataset doesn't have a template for a specific task we could try the default template of this task ?",
"hey @SBrandeis @lhoestq,\r\n\r\ni now have a better idea about what you guys are trying to achieve with the task templates and have a few follow-up questions:\r\n\r\n1. how did you envision using `DatasetInfo` for running evaluation? my understanding is that all `dataset_infos.json` files are stored in the `datasets` repo (unlike `transformers` where each model's weights etc are stored in a dedicated repo). \r\nthis suggests the following workflow:\r\n\r\n```\r\n- git clone datasets\r\n- load target dataset to evaluate\r\n- load `dataset_infos.json` for target dataset\r\n- run eval for each task template in `task_templates`\r\n- store metrics as evaluation cards (similar to what is done in `autonlp`)\r\n```\r\n2. assuming the above workflow, i see that the current `TaskTemplate` attributes of `task`, `input_schema`, and `label_schema` still require some wrangling from `dataset_infos.json` to reproduce additional mappings like `label2id` that we'd need for e.g. text classification. an alternative would be to instantiate the task template class directly from the JSON with something like\r\n```python\r\nfrom datasets.tasks import TextClassification\r\nfrom transformers import AutoModelForSequenceClassification, AutoConfig\r\n\r\ntc = TextClassification.from_json(\"path/to/dataset_infos.json\")\r\n# load a model with the desired config\r\nmodel_ckpt = ...\r\nconfig = AutoConfig.from_pretrained(model_ckpt, label2id=tc.label2id, id2label=tc.id2label)\r\nmodel = AutoModelForSequenceClassification.from_pretrained(model_ckpt, config=config)\r\n# run eval ...\r\n```\r\nperhaps this is what @SBrandeis had in mind with the `TaskTemplate.from_dict` method?\r\n\r\n3. i personally prefer using `task_templates` over `supervised_keys` because it encourages the contributor to think in terms of 1 or more tasks. my question here is do we currently use `supervised_keys` for anything important in the `datasets` library?",
"1. How do you envision using DatasetInfo for running evaluation?\r\n\r\nThe initial idea was to be able to do something like this:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"name\", task=\"binary_classification\")\r\n# OR\r\ndset = load_dataset(\"name\")\r\ndset = dset.prepare_for_task(\"binary_classification\")\r\n```\r\n\r\n2. I don't think that's needed if we proceed as mentioned above\r\n\r\n3. `supervised_keys` are mostly a legacy compatibility thing with TF datasets, not sure it's used for anything right now. I'll let @lhoestq give more details on that\r\n\r\n[Edit 1] Typo",
"> The initial idea was to be able to do something like this:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> dset = load_dataset(\"name\", task=\"binary_classification\")\r\n> # OR\r\n> dset = load_dataset(\"name\")\r\n> dset = dset.prepare_for_task(\"binary_classification\")\r\n> ```\r\n\r\nah that's very elegant! just so i've completely understood, the result would be that the relevant column names of `dset` would be mapped to e.g. `text` and `label` and thus we'd have a uniform schema for the evaluation of all `binary_classification` tasks?",
"That's correct! Also, the features need to be appropriately casted\r\nFor a classification task for example, we would need to cast the datasets features to something like this:\r\n```python\r\ndatasets.Features({\r\n \"text\": datasets.Value(\"string\"),\r\n \"label\": datasets.ClassLabel(names=[...]),\r\n})\r\n```\r\n",
"3. We can ignore `supervised_keys` (it came from TFDS and we're not using it) and use `task_templates`",
"great, thanks a lot for your answers! now it's much clearer what i need to do next 😃 ",
"hey @lhoestq @SBrandeis, \r\n\r\ni've made some small tweaks to @SBrandeis's code so that `Dataset.prepare_for_task` is called in `DatasetBuilder`. using the `emotion` dataset as a test case, the following now works:\r\n\r\n ```python\r\n# DatasetDict with default columns\r\nds = load_dataset(\"./datasets/emotion/\")\r\n# DatasetDict({\r\n# train: Dataset({\r\n# features: ['tweet', 'emotion'],\r\n# num_rows: 16000\r\n# })\r\n# validation: Dataset({\r\n# features: ['tweet', 'emotion'],\r\n# num_rows: 2000\r\n# })\r\n# test: Dataset({\r\n# features: ['tweet', 'emotion'],\r\n# num_rows: 2000\r\n# })\r\n# })\r\n\r\n# DatasetDict with remapped columns\r\nds = load_dataset(\"./datasets/emotion/\", task=\"text_classification\")\r\nDatasetDict({\r\n# train: Dataset({\r\n# features: ['text', 'label'],\r\n# num_rows: 16000\r\n# })\r\n# validation: Dataset({\r\n# features: ['text', 'label'],\r\n# num_rows: 2000\r\n# })\r\n# test: Dataset({\r\n# features: ['text', 'label'],\r\n# num_rows: 2000\r\n# })\r\n# })\r\n\r\n# Dataset with default columns\r\nds = load_dataset(\"./datasets/emotion/\", split=\"train\")\r\n# Map/cast features\r\nds = ds.prepare_for_task(\"text_classification\")\r\n# Dataset({\r\n# features: ['text', 'label'],\r\n# num_rows: 16000\r\n# })\r\n```\r\n\r\ni have a few follow-up questions / remarks:\r\n\r\n1. i'm working under the assumption that contributors / users only provide a unique set of task types. in particular, the current implementation does not support something like:\r\n```python\r\ntask_templates=[TextClassification(labels=class_names, text_column=\"tweet\", label_column=\"emotion\"), TextClassification(labels=class_names, text_column=\"some_other_column\", label_column=\"some_other_column\")]\r\n```\r\nsince we use `TaskTemplate.task` and the filter for compatible templates in `Dataset.prepare_for_task`. should we support these scenarios? my hunch is that this is rare in practice, but please correct me if i'm wrong.\r\n\r\n2. when we eventually run evaluation for `transformers` models, i expect we'll be using the `Trainer` for which we can pass the standard label names to `TrainingArguments.label_names`. if that's the case, it might be prudent to heed the warning from the [docs](https://huggingface.co/transformers/main_classes/trainer.html?highlight=trainer#trainer) and use `labels` instead of `label` in the schema:\r\n> your model can accept multiple label arguments (use the label_names in your TrainingArguments to indicate their name to the Trainer) but none of them should be named \"label\".\r\n\r\n3. i plan to forge ahead on the rest of the pipeline taxonomy. please let me know if you'd prefer smaller, self-contained pull requests (e.g. one per task)",
"hey @lhoestq @SBrandeis, i think this is ready for another review 😃 \r\n\r\nin addition to a few comments / questions i've left in the pr, here's a few remarks:\r\n\r\n1. after some experimentation, i decided against allowing the user to specify nested column names for question-answering. i couldn't find a simple solution with the current api and suspect that i'd have to touch many areas of `datasets` to \"unflatten\" columns in a generic fashion.\r\n2. in the current implementation, the user can specify the outer column name for question-answering, but is expected to follow the inner schema for e.g. `answers.text` and `answers.answer_start`. we can decide later how much flexibility we want to give users\r\n3. i added a few unit tests\r\n4. as discussed, let's keep this pr focused on text classification / question answering and i'll add the other tasks in separate prs\r\n5. i renamed the tasks e.g. `text_classification` -> `text-classification` for consistency with the `Trainer` model cards [here](https://github.com/huggingface/transformers/pull/11599#pullrequestreview-656371007).",
"i'm not sure why the benchmarks are getting cancelled - is this expected?",
"> i'm not sure why the benchmarks are getting cancelled - is this expected?\r\n\r\nHmm I don't know. It's certainly unrelated to this PR though. Maybe github has some issues",
"Something is happening with actions: https://www.githubstatus.com/",
"hey @lhoestq and @SBrandeis, i've: \r\n\r\n* extended the `prepare_for_task` API along the lines that @lhoestq suggested. i wasn't entirely sure what the `datasets` convention is for docstrings with mixed types, so please see if my proposal makes sense\r\n* added a few new tests to check that we trigger the value errors on incorrect input\r\n\r\ni think this is ready for another review :)",
"> Looks all good thank you :)\r\n> \r\n> Can you also add `prepare_for_task` in the `main_classes.rst` file of the documentation ?\r\n\r\nDone! I also remembered that I needed to do the same for `DatasetDict`, so included this as well :)"
] | 1,619,193,641,000 | 1,621,344,696,000 | 1,621,344,695,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2255",
"html_url": "https://github.com/huggingface/datasets/pull/2255",
"diff_url": "https://github.com/huggingface/datasets/pull/2255.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2255.patch",
"merged_at": 1621344695000
} | This PR implements task preparation for a given task, as a continuation of #2143
The task taxonomy follows the 🤗 Transformers pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
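For illustration, a dataset script's `_info` could then look roughly like the sketch below. The column names, label names and the `TextClassification(labels=..., text_column=..., label_column=...)` signature follow the discussion in this PR and are assumptions, not necessarily the final released API.
```python
import datasets
from datasets.tasks import TextClassification


class Emotion(datasets.GeneratorBasedBuilder):
    # _split_generators and _generate_examples omitted for brevity

    def _info(self):
        class_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "tweet": datasets.Value("string"),
                    "emotion": datasets.ClassLabel(names=class_names),
                }
            ),
            # the task template declares how columns map onto the task schema
            task_templates=[
                TextClassification(
                    labels=class_names, text_column="tweet", label_column="emotion"
                )
            ],
        )
```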
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
# imports added to make the snippet self-contained; the QuestionAnswering import path follows this PR's discussion
from datasets import Features, load_dataset
from datasets.tasks import QuestionAnswering

squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2255/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2254/comments | https://api.github.com/repos/huggingface/datasets/issues/2254/events | https://github.com/huggingface/datasets/pull/2254 | 866,169,312 | MDExOlB1bGxSZXF1ZXN0NjIyMTE1NDI0 | 2,254 | Update format, fingerprint and indices after add_item | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"I renamed the variable, added a test for dataset._indices and fixed an issue with class_encode_column"
] | 1,619,188,309,000 | 1,619,541,049,000 | 1,619,541,048,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2254",
"html_url": "https://github.com/huggingface/datasets/pull/2254",
"diff_url": "https://github.com/huggingface/datasets/pull/2254.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2254.patch",
"merged_at": 1619541048000
} | Added fingerprint and format update wrappers, and updated the indices by adding the index of the newly added item to the table. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2254/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2254/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2253/comments | https://api.github.com/repos/huggingface/datasets/issues/2253/events | https://github.com/huggingface/datasets/pull/2253 | 866,034,321 | MDExOlB1bGxSZXF1ZXN0NjIyMDA2Njg3 | 2,253 | Perform minor refactoring: use config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2851292821,
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring",
"name": "refactoring",
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@lhoestq is there a problem in the master branch? I got a segmentation fault...\r\n```\r\ntests/test_table.py::test_concatenation_table_cast[in_memory] Fatal Python error: Segmentation fault\r\n```",
"Oh wow. Let me re-run the CI just to make sure",
"Hmm interesting, the segfault is still there. I'm investigating this issue on my windows machine",
"Feel free to merge master into this branch to fix the CI :)"
] | 1,619,178,347,000 | 1,622,106,765,000 | 1,619,535,779,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2253",
"html_url": "https://github.com/huggingface/datasets/pull/2253",
"diff_url": "https://github.com/huggingface/datasets/pull/2253.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2253.patch",
"merged_at": 1619535778000
} | Perform minor refactoring related to `config`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2253/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2252/comments | https://api.github.com/repos/huggingface/datasets/issues/2252/events | https://github.com/huggingface/datasets/issues/2252 | 865,870,710 | MDU6SXNzdWU4NjU4NzA3MTA= | 2,252 | Slow dataloading with big datasets issue persists | {
"login": "hwijeen",
"id": 29157715,
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwijeen",
"html_url": "https://github.com/hwijeen",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi ! Sorry to hear that. This may come from another issue then.\r\n\r\nFirst can we check if this latency comes from the dataset itself ?\r\nYou can try to load your dataset and benchmark the speed of querying random examples inside it ?\r\n```python\r\nimport time\r\nimport numpy as np\r\n\r\nfrom datasets import load_from_disk\r\n\r\ndataset = load_from_disk(...) # or from load_dataset...\r\n\r\n_start = time.time()\r\nn = 100\r\nfor i in np.random.default_rng(42).integers(0, len(dataset), size=n):\r\n _ = dataset[i]\r\nprint(time.time() - _start)\r\n```\r\n\r\nIf we see a significant speed difference between your two datasets then it would mean that there's an issue somewhere",
"Hi @lhoestq, here is the result. I additionally measured time to `load_from_disk`:\r\n* 60GB\r\n```\r\nloading took: 22.618776321411133\r\nramdom indexing 100 times took: 0.10214924812316895\r\n```\r\n\r\n* 600GB\r\n```\r\nloading took: 1176.1764674186707\r\nramdom indexing 100 times took: 2.853600025177002\r\n```\r\n\r\nHmm.. I double checked that it's version 1.6.0. The difference seems quite big, could it be related to the running environment? \r\n",
"I'm surprised by the speed change. Can you give more details about your dataset ?\r\nThe speed depends on the number of batches in the arrow tables and the distribution of the lengths of the batches.\r\nYou can access the batches by doing `dataset.data.to_batches()` (use only for debugging) (it doesn't bring data in memory).\r\n\r\nAlso can you explain what parameters you used if you used `map` calls ?\r\nAlso if you have some code that reproduces the issue I'd be happy to investigate it.",
"Also if you could give us more info about your env like your OS, version of pyarrow and if you're using an HDD or a SSD",
"Here are some details of my 600GB dataset. This is a dataset AFTER the `map` function and once I load this dataset, I do not use `map` anymore in the training. Regarding the distribution of the lengths, it is almost uniform (90% is 512 tokens, and 10% is randomly shorter than that -- typical setting for language modeling).\r\n```\r\nlen(batches):\r\n492763\r\n\r\nbatches[0]: \r\npyarrow.RecordBatch\r\nattention_mask: list<item: uint8>\r\n child 0, item: uint8\r\ninput_ids: list<item: int16>\r\n child 0, item: int16\r\nspecial_tokens_mask: list<item: uint8>\r\n child 0, item: uint8\r\ntoken_type_ids: list<item: uint8>\r\n child 0, item: uint8\r\n```\r\n\r\nHere the some parameters to `map` function just in case it is relevant:\r\n```\r\nnum_proc=1 # as multi processing is slower in my case\r\nload_from_cache_file=False\r\n```\r\n",
"Regarding the environment, I am running the code on a cloud server. Here are some info:\r\n```\r\nUbuntu 18.04.5 LTS # cat /etc/issue\r\npyarrow 3.0.0 # pip list | grep pyarrow\r\n```\r\nThe data is stored in SSD and it is mounted to the machine via Network File System.\r\n\r\nIf you could point me to some of the commands to check the details of the environment, I would be happy to provide relevant information @lhoestq !",
"I am not sure how I could provide you with the reproducible code, since the problem only arises when the data is big. For the moment, I would share the part that I think is relevant. Feel free to ask me for more info.\r\n\r\n```python\r\nclass MyModel(pytorch_lightning.LightningModule)\r\n def setup(self, stage):\r\n self.dataset = datasets.load_from_disk(path)\r\n self.dataset.set_format(\"torch\")\r\n\r\n def train_dataloader(self):\r\n collate_fn = transformers.DataCollatorForLanguageModeling(\r\n tokenizer=transformers.ElectraTokenizerFast.from_pretrained(tok_path)\r\n )\r\n dataloader = torch.utils.DataLoader(\r\n self.dataset,\r\n batch_size=32,\r\n collate_fn=collate_fn,\r\n num_workers=8,\r\n pin_memory=True,\r\n )\r\n```",
"Hi ! Sorry for the delay I haven't had a chance to take a look at this yet. Are you still experiencing this issue ?\r\nI'm asking because the latest patch release 1.6.2 fixed a few memory issues that could have lead to slow downs",
"Hi! I just ran the same code with different datasets (one is 60 GB and another 600 GB), and the latter runs much slower. ETA differs by 10x.",
"@lhoestq and @hwijeen\r\n\r\nDespite upgrading to datasets 1.6.2, still experiencing extremely slow (2h00) loading for a 300Gb local dataset shard size 1.1Gb on local HDD (40Mb/s read speed). This corresponds almost exactly to total data divided by reading speed implying that it reads the entire dataset at each load.\r\n\r\nStack details:\r\n=========\r\n\r\n> GCC version: Could not collect\r\n> Clang version: Could not collect\r\n> CMake version: Could not collect\r\n> \r\n> Python version: 3.7 (64-bit runtime)\r\n> Is CUDA available: True\r\n> CUDA runtime version: 10.2.89\r\n> GPU models and configuration: GPU 0: GeForce GTX 1050\r\n> Nvidia driver version: 457.63\r\n> cuDNN version: C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.2\\bin\\cudnn64_7.dll\r\n> HIP runtime version: N/A\r\n> MIOpen runtime version: N/A\r\n> \r\n> Versions of relevant libraries:\r\n> [pip3] datasets==1.6.2\r\n> [pip3] transformers==4.5.1\r\n> [pip3] numpy==1.19.1\r\n> [pip3] numpydoc==1.1.0\r\n> [pip3] pytorch-metric-learning==0.9.98\r\n> [pip3] torch==1.8.1\r\n> [pip3] torchaudio==0.8.1\r\n> [pip3] torchvision==0.2.2\r\n> [conda] blas 2.16 mkl conda-forge\r\n> [conda] cudatoolkit 10.2.89 hb195166_8 conda-forge\r\n> [conda] libblas 3.8.0 16_mkl conda-forge\r\n> [conda] libcblas 3.8.0 16_mkl conda-forge\r\n> [conda] liblapack 3.8.0 16_mkl conda-forge\r\n> [conda] liblapacke 3.8.0 16_mkl conda-forge\r\n> [conda] mkl 2020.1 216\r\n> [conda] numpy 1.19.1 py37hae9e721_0 conda-forge\r\n> [conda] numpydoc 1.1.0 py_1 conda-forge\r\n> [conda] pytorch 1.8.1 py3.7_cuda10.2_cudnn7_0 pytorch\r\n> [conda] pytorch-metric-learning 0.9.98 pyh39e3cac_0 metric-learning\r\n> [conda] torchaudio 0.8.1 py37 pytorch\r\n> [conda] torchvision 0.2.2 py_3 pytorch",
"Hi @BenoitDalFerro how do your load your dataset ?",
"Hi @lhoestq thanks for the quick turn-around, actually the plain vanilla way, without an particular knack or fashion, I tried to look into the documentation for some alternative but couldn't find any\r\n\r\n> dataset = load_from_disk(dataset_path=os.path.join(datasets_dir,dataset_dir))",
"I’m facing the same issue when loading a 900GB dataset (stored via `save_to_disk`): `load_from_disk(path_to_dir)` takes 1.5 hours and htop consistently shows high IO rates > 120 M/s.",
"@tsproisl same here, smells like ~~teen spirit~~ intended generator inadvertently ending up iterator\r\n\r\n@lhoestq perhaps solution to detect bug location in code is to track its signature via HD read usage monitoring, option is to add tracking decorator on top each function and sequentially close all hatches from top to bottom, suggest PySmart https://pypi.org/project/pySMART/ a Smartmontools implementation",
"I wasn't able to reproduce this on a toy dataset of around 300GB:\r\n\r\n```python\r\nimport datasets as ds\r\n\r\ns = ds.load_dataset(\"squad\", split=\"train\")\r\ns4000 = ds.concatenate_datasets([s] * 4000)\r\nprint(ds.utils.size_str(s4000.data.nbytes)) # '295.48 GiB'\r\n\r\ns4000.save_to_disk(\"tmp/squad_4000\")\r\n```\r\n\r\n```python\r\nimport psutil\r\nimport time\r\nfrom datasets import load_from_disk\r\n\r\ndisk = \"disk0\" # You may have to change your disk here\r\niocnt1 = psutil.disk_io_counters(perdisk=True)[disk]\r\ntime1 = time.time()\r\n\r\ns4000_reloaded = load_from_disk(\"tmp/squad_4000\")\r\n\r\ntime2 = time.time()\r\niocnt2 = psutil.disk_io_counters(perdisk=True)[disk]\r\n\r\nprint(f\"Blocks read {iocnt2.read_count - iocnt1.read_count}\") # Blocks read 18\r\nprint(f\"Elapsed time: {time2 - time1:.02f}s\") # Elapsed time: 14.60s\r\n```\r\n\r\nCould you run this on your side and tell me if how much time it takes ? Please run this when your machine is idle so that other processes don't interfere.\r\n\r\nI got these results on my macbook pro on datasets 1.6.2",
"@lhoestq thanks, test running as we speak, bear with me",
"Just tried on google colab and got ~1min for a 15GB dataset (only 200 times SQuAD), while it should be instantaneous. The time is spent reading the Apache Arrow table from the memory mapped file. This might come a virtual disk management issue. I'm trying to see if I can still speed it up on colab.",
"@lhoestq what is Google Colab's HD read speed, is it possible to introspect incl. make like SSD or HDD ?",
"@lhoestq Thank you! The issue is getting more interesting. The second script is still running, but it's definitely taking much longer than 15 seconds.",
"Okay, here’s the ouput:\r\nBlocks read 158396\r\nElapsed time: 529.10s\r\n\r\nAlso using datasets 1.6.2. Do you have any ideas, how to pinpoint the problem?",
"@lhoestq, @tsproisl mmmh still writing on my side about 1h to go, thinking on it are your large datasets all monoblock unsharded ? mine is 335 times 1.18Gb shards.",
"The 529.10s was a bit too optimistic. I cancelled the reading process once before running it completely, therefore the harddrive cache probably did its work.\r\n\r\nHere are three consecutive runs\r\nFirst run (freshly written to disk):\r\nBlocks read 309702\r\nElapsed time: 1267.74s\r\nSecond run (immediately after):\r\nBlocks read 113944\r\nElapsed time: 417.55s\r\nThird run (immediately after):\r\nBlocks read 42518\r\nElapsed time: 199.19s\r\n",
"@lhoestq \r\nFirst test\r\n> elapsed time: 11219.05s\r\n\r\nSecond test running bear with me, for Windows users slight trick to modify original \"disk0\" string:\r\n\r\nFirst find physical unit relevant key in dictionnary\r\n```\r\nimport psutil\r\npsutil.disk_io_counters(perdisk=True)\r\n```\r\n\r\n> {'PhysicalDrive0': sdiskio(read_count=18453286, write_count=4075333, read_bytes=479546467840, write_bytes=161590275072, read_time=20659, write_time=2464),\r\n> 'PhysicalDrive1': sdiskio(read_count=1495778, write_count=388781, read_bytes=548628622336, write_bytes=318234849280, read_time=426066, write_time=19085)}\r\n\r\nIn my case it's _PhysicalDrive1_\r\n\r\nThen insert relevant key's string as _disk_ variable\r\n\r\n```\r\npsutil.disk_io_counters()\r\ndisk = 'PhysicalDrive1' # You may have to change your disk here\r\niocnt1 = psutil.disk_io_counters(perdisk=True)[disk]\r\ntime1 = time.time()\r\ns4000_reloaded = load_from_disk(\"your path here\")\r\ntime2 = time.time()\r\niocnt2 = psutil.disk_io_counters(perdisk=True)[disk]\r\nprint(f\"Blocks read {iocnt2.read_count - iocnt1.read_count}\") # Blocks read 18\r\nprint(f\"Elapsed time: {time2 - time1:.02f}s\") # Elapsed time: 14.60s\r\n```",
"@lhoestq\r\nSecond test\r\n\r\n> Blocks read 1265609\r\n> Elapsed time: 11216.55s",
"@lhoestq any luck ?",
"Unfortunately no. Thanks for running the benchmark though, it shows that you machine does a lot of read operations. This is not expected: in other machines it does almost no read operations which enables a very fast loading.\r\n\r\nI did some tests on google colab and have the same issue. The first time the dataset arrow file is memory mapped takes always a lot of time (time seems linear with respect to the dataset size). Reloading the dataset is then instantaneous since the arrow file has already been memory mapped.\r\n\r\nI also tried using the Arrow IPC file format (see #1933) instead of the current streaming format that we use but it didn't help.\r\n\r\nMemory mapping is handled by the OS and depends on the disk you're using, so I'm not sure we can do much about it. I'll continue to investigate anyway, because I still don't know why in some cases it would go through the entire file (high `Blocks read ` as in your tests) and in other cases it would do almost no reading.",
"@lhoestq thanks for the effort, let's stay in touch",
"Just want to say that I am seeing the same issue. Dataset size if 268GB and it takes **3 hours** to load `load_from_disk`, using dataset version `1.9.0`. Filesystem underneath is `Lustre` ",
"Hi @lhoestq, confirmed Windows issue, exact same code running on Linux OS total loading time about 3 minutes.",
"Hmm that's different from what I got. I was on Ubuntu when reporting the initial issue."
] | 1,619,165,900,000 | 1,637,776,195,000 | null | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | Hi,
I reported slow data fetching when the data is large (#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here are the profiling results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 517.96 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
model_backward | 0.26144 |100 | 26.144 | 5.0475 |
model_forward | 0.11123 |100 | 11.123 | 2.1474 |
get_train_batch | 0.097121 |100 | 9.7121 | 1.8751 |
```
3) Running with 600GB, datasets==1.6.0
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 4563.2 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
get_train_batch | 5.1279 |100 | 512.79 | 11.237 |
model_backward | 4.8394 |100 | 483.94 | 10.605 |
model_forward | 0.12162 |100 | 12.162 | 0.26652 |
```
I see that `get_train_batch` lags when the data is large. Could this be related to a different issue?
I would be happy to provide any information necessary to investigate. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2252/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/2252/timeline | null | false |
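For context on the issue above, a minimal sketch (the dataset path is a placeholder) that separates the latency of raw random access into the Arrow-backed dataset from the latency added by the PyTorch `DataLoader`; if the second timing is much larger than the first, the slowdown is more likely in collation or worker startup than in the dataset itself:

```python
import time
import numpy as np
import torch
from datasets import load_from_disk

dataset = load_from_disk("path/to/arrow_dataset")  # placeholder path
dataset.set_format("torch")

# 1) Raw random access into the memory-mapped Arrow table.
rng = np.random.default_rng(0)
start = time.time()
for i in rng.integers(0, len(dataset), size=1_000):
    _ = dataset[int(i)]
print(f"1000 random lookups: {time.time() - start:.2f}s")

# 2) Sequential batches through a DataLoader (no model involved).
#    An identity collate function is used so variable-length examples
#    don't need padding just for this benchmark.
loader = torch.utils.data.DataLoader(
    dataset, batch_size=32, num_workers=0, collate_fn=lambda batch: batch
)
start = time.time()
for step, batch in enumerate(loader):
    if step == 100:
        break
print(f"100 DataLoader batches: {time.time() - start:.2f}s")
```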
https://api.github.com/repos/huggingface/datasets/issues/2251 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2251/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2251/comments | https://api.github.com/repos/huggingface/datasets/issues/2251/events | https://github.com/huggingface/datasets/issues/2251 | 865,848,705 | MDU6SXNzdWU4NjU4NDg3MDU= | 2,251 | while running run_qa.py, ran into a value error | {
"login": "nlee0212",
"id": 44570724,
"node_id": "MDQ6VXNlcjQ0NTcwNzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/44570724?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nlee0212",
"html_url": "https://github.com/nlee0212",
"followers_url": "https://api.github.com/users/nlee0212/followers",
"following_url": "https://api.github.com/users/nlee0212/following{/other_user}",
"gists_url": "https://api.github.com/users/nlee0212/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nlee0212/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nlee0212/subscriptions",
"organizations_url": "https://api.github.com/users/nlee0212/orgs",
"repos_url": "https://api.github.com/users/nlee0212/repos",
"events_url": "https://api.github.com/users/nlee0212/events{/privacy}",
"received_events_url": "https://api.github.com/users/nlee0212/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [] | 1,619,164,263,000 | 1,619,164,263,000 | null | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | command:
python3 run_qa.py --model_name_or_path hyunwoongko/kobart --dataset_name squad_kor_v2 --do_train --do_eval --per_device_train_batch_size 8 --learning_rate 3e-5 --num_train_epochs 3 --max_seq_length 512 --doc_stride 128 --output_dir /tmp/debug_squad/
error:
ValueError: External features info don't match the dataset:
Got
{'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answer': {'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None), 'html_answer_start': Value(dtype='int32', id=None)}, 'url': Value(dtype='string', id=None), 'raw_html': Value(dtype='string', id=None)}
with type
struct<answer: struct<text: string, answer_start: int32, html_answer_start: int32>, context: string, id: string, question: string, raw_html: string, title: string, url: string>
but expected something like
{'answer': {'answer_start': Value(dtype='int32', id=None), 'html_answer_start': Value(dtype='int32', id=None), 'text': Value(dtype='string', id=None)}, 'context': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'raw_html': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None)}
with type
struct<answer: struct<answer_start: int32, html_answer_start: int32, text: string>, context: string, id: string, question: string, raw_html: string, title: string, url: string>
I didn't encounter this error 4 hours ago. Any solutions for this kind of issue?
It looks like the obtained dataset format matches the 'Data Fields' section, while the expected one matches 'Data Instances'. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2251/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2250 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2250/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2250/comments | https://api.github.com/repos/huggingface/datasets/issues/2250/events | https://github.com/huggingface/datasets/issues/2250 | 865,402,449 | MDU6SXNzdWU4NjU0MDI0NDk= | 2,250 | some issue in loading local txt file as Dataset for run_mlm.py | {
"login": "alighofrani95",
"id": 14968123,
"node_id": "MDQ6VXNlcjE0OTY4MTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/14968123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alighofrani95",
"html_url": "https://github.com/alighofrani95",
"followers_url": "https://api.github.com/users/alighofrani95/followers",
"following_url": "https://api.github.com/users/alighofrani95/following{/other_user}",
"gists_url": "https://api.github.com/users/alighofrani95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alighofrani95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alighofrani95/subscriptions",
"organizations_url": "https://api.github.com/users/alighofrani95/orgs",
"repos_url": "https://api.github.com/users/alighofrani95/repos",
"events_url": "https://api.github.com/users/alighofrani95/events{/privacy}",
"received_events_url": "https://api.github.com/users/alighofrani95/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"Hi,\r\n\r\n1. try\r\n ```python\r\n dataset = load_dataset(\"text\", data_files={\"train\": [\"a1.txt\", \"b1.txt\"], \"test\": [\"c1.txt\"]})\r\n ```\r\n instead.\r\n\r\n Sadly, I can't reproduce the error on my machine. If the above code doesn't resolve the issue, try to update the library to the \r\n newest version (`pip install datasets --upgrade`).\r\n\r\n2. https://github.com/huggingface/transformers/blob/3ed5e97ba04ce9b24b4a7161ea74572598a4c480/examples/pytorch/language-modeling/run_mlm.py#L258-L259\r\nThis is the original code. You'll have to modify the example source to work with multiple train files. To make it easier, let's say \"|\" will act as a delimiter between files:\r\n ```python\r\n if data_args.train_file is not None:\r\n data_files[\"train\"] = data_args.train_file.split(\"|\") # + .split(\"|\")\r\n ```\r\n Then call the script as follows (**dataset_name must be None**):\r\n ```bash\r\n python run_mlm.py [... other args] --train_file a1.txt|b1.txt\r\n ```",
"i meet the same error with datasets 1.11.0, is there any insight about this?"
] | 1,619,120,353,000 | 1,629,258,552,000 | null | NONE | null | null | {
"url": "",
"html_url": "",
"diff_url": "",
"patch_url": "",
"merged_at": 0
} | 
First of all, I tried to load 3 .txt files as a dataset (the directory and permissions are definitely OK), but I run into the error below.
> FileNotFoundError: [Errno 2] No such file or directory: 'c'
Removing one of the training .txt files fixes it, and it also works if I put all the files in the training split.


After this, my question is how I could use this Dataset with run_mlm.py for pretraining from scratch.
With --train_file path_to_train_file, only one .txt, .csv or .json file can be used. I tried to set my defined Dataset as --dataset_name, but the issue below occurs.
> Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 336, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/dataset/dataset.py
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
File "run_mlm.py", line 486, in <module>
main()
File "run_mlm.py", line 242, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir)
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 719, in load_dataset
use_auth_token=use_auth_token,
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 347, in prepare_module
combined_path, github_file_path
FileNotFoundError: Couldn't find file locally at dataset/dataset.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.6.0/datasets/dataset/dataset.py.
The file is also not present on the master branch on github.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2250/timeline | null | false |
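A side note on the `FileNotFoundError: ... 'c'` above: the single-character path suggests a bare string was iterated character by character somewhere, 'c' plausibly being the first character of a file name like 'c1.txt'. A minimal sketch of the list-based form from the first comment (file names are placeholders), which sidesteps that behavior by always passing lists:

```python
from datasets import load_dataset

# Wrap every split's files in a list, even when a split has a single file,
# so the loader iterates over file paths rather than characters.
dataset = load_dataset(
    "text",
    data_files={
        "train": ["a1.txt", "b1.txt"],
        "test": ["c1.txt"],
    },
)
print(dataset)
```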
https://api.github.com/repos/huggingface/datasets/issues/2249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2249/comments | https://api.github.com/repos/huggingface/datasets/issues/2249/events | https://github.com/huggingface/datasets/pull/2249 | 865,257,826 | MDExOlB1bGxSZXF1ZXN0NjIxMzU1MzE3 | 2,249 | Allow downloading/processing/caching only specific splits | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/8",
"html_url": "https://github.com/huggingface/datasets/milestone/8",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels",
"id": 6968069,
"node_id": "MI_kwDODunzps4AalMF",
"number": 8,
"title": "1.12",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 4,
"closed_issues": 2,
"state": "open",
"created_at": 1626881696000,
"updated_at": 1634120793000,
"due_on": 1630306800000,
"closed_at": null
} | [
"> If you pass a dictionary like this:\r\n> \r\n> ```\r\n> {\"main_metadata\": url_to_main_data,\r\n> \"secondary_metadata\": url_to_sec_data,\r\n> \"train\": url_train_data,\r\n> \"test\": url_test_data}\r\n> ```\r\n> \r\n> then only the train or test keys will be kept, which I feel not intuitive.\r\n> \r\n> For example if the users asks to load the \"train\" split, then the main and secondary metadata won't be downloaded.\r\n> You can fix that by keeping all the keys except the splits to ignore\r\n\r\nHi @lhoestq, I have been thinking about this and I think it is worth that we discuss about it.\r\n\r\nWhen I created this PR, my first idea was to create a \"hack\" inside the download manager that will be able to filter some split(s) without touching any dataset script. Of course, the download manager does not know about splits logic, and thus this trick would only work for some very specific datasets: only the ones containing that pass a dict to the download manager containing only the keys \"train\", \"validation\", \"test\" (or the one passed by the user for advanced users knowing they can do it), e.g. the `natural_questions` dataset (which was one of the targets).\r\n\r\nThe big inconvenient of this approach is that it is not applicable to many datasets (or worse, it should be constantly tweaked to cope with exceptional cases). One exceptional case is the one you pointed out. But I see others:\r\n- the split keys can be different: train, test, dev, val, validation, eval,...\r\n- in `hope_edi` dataset, the split keys are: TRAIN_DOWNLOAD_URL, VALIDATION_DOWNLOAD_URL\r\n- in `few_rel` dataset, the split keys are: train_wiki, val_nyt, val_pubmed,..., pid2name\r\n- in `curiosity_dialogs`, the split keys are: train, val, test, test_zero; this means that for every split we pass, we will also get test_zero\r\n- in `deal_or_no_dialog`, each of the splits URL is passed separately to the download manager, so all splits would be always downloaded\r\n- etc.\r\n\r\nThen after discussing, another idea emerged: pass a `split` parameter to `_split_generators`, which know about the splits logic, so that it can handle which splits are passed to the download manager. This approach is more accurate and can be tweaked so that it works with all the datasets we want. The only inconvenient is that then for every target dataset, we must modify its corresponding `_split_generators` script method.\r\n\r\nMy point is that I don't think it is a good idea to implement both approaches. They could even interfere with each other! \r\n\r\nIf you agree, I would implement ONLY the second one, which is simpler, more consistent and stable and will avoid future problems.",
"Hi @albertvillanova !\r\nYup I agree with you, implementing the 2nd approach seems to be the right solution"
] | 1,619,113,904,000 | 1,630,560,811,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2249",
"html_url": "https://github.com/huggingface/datasets/pull/2249",
"diff_url": "https://github.com/huggingface/datasets/pull/2249.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2249.patch",
"merged_at": null
} | Allow downloading/processing/caching only specific splits without downloading/processing/caching the other splits.
This PR implements two steps to handle only specific splits:
- it allows processing/caching only specific splits into Arrow files
- for some simple cases, it allows downloading only specific splits (which is more intricate as it depends on the user-defined method `_split_generators`)
This PR makes several assumptions:
- `DownloadConfig` contains the configuration settings for downloading
- the parameter `split` passed to `load_dataset` is just a parameter for loading (from cache), not for downloading | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2249/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2249/timeline | null | true |
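For context, in released versions at the time of this PR the `split` argument only selects which split is returned after the full download and processing step; limiting the download itself is exactly what the PR proposes. A minimal illustration of the existing behavior:

```python
from datasets import load_dataset

# Downloads and processes all of SQuAD, then returns only the train split.
train_only = load_dataset("squad", split="train")
print(train_only)
```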
https://api.github.com/repos/huggingface/datasets/issues/2248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2248/comments | https://api.github.com/repos/huggingface/datasets/issues/2248/events | https://github.com/huggingface/datasets/pull/2248 | 864,853,447 | MDExOlB1bGxSZXF1ZXN0NjIxMDEyNzg5 | 2,248 | Implement Dataset to JSON | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/3",
"html_url": "https://github.com/huggingface/datasets/milestone/3",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels",
"id": 6644287,
"node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==",
"number": 3,
"title": "1.7",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 3,
"state": "closed",
"created_at": 1617974191000,
"updated_at": 1622478053000,
"due_on": 1620975600000,
"closed_at": 1622478053000
} | [] | 1,619,092,011,000 | 1,619,537,361,000 | 1,619,537,360,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2248",
"html_url": "https://github.com/huggingface/datasets/pull/2248",
"diff_url": "https://github.com/huggingface/datasets/pull/2248.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2248.patch",
"merged_at": 1619537360000
} | Implement `Dataset.to_json`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2248/timeline | null | true |
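A minimal usage sketch of the method this PR adds (the output file name is arbitrary); the default output is JSON Lines, one record per line, implemented on top of pandas' `to_json`:

```python
from datasets import load_dataset

ds = load_dataset("squad", split="validation")
ds.to_json("squad_validation.jsonl")  # one JSON record per line by default
```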
https://api.github.com/repos/huggingface/datasets/issues/2247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2247/comments | https://api.github.com/repos/huggingface/datasets/issues/2247/events | https://github.com/huggingface/datasets/pull/2247 | 864,817,520 | MDExOlB1bGxSZXF1ZXN0NjIwOTgzNzY3 | 2,247 | Implement Dataset from Parquet | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/7",
"html_url": "https://github.com/huggingface/datasets/milestone/7",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/7/labels",
"id": 6931350,
"node_id": "MDk6TWlsZXN0b25lNjkzMTM1MA==",
"number": 7,
"title": "1.11",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 2,
"state": "closed",
"created_at": 1625809740000,
"updated_at": 1630560843000,
"due_on": 1627628400000,
"closed_at": 1630560843000
} | [
"Hi @albertvillanova , I'll implement the parquet builder as an ArrowBasedBuilder if you don't mind",
"closing in favor of #2537 that is already merged"
] | 1,619,089,298,000 | 1,627,306,132,000 | 1,627,306,131,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2247",
"html_url": "https://github.com/huggingface/datasets/pull/2247",
"diff_url": "https://github.com/huggingface/datasets/pull/2247.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2247.patch",
"merged_at": null
} | Implement instantiation of Dataset from Parquet file. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2247/timeline | null | true |
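Although this particular PR was closed in favor of #2537, loading Parquet files is possible through the packaged `parquet` builder (and, in later releases, also via a `Dataset.from_parquet` constructor). A short sketch with a placeholder file path:

```python
from datasets import load_dataset

ds = load_dataset(
    "parquet",
    data_files={"train": "data/train.parquet"},  # placeholder path
    split="train",
)
print(ds.features)
```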
https://api.github.com/repos/huggingface/datasets/issues/2246 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2246/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2246/comments | https://api.github.com/repos/huggingface/datasets/issues/2246/events | https://github.com/huggingface/datasets/pull/2246 | 864,220,031 | MDExOlB1bGxSZXF1ZXN0NjIwNDg3OTUw | 2,246 | Faster map w/ input_columns & faster slicing w/ Iterable keys | {
"login": "norabelrose",
"id": 39116809,
"node_id": "MDQ6VXNlcjM5MTE2ODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/norabelrose",
"html_url": "https://github.com/norabelrose",
"followers_url": "https://api.github.com/users/norabelrose/followers",
"following_url": "https://api.github.com/users/norabelrose/following{/other_user}",
"gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}",
"starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions",
"organizations_url": "https://api.github.com/users/norabelrose/orgs",
"repos_url": "https://api.github.com/users/norabelrose/repos",
"events_url": "https://api.github.com/users/norabelrose/events{/privacy}",
"received_events_url": "https://api.github.com/users/norabelrose/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@lhoestq Just fixed the code style issues— I think it should be good to merge now :)"
] | 1,619,034,547,000 | 1,619,453,639,000 | 1,619,453,639,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2246",
"html_url": "https://github.com/huggingface/datasets/pull/2246",
"diff_url": "https://github.com/huggingface/datasets/pull/2246.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2246.patch",
"merged_at": 1619453638000
} | @lhoestq Fixes #2193
- `map` now uses `with_format` to only load needed columns in memory when `input_columns` is set
- Slicing datasets with Iterables of indices now uses a new `Table.fast_gather` method, implemented with `np.searchsorted`, to find the appropriate batch indices all at once. `pa.concat_tables` is no longer used for this; we just call `pa.Table.from_batches` with a list of all the batch slices.
Together these changes have sped up batched `map()` calls over subsets of columns quite considerably in my initial testing. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2246/timeline | null | true |
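A small usage sketch of the `input_columns` path this PR speeds up: with `input_columns`, the mapped function receives only the listed columns as positional arguments (lists of values when `batched=True`), which is exactly the case where loading just those columns pays off. Column names follow SQuAD and are purely illustrative:

```python
from datasets import load_dataset

ds = load_dataset("squad", split="train")

# Only the "question" column is passed to the function (and, with this PR,
# only that column needs to be loaded for the map call).
ds = ds.map(
    lambda questions: {"question_len": [len(q) for q in questions]},
    input_columns=["question"],
    batched=True,
)
print(ds[0]["question_len"])
```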
https://api.github.com/repos/huggingface/datasets/issues/2245 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2245/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2245/comments | https://api.github.com/repos/huggingface/datasets/issues/2245/events | https://github.com/huggingface/datasets/pull/2245 | 863,191,655 | MDExOlB1bGxSZXF1ZXN0NjE5NjQzMjQ3 | 2,245 | Add `key` type and duplicates verification with hashing | {
"login": "NikhilBartwal",
"id": 42388668,
"node_id": "MDQ6VXNlcjQyMzg4NjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NikhilBartwal",
"html_url": "https://github.com/NikhilBartwal",
"followers_url": "https://api.github.com/users/NikhilBartwal/followers",
"following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}",
"gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions",
"organizations_url": "https://api.github.com/users/NikhilBartwal/orgs",
"repos_url": "https://api.github.com/users/NikhilBartwal/repos",
"events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}",
"received_events_url": "https://api.github.com/users/NikhilBartwal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "",
"html_url": "",
"labels_url": "",
"id": 0,
"node_id": "",
"number": 0,
"title": "",
"description": "",
"creator": {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 0,
"state": "",
"created_at": 0,
"updated_at": 0,
"due_on": 0,
"closed_at": 0
} | [
"@lhoestq The tests for key type and duplicate keys have been added and verified successfully.\r\nAfter generating with an intentionally faulty `mnist` script, when there is an incompatible key type, it shows:\r\n\r\n```\r\nDownloading and preparing dataset mnist/mnist (download: 11.06 MiB, generated: 60.62 MiB, post-processed: Unknown size, total: 71.67 MiB) to C:\\Users\\nikhil\\.cache\\huggingface\\datasets\\mnist\\mnist\\1.0.0\\5064c25e57a1678f700d2dc798ffe8a6d519405cca7d33670fffda477857a994...\r\n0 examples [00:00, ? examples/s]2021-04-26 02:50:03.703836: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll\r\n\r\nFAILURE TO GENERATE DATASET: Invalid key type detected\r\nFound Key [0, 0] of type <class 'list'>\r\nKeys should be either str, int or bytes type\r\n```\r\n\r\nIn the case of duplicate keys, it now gives:\r\n\r\n```\r\nDownloading and preparing dataset mnist/mnist (download: 11.06 MiB, generated: 60.62 MiB, post-processed: Unknown size, total: 71.67 MiB) to C:\\Users\\nikhil\\.cache\\huggingface\\datasets\\mnist\\mnist\\1.0.0\\5064c25e57a1678f700d2dc798ffe8a6d519405cca7d33670fffda477857a994...\r\n0 examples [00:00, ? examples/s]2021-04-26 02:53:13.498579: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\load.py\", line 746, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\builder.py\", line 587, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\builder.py\", line 665, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\builder.py\", line 1002, in _prepare_split\r\n writer.write(example, key)\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\arrow_writer.py\", line 321, in write\r\n self.check_duplicates()\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\arrow_writer.py\", line 331, in check_duplicates\r\n raise DuplicatedKeysError(key)\r\ndatasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 234467\r\nKeys should be unique and deterministic in nature\r\n```\r\nPlease let me know if this is what we wanted to implement. Thanks!",
"This looks pretty cool !\r\nWe can make focus on the GeneratorBasedBuilder for now yes.\r\n\r\nDo you think we could make the ArrowWriter not look for duplicates by default ?\r\nThis way we can just enable duplicate detections when instantiating the writer in the GeneratorBasedBuilder for now.",
"Thank you @lhoestq\r\n\r\n\r\n\r\n> Do you think we could make the ArrowWriter not look for duplicates by default ?\r\n\r\nWe can definitely do that by including a `check_duplicates` argument while instantiating `ArrowWriter()`. \r\n\r\nHowever, since only `GeneratorBasedBuilder` uses the `write()` function (which includes the detection code) and the others like `ArrowBasedBuilder` use `write_table()` which remains as it was (without duplicate detection). I don't think it would be necessary.\r\n\r\nNonetheless, doing this would require just some small changes. Please let me know your thoughts on this. Thanks!",
"I like the idea of having the duplicate detection optional for other uses of the ArrowWriter.\r\nThis class is the main tool to write python data in arrow format so I'd expect it to be flexible.\r\nThat's why I think by default it shouldn't require users to provide keys or do any duplicates detection.\r\n\r\nAn alternative would be to subclass the writer to include duplicates detection in another class.\r\n\r\nBoth options are fine for me, let me know what you think !",
"> This class is the main tool to write python data in arrow format so I'd expect it to be flexible.\r\n> That's why I think by default it shouldn't require users to provide keys or do any duplicates detection.\r\n\r\nWell, that makes sense as the writer can indeed be used for other purposes as well.\r\n\r\n> We can definitely do that by including a `check_duplicates` argument while instantiating `ArrowWriter()`.\r\n\r\nI think that this would be the simplest and the more efficient option for achieving this as subclassing the writer only for this would lead to unnecessary complexity and code duplication (in case of `writer()`). \r\n\r\nI will be adding the changes soon. Thanks for the feedback @lhoestq!",
"@lhoestq I have pushed the final changes just now. \r\nNow, the keys and duplicate checking will be necessary only when the `ArrowWriter` is initialized with `check_duplicates=True` specifically (in this case, for `GeneratorBasedBuilders`)\r\n\r\nLet me know if this is what was required. Thanks!",
"@lhoestq Thanks for the feedback! I will be adding the tests for the same very soon. \r\n\r\nHowever, I'm not sure as to what exactly is causing the `segmentation fault` in the failing CI tests. It seems to be something from `test_concatenation_table_cast` from `test_table.py`, but I'm not sure as to what exactly. Would be great if you could help. Thanks!",
"You can merge master into your branch to fix this issue.\r\nBasically pyarrow 4.0.0 has a segfault issue (which has now been resolved on the master branch of pyarrow).\r\nSo until 4.0.1 comes out we changed to using `pyarrow<4.0.0` recently.",
"@lhoestq Thanks for the help with the CI failures. Apologies for the multiple merge commits. My local repo got messy while merging which led to this.\r\nWill be pushing the commit for the tests soon!",
"Hey @lhoestq, I've just added the required tests for checking key duplicates and invalid key data types.\r\nI think we have caught a nice little issue as 27 datasets are currently using non-unique keys (hence, the failing tests: All these datasets are giving `DuplicateKeysError` during testing). \r\nThese datasets were not detected earlier as there was no key checking when `num_examples < writer_batch_size` due to which they passed the dummy data generation test. This bug was fixed by adding the test to `writer.finalize()` method as well for checking any leftover examples from batches. \r\n\r\nI'd like to make changes to the faulty datasets' scripts. However, I was wondering if I should do that in this PR itself or open a new PR as this might get messy in the same PR. Let me know your thoughts on this. Thanks!",
"Hi ! Once https://github.com/huggingface/datasets/pull/2333 is merged, feel free to merge master into your branch to fix the CI :)",
"Thanks a lot for the help @lhoestq. Besides merging the new changes, I guess this PR is completed for now :)",
"I just merged the PR, feel free to merge `master` into your branch. It should fix most most of the CI issues. If there are some left we can fix them in this PR :)",
"@lhoestq Looks like the PR is completed now. Thanks for helping me out so much in this :)",
"Hey @lhoestq, I've added the test and corrected the Cl errors as well. Do let me know if this requires any change. Thanks!",
"Merging. I'll update the comment on the master branch (for some reason I can edit files on this branch)",
"@lhoestq Thank you for the help and feedback. Feels great to contribute!"
] | 1,618,948,999,000 | 1,620,669,877,000 | 1,620,667,882,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2245",
"html_url": "https://github.com/huggingface/datasets/pull/2245",
"diff_url": "https://github.com/huggingface/datasets/pull/2245.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2245.patch",
"merged_at": 1620667881000
} | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hashes
- [x] Add a `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of a certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in the future for `ArrowBasedBuilder`.]
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2245/timeline | null | true |
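For reference, the keys being validated here are the first element of each tuple yielded by a `GeneratorBasedBuilder._generate_examples` method. A stripped-down, hypothetical example of a generator that satisfies the new checks (unique, `str`/`int` keys):

```python
# Hypothetical snippet from a dataset script (inside a GeneratorBasedBuilder subclass).
def _generate_examples(self, filepath):
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            # `idx` is unique and deterministic within the split, so it passes
            # both the key-type check and the duplicate-key check added by this PR.
            yield idx, {"text": line.strip()}
```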
https://api.github.com/repos/huggingface/datasets/issues/2244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2244/comments | https://api.github.com/repos/huggingface/datasets/issues/2244/events | https://github.com/huggingface/datasets/pull/2244 | 863,029,946 | MDExOlB1bGxSZXF1ZXN0NjE5NTAyODc0 | 2,244 | Set specific cache directories per test function call | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "",
"id": 0,
"node_id": "",
"avatar_url": "",
"gravatar_id": "",
"url": "",
"html_url": "",
"followers_url": "",
"following_url": "",
"gists_url": "",
"starred_url": "",
"subscriptions_url": "",
"organizations_url": "",
"repos_url": "",
"events_url": "",
"received_events_url": "",
"type": "",
"site_admin": false
} | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/8",
"html_url": "https://github.com/huggingface/datasets/milestone/8",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels",
"id": 6968069,
"node_id": "MI_kwDODunzps4AalMF",
"number": 8,
"title": "1.12",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 4,
"closed_issues": 2,
"state": "open",
"created_at": 1626881696000,
"updated_at": 1634120793000,
"due_on": 1630306800000,
"closed_at": null
} | [
"@lhoestq, I think this reaches some memory limit on Linux instances... (?)",
"It looks like the `comet` metric test fails because it tries to load a model in memory.\r\nIn the tests I think we have `patch_comet` that mocks the model download + inference. Not sure why it didn't work though.\r\nI can take a look tomorrow (this afternoon is the pytorch ecosystem day)",
"@lhoestq thanks for the hint: I'm going to have a look at that mock... ;)",
"@lhoestq finally I did not find out why the mock is not used... If you can give me some other hint tomorrow..."
] | 1,618,938,382,000 | 1,630,560,811,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2244",
"html_url": "https://github.com/huggingface/datasets/pull/2244",
"diff_url": "https://github.com/huggingface/datasets/pull/2244.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2244.patch",
"merged_at": null
} | Implement specific cache directories (datasets, metrics and modules) per test function call.
Currently, the cache directories are set within the temporary test directory, but they are shared across all test function calls.
This PR implements specific cache directories for each test function call, so that tests are atomic and there are no side effects.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2244/timeline | null | true |
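The PR record above (#2244) is about giving every test function call its own datasets/metrics/modules cache. Below is a minimal pytest sketch of that idea, assuming the caches can be redirected through environment variables; the variable names are assumptions and the actual PR may implement the isolation differently (e.g. via fixtures that patch the config module directly).

```python
# Hypothetical sketch only -- the real PR may set these paths through fixtures or config instead.
import os

import pytest


@pytest.fixture(autouse=True)
def isolated_cache_dirs(tmp_path, monkeypatch):
    # tmp_path is unique per test function call, so every test gets fresh caches
    # and no state leaks between tests.
    monkeypatch.setenv("HF_DATASETS_CACHE", str(tmp_path / "datasets"))
    monkeypatch.setenv("HF_METRICS_CACHE", str(tmp_path / "metrics"))
    monkeypatch.setenv("HF_MODULES_CACHE", str(tmp_path / "modules"))
    # Note: if the library's config module was already imported, it may need to be
    # reloaded (or patched directly) for these values to take effect.
    yield tmp_path


def test_cache_is_isolated(isolated_cache_dirs):
    # Each test sees cache paths that live inside its own temporary directory.
    assert os.environ["HF_DATASETS_CACHE"].startswith(str(isolated_cache_dirs))
```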