url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2239 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2239/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2239/comments | https://api.github.com/repos/huggingface/datasets/issues/2239/events | https://github.com/huggingface/datasets/issues/2239 | 861,904,306 | MDU6SXNzdWU4NjE5MDQzMDY= | 2,239 | Error loading wikihow dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4",
"events_url": "https://api.github.com/users/odellus/events{/privacy}",
"followers_url": "https://api.github.com/users/odellus/followers",
"following_url": "https://api.github.com/users/odellus/following{/other_user}",
"gists_url": "https://api.github.com/users/odellus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/odellus",
"id": 4686956,
"login": "odellus",
"node_id": "MDQ6VXNlcjQ2ODY5NTY=",
"organizations_url": "https://api.github.com/users/odellus/orgs",
"received_events_url": "https://api.github.com/users/odellus/received_events",
"repos_url": "https://api.github.com/users/odellus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/odellus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odellus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/odellus"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | 4 | "2021-04-19T21:02:31Z" | "2021-04-20T16:33:11Z" | "2021-04-20T16:33:11Z" | CONTRIBUTOR | null | null | null | ## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the end of a [full stack trace](https://gist.github.com/odellus/602c3b2de52f541d353b1022f320ffc2).
## Steps to reproduce the bug
I have followed the instructions for creating a wikihow dataset. The [wikihow dataset site](https://huggingface.co/datasets/wikihow) says to use
```python
from datasets import load_dataset
dataset = load_dataset('wikihow')
```
to load the dataset. I do so and I get the message:
```
AssertionError: The dataset wikihow with config all requires manual data.
Please follow the manual download instructions: You need to manually download two wikihow files. An overview of which files to download can be seen at https://github.com/mahnazkoupaee/WikiHow-Dataset.
You need to download the following two files manually:
1) https://ucsb.app.box.com/s/ap23l8gafpezf4tq3wapr6u8241zz358 and save the file under <path/to/folder>/wikihowAll.csv
2) https://ucsb.app.box.com/s/7yq601ijl1lzvlfu4rjdbbxforzd2oag and save the file under <path/to/folder>/wikihowSep.csv
The <path/to/folder> can e.g. be "~/manual_wikihow_data".
Wikihow can then be loaded using the following command `datasets.load_dataset("wikihow", data_dir="<path/to/folder>")`.
.
Manual data can be loaded with `datasets.load_dataset(wikihow, data_dir='<path/to/manual/data>')
```
So I create a directory `./wikihow` and download `wikihowAll.csv` and `wikihowSep.csv` into the new directory.
Then I run
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
That's when I get the [stack trace](https://gist.github.com/odellus/602c3b2de52f541d353b1022f320ffc2).
## Expected results
I expected it to load the downloaded files into a dataset.
## Actual results
```python
Using custom data configuration default-data_dir=.%2Fwikihow
Downloading and preparing dataset wikihow/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/azureuser/.cache/huggingface/datasets/wikihow/default-data_dir=.%2Fwikihow/0.0.0/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2...
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-9-5e4d40142f30> in <module>
----> 1 dataset = load_dataset('wikihow',data_dir='./wikihow')

~/.local/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
    745             try_from_hf_gcs=try_from_hf_gcs,
    746             base_path=base_path,
--> 747             use_auth_token=use_auth_token,
    748         )
    749

~/.local/lib/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
    577         if not downloaded_from_gcs:
    578             self._download_and_prepare(
--> 579                 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    580             )
    581         # Sync info

~/.local/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    632         split_dict = SplitDict(dataset_name=self.name)
    633         split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 634         split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
    635
    636         # Checksums verification

~/.cache/huggingface/modules/datasets_modules/datasets/wikihow/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2/wikihow.py in _split_generators(self, dl_manager)
    132
    133         path_to_manual_file = os.path.join(
--> 134             os.path.abspath(os.path.expanduser(dl_manager.manual_dir)), self.config.filename
    135         )
    136
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
## Versions
Paste the output of the following code:
```python
import datasets
import sys
import platform
print(f"""
- Datasets: {datasets.__version__}
- Python: {sys.version}
- Platform: {platform.platform()}
""")
```
```
- Datasets: 1.5.0
- Python: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
- Platform: Linux-5.4.0-1046-azure-x86_64-with-Ubuntu-18.04-bionic
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2239/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2239/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2238/comments | https://api.github.com/repos/huggingface/datasets/issues/2238/events | https://github.com/huggingface/datasets/pull/2238 | 861,518,291 | MDExOlB1bGxSZXF1ZXN0NjE4MTY5NzM5 | 2,238 | NLU evaluation data | {
"avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4",
"events_url": "https://api.github.com/users/dkajtoch/events{/privacy}",
"followers_url": "https://api.github.com/users/dkajtoch/followers",
"following_url": "https://api.github.com/users/dkajtoch/following{/other_user}",
"gists_url": "https://api.github.com/users/dkajtoch/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dkajtoch",
"id": 32985207,
"login": "dkajtoch",
"node_id": "MDQ6VXNlcjMyOTg1MjA3",
"organizations_url": "https://api.github.com/users/dkajtoch/orgs",
"received_events_url": "https://api.github.com/users/dkajtoch/received_events",
"repos_url": "https://api.github.com/users/dkajtoch/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dkajtoch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkajtoch/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dkajtoch"
} | [] | closed | false | null | [] | null | 0 | "2021-04-19T16:47:20Z" | "2021-04-23T15:32:05Z" | "2021-04-23T15:32:05Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2238.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2238",
"merged_at": "2021-04-23T15:32:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2238.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2238"
} | New intent classification dataset from https://github.com/xliuhw/NLU-Evaluation-Data | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2238/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2238/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2237/comments | https://api.github.com/repos/huggingface/datasets/issues/2237/events | https://github.com/huggingface/datasets/issues/2237 | 861,427,439 | MDU6SXNzdWU4NjE0Mjc0Mzk= | 2,237 | Update Dataset.dataset_size after transformed with map | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 1 | "2021-04-19T15:19:38Z" | "2021-04-20T14:22:05Z" | null | MEMBER | null | null | null | After loading a dataset, if we transform it by using `.map`, its `dataset_size` attribute is not updated. A minimal illustration is sketched below.
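A minimal way to observe this (a sketch; `squad` is just an example dataset):
```python
from datasets import load_dataset

ds = load_dataset("squad", split="train")
print(ds.dataset_size)  # size of the original Arrow data

ds = ds.map(lambda example: {"question_len": len(example["question"])})
print(ds.dataset_size)  # still reports the pre-transform size
```
| {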
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2237/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2237/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2236 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2236/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2236/comments | https://api.github.com/repos/huggingface/datasets/issues/2236/events | https://github.com/huggingface/datasets/issues/2236 | 861,388,145 | MDU6SXNzdWU4NjEzODgxNDU= | 2,236 | Request to add StrategyQA dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sarahwie",
"id": 8027676,
"login": "sarahwie",
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sarahwie"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | [] | null | 0 | "2021-04-19T14:46:26Z" | "2021-04-19T14:46:26Z" | null | NONE | null | null | null | ## Request to add StrategyQA dataset
- **Name:** StrategyQA
- **Description:** open-domain QA [(project page)](https://allenai.org/data/strategyqa)
- **Paper:** [url](https://arxiv.org/pdf/2101.02235.pdf)
- **Data:** [here](https://allenai.org/data/strategyqa)
- **Motivation:** uniquely-formulated dataset that also includes a question-decomposition breakdown and associated Wikipedia annotations for each step. Good for multi-hop reasoning modeling.
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2236/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2236/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2235 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2235/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2235/comments | https://api.github.com/repos/huggingface/datasets/issues/2235/events | https://github.com/huggingface/datasets/pull/2235 | 861,040,716 | MDExOlB1bGxSZXF1ZXN0NjE3Nzc0NDUw | 2,235 | Update README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4",
"events_url": "https://api.github.com/users/PierreColombo/events{/privacy}",
"followers_url": "https://api.github.com/users/PierreColombo/followers",
"following_url": "https://api.github.com/users/PierreColombo/following{/other_user}",
"gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PierreColombo",
"id": 22492839,
"login": "PierreColombo",
"node_id": "MDQ6VXNlcjIyNDkyODM5",
"organizations_url": "https://api.github.com/users/PierreColombo/orgs",
"received_events_url": "https://api.github.com/users/PierreColombo/received_events",
"repos_url": "https://api.github.com/users/PierreColombo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PierreColombo"
} | [] | closed | false | null | [] | null | 0 | "2021-04-19T08:21:02Z" | "2021-04-19T12:49:19Z" | "2021-04-19T12:49:19Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2235.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2235",
"merged_at": "2021-04-19T12:49:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2235.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2235"
} | Adding relevant citations (paper accepted at AAAI 2020 & EMNLP 2020) to the benchmark | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2235/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2235/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2234/comments | https://api.github.com/repos/huggingface/datasets/issues/2234/events | https://github.com/huggingface/datasets/pull/2234 | 860,442,246 | MDExOlB1bGxSZXF1ZXN0NjE3MzI4NDU3 | 2,234 | Fix bash snippet formatting in ADD_NEW_DATASET.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 0 | "2021-04-17T16:01:08Z" | "2021-04-19T10:57:31Z" | "2021-04-19T07:51:36Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2234.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2234",
"merged_at": "2021-04-19T07:51:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2234.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2234"
} | This PR indents the paragraphs around the bash snippets in ADD_NEW_DATASET.md to fix formatting. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2234/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2234/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2233/comments | https://api.github.com/repos/huggingface/datasets/issues/2233/events | https://github.com/huggingface/datasets/pull/2233 | 860,097,084 | MDExOlB1bGxSZXF1ZXN0NjE3MDYwMTkw | 2,233 | Fix `xnli` dataset tuple key | {
"avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4",
"events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}",
"followers_url": "https://api.github.com/users/NikhilBartwal/followers",
"following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}",
"gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NikhilBartwal",
"id": 42388668,
"login": "NikhilBartwal",
"node_id": "MDQ6VXNlcjQyMzg4NjY4",
"organizations_url": "https://api.github.com/users/NikhilBartwal/orgs",
"received_events_url": "https://api.github.com/users/NikhilBartwal/received_events",
"repos_url": "https://api.github.com/users/NikhilBartwal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NikhilBartwal"
} | [] | closed | false | null | [] | null | 0 | "2021-04-16T19:12:42Z" | "2021-04-19T08:56:42Z" | "2021-04-19T08:56:42Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2233.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2233",
"merged_at": "2021-04-19T08:56:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2233.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2233"
} | Closes #2229
The `xnli` dataset yields a tuple key in the case of `ar`, which is inconsistent with the acceptable key types (str/int).
The key was thus converted to `str`, keeping the original information intact. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2233/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2233/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2232/comments | https://api.github.com/repos/huggingface/datasets/issues/2232/events | https://github.com/huggingface/datasets/pull/2232 | 860,075,931 | MDExOlB1bGxSZXF1ZXN0NjE3MDQyNTI4 | 2,232 | Start filling GLUE dataset card | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 2 | "2021-04-16T18:37:37Z" | "2021-04-21T09:33:09Z" | "2021-04-21T09:33:08Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2232.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2232",
"merged_at": "2021-04-21T09:33:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2232.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2232"
} | The dataset card was pretty much empty.
I added the descriptions (mainly from TFDS, since the script is the same), and I also added the task tags as well as examples for a subset of the tasks.
cc @sgugger | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2232/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2232/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2231/comments | https://api.github.com/repos/huggingface/datasets/issues/2231/events | https://github.com/huggingface/datasets/pull/2231 | 859,850,488 | MDExOlB1bGxSZXF1ZXN0NjE2ODYyNTEx | 2,231 | Fix map when removing columns on a formatted dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-04-16T14:08:55Z" | "2021-04-16T15:10:05Z" | "2021-04-16T15:10:04Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2231.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2231",
"merged_at": "2021-04-16T15:10:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2231.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2231"
This should fix issue #2226.
The `remove_columns` argument was ignored on formatted datasets; a small repro of the scenario is below.
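A small repro of the scenario this fixes (my own sketch, not code from the PR):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": [4, 5, 6]})
ds.set_format("numpy")  # the dataset is now "formatted"

# Before this fix, remove_columns was silently ignored here
out = ds.map(lambda x: {"c": int(x["a"]) + 1}, remove_columns=ds.column_names)
print(out.column_names)  # expected: ['c']
```
| {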
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2231/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2231/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2230/comments | https://api.github.com/repos/huggingface/datasets/issues/2230/events | https://github.com/huggingface/datasets/issues/2230 | 859,817,159 | MDU6SXNzdWU4NTk4MTcxNTk= | 2,230 | Keys yielded while generating dataset are not being checked | {
"avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4",
"events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}",
"followers_url": "https://api.github.com/users/NikhilBartwal/followers",
"following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}",
"gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NikhilBartwal",
"id": 42388668,
"login": "NikhilBartwal",
"node_id": "MDQ6VXNlcjQyMzg4NjY4",
"organizations_url": "https://api.github.com/users/NikhilBartwal/orgs",
"received_events_url": "https://api.github.com/users/NikhilBartwal/received_events",
"repos_url": "https://api.github.com/users/NikhilBartwal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NikhilBartwal"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 9 | "2021-04-16T13:29:47Z" | "2021-05-10T17:31:21Z" | "2021-05-10T17:31:21Z" | CONTRIBUTOR | null | null | null | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e., either `str` or `int`) as well as for their uniqueness.
Currently, the keys are not being checked for any of these, as is evident from the `xnli` dataset generation:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
Even with a tuple as the key, the dataset is generated without any warning.
Also, as tested in the case of the `anli` dataset (I tweaked the dataset script to use `1` as the key for every example):
```
>>> import datasets
>>> nik = datasets.load_dataset('anli')
Downloading and preparing dataset anli/plain_text (download: 17.76 MiB, generated: 73.55 MiB, post-processed: Unknown size, total: 91.31 MiB) to C:\Users\nikhil\.cache\huggingface\datasets\anli\plain_text\0.1.0\43fa2c99c10bf8478f1fa0860f7b122c6b277c4c41306255b7641257cf4e3299...
0 examples [00:00, ? examples/s]1 {'uid': '0fd0abfb-659e-4453-b196-c3a64d2d8267', 'premise': 'The Parma trolleybus system (Italian: "Rete filoviaria di Parma" ) forms part of the public transport network of the city and "comune" of Parma, in the region of Emilia-Romagna, northern Italy. In operation since 1953, the system presently comprises four urban routes.', 'hypothesis': 'The trolleybus system has over 2 urban routes', 'label': 'entailment', 'reason': ''}
2021-04-16 12:38:14.483968: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
1 examples [00:01, 1.87s/ examples]1 {'uid': '7ed72ff4-40b7-4f8a-b1b9-6c612aa62c84', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 β 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Sharron Macready was a popular character through the 1980's.", 'label': 'neutral', 'reason': ''}
1 {'uid': '5d2930a3-62ac-485d-94d7-4e36cbbcd7b5', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 β 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Bastedo didn't keep any pets because of her views on animal rights.", 'label': 'neutral', 'reason': ''}
1 {'uid': '324db753-ddc9-4a85-a825-f09e2e5aebdd', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 β 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Alexandra Bastedo was named by her mother.', 'label': 'neutral', 'reason': ''}
1 {'uid': '4874f429-da0e-406a-90c7-22240ff3ddf8', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 β 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Bastedo cared for all the animals that inhabit the earth.', 'label': 'neutral', 'reason': ''}
```
Here also, the dataset was generated successfully, even though it had duplicate keys, without any warning.
The reason appears to stem from here:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L988
Here, although it has access to every key, the key is not checked, and the example is written directly:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L992
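Roughly the kind of check I have in mind (a sketch with hypothetical names, not the actual `builder.py` internals):
```python
def write_examples_with_key_checks(generator, writer):
    """Validate that each yielded key is a str/int and unique before writing."""
    seen_keys = set()
    for key, example in generator:
        if not isinstance(key, (str, int)):
            raise TypeError(f"Key {key!r} must be of type str or int, got {type(key).__name__}")
        if key in seen_keys:
            raise ValueError(f"Duplicate key found while generating the dataset: {key!r}")
        seen_keys.add(key)
        writer.write(example)
```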
I would like to take this issue if you allow me. Thank you! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2230/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2230/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2229/comments | https://api.github.com/repos/huggingface/datasets/issues/2229/events | https://github.com/huggingface/datasets/issues/2229 | 859,810,602 | MDU6SXNzdWU4NTk4MTA2MDI= | 2,229 | `xnli` dataset creating a tuple key while yielding instead of `str` or `int` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4",
"events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}",
"followers_url": "https://api.github.com/users/NikhilBartwal/followers",
"following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}",
"gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NikhilBartwal",
"id": 42388668,
"login": "NikhilBartwal",
"node_id": "MDQ6VXNlcjQyMzg4NjY4",
"organizations_url": "https://api.github.com/users/NikhilBartwal/orgs",
"received_events_url": "https://api.github.com/users/NikhilBartwal/received_events",
"repos_url": "https://api.github.com/users/NikhilBartwal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NikhilBartwal"
} | [] | closed | false | null | [] | null | 2 | "2021-04-16T13:21:53Z" | "2021-04-19T08:56:42Z" | "2021-04-19T08:56:42Z" | CONTRIBUTOR | null | null | null | When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code when yielding examples, which produces a tuple key instead of the specified `str` or `int` key:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
Since community datasets in TensorFlow Datasets also use HF datasets, this causes a tuple key error while loading HF's `xnli` dataset.
I'm up for sending a fix for this; I think we can simply use `file_idx + "_" + row_idx` as a unique key instead of a tuple. A sketch of the idea is below.
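For illustration (a hypothetical generator showing only the key construction; note the indices need an explicit string conversion in Python):
```python
def generate_examples(files):
    """Sketch: build a unique str key from the file and row indices."""
    for file_idx, rows in enumerate(files):
        for row_idx, example in enumerate(rows):
            # instead of: yield (file_idx, row_idx), example
            yield f"{file_idx}_{row_idx}", example
```
| {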
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2229/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2229/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2228 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2228/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2228/comments | https://api.github.com/repos/huggingface/datasets/issues/2228/events | https://github.com/huggingface/datasets/pull/2228 | 859,795,563 | MDExOlB1bGxSZXF1ZXN0NjE2ODE2MTQz | 2,228 | [WIP] Add ArrayXD support for fixed size list. | {
"avatar_url": "https://avatars.githubusercontent.com/u/22685854?v=4",
"events_url": "https://api.github.com/users/jblemoine/events{/privacy}",
"followers_url": "https://api.github.com/users/jblemoine/followers",
"following_url": "https://api.github.com/users/jblemoine/following{/other_user}",
"gists_url": "https://api.github.com/users/jblemoine/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jblemoine",
"id": 22685854,
"login": "jblemoine",
"node_id": "MDQ6VXNlcjIyNjg1ODU0",
"organizations_url": "https://api.github.com/users/jblemoine/orgs",
"received_events_url": "https://api.github.com/users/jblemoine/received_events",
"repos_url": "https://api.github.com/users/jblemoine/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jblemoine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jblemoine/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jblemoine"
} | [] | open | false | null | [] | null | 1 | "2021-04-16T13:04:08Z" | "2022-07-06T15:19:48Z" | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2228.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2228",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2228.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2228"
} | Add support for fixed-size lists for ArrayXD when the shape is known. See https://github.com/huggingface/datasets/issues/2146
Since offsets are not stored anymore, the file size is now roughly equal to the actual data size. The difference is illustrated below.
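For context, a small PyArrow sketch of the difference (my own illustration, not code from this PR):
```python
import pyarrow as pa

# Variable-size lists store int32 offsets alongside the flat values
var_type = pa.list_(pa.float32())

# Fixed-size lists store only the flat values plus the known length
fixed_type = pa.list_(pa.float32(), 3)

arr = pa.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], type=fixed_type)
print(arr.type)  # fixed_size_list<item: float>[3]
```
| {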
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2228/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2228/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2227/comments | https://api.github.com/repos/huggingface/datasets/issues/2227/events | https://github.com/huggingface/datasets/pull/2227 | 859,771,526 | MDExOlB1bGxSZXF1ZXN0NjE2Nzk1NjMx | 2,227 | Use update_metadata_with_features decorator in class_encode_column method | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
} | [] | closed | false | null | [] | null | 0 | "2021-04-16T12:31:41Z" | "2021-04-16T13:49:40Z" | "2021-04-16T13:49:39Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2227.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2227",
"merged_at": "2021-04-16T13:49:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2227.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2227"
} | Following @mariosasko's comment. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2227/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2227/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2226/comments | https://api.github.com/repos/huggingface/datasets/issues/2226/events | https://github.com/huggingface/datasets/issues/2226 | 859,720,302 | MDU6SXNzdWU4NTk3MjAzMDI= | 2,226 | Batched map fails when removing all columns | {
"avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4",
"events_url": "https://api.github.com/users/villmow/events{/privacy}",
"followers_url": "https://api.github.com/users/villmow/followers",
"following_url": "https://api.github.com/users/villmow/following{/other_user}",
"gists_url": "https://api.github.com/users/villmow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/villmow",
"id": 2743060,
"login": "villmow",
"node_id": "MDQ6VXNlcjI3NDMwNjA=",
"organizations_url": "https://api.github.com/users/villmow/orgs",
"received_events_url": "https://api.github.com/users/villmow/received_events",
"repos_url": "https://api.github.com/users/villmow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/villmow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/villmow"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 3 | "2021-04-16T11:17:01Z" | "2022-10-05T17:32:15Z" | "2022-10-05T17:32:15Z" | NONE | null | null | null | Hi @lhoestq,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
> ```
Here is my code (see the edit below, in which I added a simplified version).
This is the error:
```bash
pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 8964 but got length 1000
```
I wonder why this error occurs when I delete every column. Can you give me a hint?
### Edit:
I preprocessed my dataset before (using map with the features argument) and saved it to disk. Could this be part of the error? I can iterate over the complete dataset and print every sample before calling map. There seems to be no other problem with the dataset.
I tried to simplify the code that crashes:
```python
# works
log.debug(dataset.column_names)
log.debug(dataset)
for i, sample in enumerate(dataset):
log.debug(i, sample)
# crashes
counted_dataset = dataset.map(
lambda x: {"a": list(range(20))},
input_columns=column,
remove_columns=dataset.column_names,
load_from_cache_file=False,
num_proc=num_workers,
batched=True,
)
```
```
pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 20 but got length 1000
```
### Edit 2:
Could this be a problem with the schema I set when preprocessing the dataset before? I tried to add the `features` argument to the function, and then I get a new error:
```python
# crashes
counted_dataset = dataset.map(
lambda x: {"a": list(range(20))},
input_columns=column,
remove_columns=dataset.column_names,
load_from_cache_file=False,
num_proc=num_workers,
batched=True,
features=datasets.Features(
{
"a": datasets.Sequence(datasets.Value("int32"))
}
)
)
```
```
File "env/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1704, in _map_single
writer.write_batch(batch)
File "env/lib/python3.8/site-packages/datasets/arrow_writer.py", line 312, in write_batch
col_type = schema.field(col).type if schema is not None else None
File "pyarrow/types.pxi", line 1341, in pyarrow.lib.Schema.field
KeyError: 'Column tokens does not exist in schema'
```
_Originally posted by @villmow in https://github.com/huggingface/datasets/issues/2193#issuecomment-820230874_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2226/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2226/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2225 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2225/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2225/comments | https://api.github.com/repos/huggingface/datasets/issues/2225/events | https://github.com/huggingface/datasets/pull/2225 | 858,469,561 | MDExOlB1bGxSZXF1ZXN0NjE1NzAzMTY4 | 2,225 | fixed one instance of 'train' to 'test' | {
"avatar_url": "https://avatars.githubusercontent.com/u/46733535?v=4",
"events_url": "https://api.github.com/users/alexwdong/events{/privacy}",
"followers_url": "https://api.github.com/users/alexwdong/followers",
"following_url": "https://api.github.com/users/alexwdong/following{/other_user}",
"gists_url": "https://api.github.com/users/alexwdong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alexwdong",
"id": 46733535,
"login": "alexwdong",
"node_id": "MDQ6VXNlcjQ2NzMzNTM1",
"organizations_url": "https://api.github.com/users/alexwdong/orgs",
"received_events_url": "https://api.github.com/users/alexwdong/received_events",
"repos_url": "https://api.github.com/users/alexwdong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alexwdong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexwdong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alexwdong"
} | [] | closed | false | null | [] | null | 2 | "2021-04-15T04:26:40Z" | "2021-04-15T22:09:50Z" | "2021-04-15T21:19:09Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2225.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2225",
"merged_at": "2021-04-15T21:19:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2225.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2225"
} | I believe this should be 'test' instead of 'train'. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2225/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2225/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2224 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2224/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2224/comments | https://api.github.com/repos/huggingface/datasets/issues/2224/events | https://github.com/huggingface/datasets/issues/2224 | 857,983,361 | MDU6SXNzdWU4NTc5ODMzNjE= | 2,224 | Raise error if Windows max path length is not disabled | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | open | false | null | [] | null | 0 | "2021-04-14T14:57:20Z" | "2021-04-14T14:59:13Z" | null | MEMBER | null | null | null | On startup, raise an error if the Windows max path length limit is not disabled, and ask the user to disable it; a possible check is sketched below.
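One way such a check could look (a sketch; it reads the standard Windows `LongPathsEnabled` registry switch, and the exact wording and placement are illustrative):
```python
import os

def check_windows_long_paths_enabled():
    """Raise on Windows if the MAX_PATH (260 chars) limit is still enforced."""
    if os.name != "nt":
        return
    import winreg
    try:
        key = winreg.OpenKey(
            winreg.HKEY_LOCAL_MACHINE,
            r"SYSTEM\CurrentControlSet\Control\FileSystem",
        )
        enabled, _ = winreg.QueryValueEx(key, "LongPathsEnabled")
    except OSError:
        enabled = 0
    if not enabled:
        raise RuntimeError(
            "Windows max path length (260 characters) is still enforced. "
            "Please set the LongPathsEnabled registry value to 1 and restart."
        )
```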
Linked to discussion in #2220. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2224/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2224/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2223/comments | https://api.github.com/repos/huggingface/datasets/issues/2223/events | https://github.com/huggingface/datasets/pull/2223 | 857,870,800 | MDExOlB1bGxSZXF1ZXN0NjE1MjE4MDIz | 2,223 | Set test cache config | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 5 | "2021-04-14T12:55:24Z" | "2021-04-15T19:11:25Z" | "2021-04-15T19:11:25Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2223.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2223",
"merged_at": "2021-04-15T19:11:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2223.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2223"
} | Currently, running the tests populates the default cache directory `"~/.cache"`.
This PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects. The general pattern is sketched below.
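Something like the following (a sketch, assuming the cache path lives in `datasets.config.HF_DATASETS_CACHE`; not the PR's exact code):
```python
import pytest

@pytest.fixture(autouse=True)
def set_test_cache_config(tmp_path, monkeypatch):
    # Redirect the datasets cache into the per-test temporary directory
    monkeypatch.setattr("datasets.config.HF_DATASETS_CACHE", str(tmp_path / "datasets"))
```
| {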
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2223/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2223/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2222/comments | https://api.github.com/repos/huggingface/datasets/issues/2222/events | https://github.com/huggingface/datasets/pull/2222 | 857,847,231 | MDExOlB1bGxSZXF1ZXN0NjE1MTk5MTM5 | 2,222 | Fix too long WindowsFileLock name | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
}
] | closed | false | null | [] | null | 3 | "2021-04-14T12:26:52Z" | "2021-04-14T15:00:25Z" | "2021-04-14T14:46:19Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2222.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2222",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2222.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2222"
Fix the WindowsFileLock name being longer than the allowed MAX_PATH by shortening the basename; a sketch of the idea is below.
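The general idea (hypothetical names, not the PR's diff): hash an over-long lock basename down to a fixed length.
```python
import hashlib
import os

def shorten_lock_path(lock_path, max_length=255):
    """Replace an over-long lock file basename with a short hash of itself."""
    if len(lock_path) <= max_length:
        return lock_path
    dirname, basename = os.path.split(lock_path)
    digest = hashlib.sha256(basename.encode("utf-8")).hexdigest()[:16]
    return os.path.join(dirname, digest + ".lock")
```
| {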
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2222/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2222/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2221/comments | https://api.github.com/repos/huggingface/datasets/issues/2221/events | https://github.com/huggingface/datasets/pull/2221 | 857,833,770 | MDExOlB1bGxSZXF1ZXN0NjE1MTg4MTE5 | 2,221 | Add SLR70 - SLR80 and SLR86 to OpenSLR dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cahya-wirawan",
"id": 7669893,
"login": "cahya-wirawan",
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cahya-wirawan"
} | [] | closed | false | null | [] | null | 0 | "2021-04-14T12:09:18Z" | "2021-04-14T13:50:19Z" | "2021-04-14T13:50:19Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2221.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2221",
"merged_at": "2021-04-14T13:50:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2221.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2221"
} | I would like to add SLR70, SLR71, SLR72, SLR73, SLR74, SLR75, SLR76, SLR77, SLR78, SLR79, SLR80 and SLR86 to the OpenSLR dataset. The languages are:
Nigerian English, Chilean Spanish, Colombian Spanish, Peruvian Spanish, Puerto Rico Spanish, Venezuelan Spanish, Basque, Galician, Gujarati and Kannada. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2221/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2221/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2220/comments | https://api.github.com/repos/huggingface/datasets/issues/2220/events | https://github.com/huggingface/datasets/pull/2220 | 857,774,626 | MDExOlB1bGxSZXF1ZXN0NjE1MTM4NDQz | 2,220 | Fix infinite loop in WindowsFileLock | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
}
] | closed | false | null | [] | null | 4 | "2021-04-14T10:49:58Z" | "2021-04-14T14:59:50Z" | "2021-04-14T14:59:34Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2220.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2220",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2220.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2220"
} | Raise exception to avoid infinite loop. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2220/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2220/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2219/comments | https://api.github.com/repos/huggingface/datasets/issues/2219/events | https://github.com/huggingface/datasets/pull/2219 | 857,321,242 | MDExOlB1bGxSZXF1ZXN0NjE0NzYxMzA3 | 2,219 | Added CUAD dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | 3 | "2021-04-13T21:05:03Z" | "2021-04-24T14:25:51Z" | "2021-04-16T08:50:44Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2219.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2219",
"merged_at": "2021-04-16T08:50:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2219.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2219"
} | Dataset link: https://github.com/TheAtticusProject/cuad/
Working on README.md currently.
Closes #2084 and [#1](https://github.com/TheAtticusProject/cuad/issues/1). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2219/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2219/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2218 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2218/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2218/comments | https://api.github.com/repos/huggingface/datasets/issues/2218/events | https://github.com/huggingface/datasets/issues/2218 | 857,238,435 | MDU6SXNzdWU4NTcyMzg0MzU= | 2,218 | Duplicates in the LAMA dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7276193?v=4",
"events_url": "https://api.github.com/users/amarasovic/events{/privacy}",
"followers_url": "https://api.github.com/users/amarasovic/followers",
"following_url": "https://api.github.com/users/amarasovic/following{/other_user}",
"gists_url": "https://api.github.com/users/amarasovic/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/amarasovic",
"id": 7276193,
"login": "amarasovic",
"node_id": "MDQ6VXNlcjcyNzYxOTM=",
"organizations_url": "https://api.github.com/users/amarasovic/orgs",
"received_events_url": "https://api.github.com/users/amarasovic/received_events",
"repos_url": "https://api.github.com/users/amarasovic/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/amarasovic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amarasovic/subscriptions",
"type": "User",
"url": "https://api.github.com/users/amarasovic"
} | [] | open | false | null | [] | null | 3 | "2021-04-13T18:59:49Z" | "2021-04-14T21:42:27Z" | null | NONE | null | null | null | I observed duplicates in the LAMA probing dataset; see the minimal example below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c1fe1ec0d6b5eece7bddc)
>>> train_dataset = dataset['train']
>>> train_dataset[0]
{'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi Κyl tΚΙΚy]; 12 March 1815 β 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'}
>>> train_dataset[1]
{'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi Κyl tΚΙΚy]; 12 March 1815 β 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'}
```
I checked the original data available at https://dl.fbaipublicfiles.com/LAMA/data.zip. This particular duplicate comes from:
```
{"uuid": "40b2ed1c-0961-482e-844e-32596b6117c8", "obj_uri": "Q150", "obj_label": "French", "sub_uri": "Q441235", "sub_label": "Louis Jules Trochu", "predicate_id": "P103", "evidences": [{"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}, {"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}]}
```
What is the best way to deal with these duplicates if I want to use `datasets` to probe with LAMA?
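A minimal, non-authoritative sketch of one option. It assumes the (`uuid`, `masked_sentence`) pair identifies a duplicate, which matches the example above but may not hold for every relation:

```python
from datasets import load_dataset

lama = load_dataset("lama", "trex", split="train")

seen = set()

def first_occurrence(example):
    # Drop rows whose (uuid, masked_sentence) pair has already been seen.
    key = (example["uuid"], example["masked_sentence"])
    if key in seen:
        return False
    seen.add(key)
    return True

deduped = lama.filter(first_occurrence)
```
| {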
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2218/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2218/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2217/comments | https://api.github.com/repos/huggingface/datasets/issues/2217/events | https://github.com/huggingface/datasets/pull/2217 | 857,011,314 | MDExOlB1bGxSZXF1ZXN0NjE0NTAxNjIz | 2,217 | Revert breaking change in cache_files property | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-04-13T14:20:04Z" | "2021-04-14T14:24:24Z" | "2021-04-14T14:24:23Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2217.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2217",
"merged_at": "2021-04-14T14:24:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2217.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2217"
} | #2025 changed the format of `Dataset.cache_files`.
Before it was formatted like
```python
[{"filename": "path/to/file.arrow", "start": 0, "end": 1337}]
```
and it was changed to
```python
["path/to/file.arrow"]
```
since start/end offsets aren't available anymore.
To make this less breaking, I'm setting the format back to a list of dicts:
```python
[{"filename": "path/to/file.arrow"}]
```
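For reference, an illustrative access pattern after this revert (the chosen dataset is arbitrary):

```python
from datasets import load_dataset

dataset = load_dataset("glue", "mrpc", split="train")

for cache_file in dataset.cache_files:
    print(cache_file["filename"])  # each entry is a dict again, as before #2025
```
| {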
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2217/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2217/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2216 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2216/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2216/comments | https://api.github.com/repos/huggingface/datasets/issues/2216/events | https://github.com/huggingface/datasets/pull/2216 | 856,955,534 | MDExOlB1bGxSZXF1ZXN0NjE0NDU0MjE1 | 2,216 | added real label for glue/mrpc to test set | {
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/philschmid",
"id": 32632186,
"login": "philschmid",
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"repos_url": "https://api.github.com/users/philschmid/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"type": "User",
"url": "https://api.github.com/users/philschmid"
} | [] | closed | false | null | [] | null | 0 | "2021-04-13T13:20:20Z" | "2021-04-13T13:53:20Z" | "2021-04-13T13:53:19Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2216.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2216",
"merged_at": "2021-04-13T13:53:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2216.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2216"
} | Added real labels to the `glue.py` `mrpc` task for the test split. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2216/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2216/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2215 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2215/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2215/comments | https://api.github.com/repos/huggingface/datasets/issues/2215/events | https://github.com/huggingface/datasets/pull/2215 | 856,716,791 | MDExOlB1bGxSZXF1ZXN0NjE0MjUyNTEy | 2,215 | Add datasets SLR35 and SLR36 to OpenSLR | {
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cahya-wirawan",
"id": 7669893,
"login": "cahya-wirawan",
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cahya-wirawan"
} | [] | closed | false | null | [] | null | 4 | "2021-04-13T08:24:07Z" | "2021-04-13T14:05:14Z" | "2021-04-13T14:05:14Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2215.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2215",
"merged_at": "2021-04-13T14:05:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2215.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2215"
} | I would like to add [SLR35](https://openslr.org/35/) (18GB) and [SLR36](https://openslr.org/36/) (22GB), which are large Javanese and Sundanese ASR training data sets collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2215/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2215/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2214/comments | https://api.github.com/repos/huggingface/datasets/issues/2214/events | https://github.com/huggingface/datasets/issues/2214 | 856,333,657 | MDU6SXNzdWU4NTYzMzM2NTc= | 2,214 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | {
"avatar_url": "https://avatars.githubusercontent.com/u/414788?v=4",
"events_url": "https://api.github.com/users/nsaphra/events{/privacy}",
"followers_url": "https://api.github.com/users/nsaphra/followers",
"following_url": "https://api.github.com/users/nsaphra/following{/other_user}",
"gists_url": "https://api.github.com/users/nsaphra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nsaphra",
"id": 414788,
"login": "nsaphra",
"node_id": "MDQ6VXNlcjQxNDc4OA==",
"organizations_url": "https://api.github.com/users/nsaphra/orgs",
"received_events_url": "https://api.github.com/users/nsaphra/received_events",
"repos_url": "https://api.github.com/users/nsaphra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nsaphra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsaphra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nsaphra"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | 4 | "2021-04-12T20:26:01Z" | "2021-04-23T15:20:02Z" | "2021-04-23T15:20:02Z" | NONE | null | null | null | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 502, in load_metric
File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 66, in import_main_class
File "/ext3/miniconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/ns4008/.cache/huggingface/modules/datasets_modules/metrics/glue/e4606ab9804a36bcd5a9cebb2cb65bb14b6ac78ee9e6d5981fa679a495dd55de/glue.py", line 105, in <module>
@datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
AttributeError: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
```
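A plausible explanation (an assumption, not something confirmed here): the cached `glue` metric script was fetched for a newer release, while the installed datasets 1.2.1 predates `add_start_docstrings` in `datasets.utils.file_utils`. A minimal sketch of the usual remedy:

```python
# Hedged workaround sketch: upgrade first (`pip install --upgrade datasets`) and,
# if the error persists, delete the stale cached script under
# ~/.cache/huggingface/modules/datasets_modules/metrics/glue before re-running.
from datasets import load_metric

metric = load_metric("glue", "sst2")
metric.add_batch(predictions=[0, 1], references=[0, 1])
print(metric.compute())  # e.g. {'accuracy': 1.0}
```
| {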
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2214/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2214/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2213/comments | https://api.github.com/repos/huggingface/datasets/issues/2213/events | https://github.com/huggingface/datasets/pull/2213 | 856,025,320 | MDExOlB1bGxSZXF1ZXN0NjEzNjcwODk2 | 2,213 | Fix lc_quad download checksum | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 0 | "2021-04-12T14:16:59Z" | "2021-04-14T22:04:54Z" | "2021-04-14T13:42:25Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2213.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2213",
"merged_at": "2021-04-14T13:42:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2213.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2213"
} | Fixes #2211 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2213/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2213/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2212/comments | https://api.github.com/repos/huggingface/datasets/issues/2212/events | https://github.com/huggingface/datasets/issues/2212 | 855,999,133 | MDU6SXNzdWU4NTU5OTkxMzM= | 2,212 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4",
"events_url": "https://api.github.com/users/hanss0n/events{/privacy}",
"followers_url": "https://api.github.com/users/hanss0n/followers",
"following_url": "https://api.github.com/users/hanss0n/following{/other_user}",
"gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hanss0n",
"id": 21348833,
"login": "hanss0n",
"node_id": "MDQ6VXNlcjIxMzQ4ODMz",
"organizations_url": "https://api.github.com/users/hanss0n/orgs",
"received_events_url": "https://api.github.com/users/hanss0n/received_events",
"repos_url": "https://api.github.com/users/hanss0n/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hanss0n"
} | [] | closed | false | null | [] | null | 5 | "2021-04-12T13:49:56Z" | "2023-10-03T16:09:19Z" | "2023-10-03T16:09:18Z" | NONE | null | null | null | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, post-processed: Unknown size, total: 9.76 MiB) to /root/.cache/huggingface/datasets/fquad/default/0.1.0/778dc2c85813d05ddd0c17087294d5f8f24820752340958070876b677af9f061...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-48-a2721797e23b> in <module>()
----> 1 fquad = load_dataset("fquad")
11 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
614 raise FileNotFoundError("Couldn't find file at {}".format(url))
615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
--> 616 raise ConnectionError("Couldn't reach {}".format(url))
617
618 # Try a second time
ConnectionError: Couldn't reach https://storage.googleapis.com/illuin/fquad/train.json.zip
```
Does anyone know why that is and how to fix it?
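A quick, purely diagnostic check (illustrative, not a fix) to see whether the hosting is the problem rather than `datasets` itself:

```python
import requests

url = "https://storage.googleapis.com/illuin/fquad/train.json.zip"
response = requests.head(url, timeout=10)
print(response.status_code)  # anything other than 200 suggests the file is no longer publicly hosted
```
| {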
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2212/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2212/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2211/comments | https://api.github.com/repos/huggingface/datasets/issues/2211/events | https://github.com/huggingface/datasets/issues/2211 | 855,988,410 | MDU6SXNzdWU4NTU5ODg0MTA= | 2,211 | Getting checksum error when trying to load lc_quad dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4",
"events_url": "https://api.github.com/users/hanss0n/events{/privacy}",
"followers_url": "https://api.github.com/users/hanss0n/followers",
"following_url": "https://api.github.com/users/hanss0n/following{/other_user}",
"gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hanss0n",
"id": 21348833,
"login": "hanss0n",
"node_id": "MDQ6VXNlcjIxMzQ4ODMz",
"organizations_url": "https://api.github.com/users/hanss0n/orgs",
"received_events_url": "https://api.github.com/users/hanss0n/received_events",
"repos_url": "https://api.github.com/users/hanss0n/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hanss0n"
} | [] | closed | false | null | [] | null | 2 | "2021-04-12T13:38:58Z" | "2021-04-14T13:42:25Z" | "2021-04-14T13:42:25Z" | NONE | null | null | null | I'm having issues loading the [lc_quad](https://huggingface.co/datasets/lc_quad) dataset by running:
```Python
lc_quad = load_dataset("lc_quad")
```
which is giving me the following error:
```
Using custom data configuration default
Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, generated: 19.77 MiB, post-processed: Unknown size, total: 23.46 MiB) to /root/.cache/huggingface/datasets/lc_quad/default/2.0.0/5a98fe174603f5dec6df07edf1c2b4d2317210d2ad61f5a393839bca4d64e5a7...
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-42-404ace83f73c> in <module>()
----> 1 lc_quad = load_dataset("lc_quad")
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/AskNowQA/LC-QuAD2.0/archive/master.zip']
```
Does anyone know why this could be and how to fix it?
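Until the checksum record is fixed upstream, a commonly suggested stopgap (assuming you trust the source archive) is to skip verification; a minimal sketch:

```python
from datasets import load_dataset

lc_quad = load_dataset("lc_quad", ignore_verifications=True)
```
| {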
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2211/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2211/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2210/comments | https://api.github.com/repos/huggingface/datasets/issues/2210/events | https://github.com/huggingface/datasets/issues/2210 | 855,709,400 | MDU6SXNzdWU4NTU3MDk0MDA= | 2,210 | dataloading slow when using HUGE dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hwijeen",
"id": 29157715,
"login": "hwijeen",
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hwijeen"
} | [] | closed | false | null | [] | null | 2 | "2021-04-12T08:33:02Z" | "2021-04-13T02:03:05Z" | "2021-04-13T02:03:05Z" | NONE | null | null | null | Hi,
When I use datasets with 600GB of data, dataloading slows down significantly.
I am experimenting with two datasets: one is about 60GB and the other 600GB.
Simply speaking, my code calls `datasets.Dataset.set_format("torch")` and lets pytorch-lightning handle ddp training.
When looking at the pytorch-lightning profiles of the two runs, I see that fetching a batch (`get_train_batch`) consumes an unreasonable amount of time when the data is large. What could be the cause?
* 60GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 200.33 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 71.994 |1 | 71.994 | 35.937 |
run_training_batch | 0.64373 |100 | 64.373 | 32.133 |
optimizer_step_and_closure_0 | 0.64322 |100 | 64.322 | 32.108 |
training_step_and_backward | 0.61004 |100 | 61.004 | 30.452 |
model_backward | 0.37552 |100 | 37.552 | 18.745 |
model_forward | 0.22813 |100 | 22.813 | 11.387 |
training_step | 0.22759 |100 | 22.759 | 11.361 |
get_train_batch | 0.066385 |100 | 6.6385 | 3.3138 |
```
* 600GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 3285.6 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 1397.9 |1 | 1397.9 | 42.546 |
run_training_batch | 7.2596 |100 | 725.96 | 22.095 |
optimizer_step_and_closure_0 | 7.2589 |100 | 725.89 | 22.093 |
training_step_and_backward | 7.223 |100 | 722.3 | 21.984 |
model_backward | 6.9662 |100 | 696.62 | 21.202 |
get_train_batch | 6.322 |100 | 632.2 | 19.241 |
model_forward | 0.24902 |100 | 24.902 | 0.75789 |
training_step | 0.2485 |100 | 24.85 | 0.75633 |
```
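For reference, a minimal sketch of the setup described above (the path, batch size and worker count are illustrative, not taken from the report):

```python
import datasets
from torch.utils.data import DataLoader

ds = datasets.load_from_disk("/path/to/large_dataset")  # memory-mapped Arrow data
ds.set_format("torch")

loader = DataLoader(ds, batch_size=32, num_workers=4)
batch = next(iter(loader))  # the step that dominates the 600GB profile above
```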
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2210/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2210/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2209/comments | https://api.github.com/repos/huggingface/datasets/issues/2209/events | https://github.com/huggingface/datasets/pull/2209 | 855,638,232 | MDExOlB1bGxSZXF1ZXN0NjEzMzQwMTI2 | 2,209 | Add code of conduct to the project | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | 0 | "2021-04-12T07:16:14Z" | "2021-04-12T17:55:52Z" | "2021-04-12T17:55:52Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2209.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2209",
"merged_at": "2021-04-12T17:55:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2209.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2209"
} | Add code of conduct to the project and link it from README and CONTRIBUTING.
This was already done in `transformers`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2209/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2209/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2208/comments | https://api.github.com/repos/huggingface/datasets/issues/2208/events | https://github.com/huggingface/datasets/pull/2208 | 855,343,835 | MDExOlB1bGxSZXF1ZXN0NjEzMTAxMzMw | 2,208 | Remove Python2 leftovers | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 1 | "2021-04-11T16:08:03Z" | "2021-04-14T22:05:36Z" | "2021-04-14T13:40:51Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2208.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2208",
"merged_at": "2021-04-14T13:40:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2208.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2208"
} | This PR removes Python2 leftovers since this project aims for Python3.6+ (and as of 2020 Python2 is no longer officially supported) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2208/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2208/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2207/comments | https://api.github.com/repos/huggingface/datasets/issues/2207/events | https://github.com/huggingface/datasets/issues/2207 | 855,267,383 | MDU6SXNzdWU4NTUyNjczODM= | 2,207 | making labels consistent across the datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | 2 | "2021-04-11T10:03:56Z" | "2022-06-01T16:23:08Z" | "2022-06-01T16:21:10Z" | NONE | null | null | null | Hi
To access the labels, one can type:
```
>>> a.features['label']
ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None)
```
However, the label names returned this way are sometimes not consistent with the actual labels: for instance, in the case of XNLI, the actual labels are 0, 1, 2, but if one tries to access them as above they appear as entailment, neutral, contradiction.
It would be great to have the labels consistent.
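For reference, the integer-to-string mapping is already exposed on the `ClassLabel` feature itself; a minimal illustration (the dataset and config below are illustrative):

```python
from datasets import load_dataset

a = load_dataset("xnli", "en", split="validation")
label_feature = a.features["label"]

print(label_feature.int2str(0))          # 'entailment'
print(label_feature.str2int("neutral"))  # 1
```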
thanks
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2207/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2207/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2206/comments | https://api.github.com/repos/huggingface/datasets/issues/2206/events | https://github.com/huggingface/datasets/issues/2206 | 855,252,415 | MDU6SXNzdWU4NTUyNTI0MTU= | 2,206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | {
"avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4",
"events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}",
"followers_url": "https://api.github.com/users/yana-xuyan/followers",
"following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}",
"gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yana-xuyan",
"id": 38536635,
"login": "yana-xuyan",
"node_id": "MDQ6VXNlcjM4NTM2NjM1",
"organizations_url": "https://api.github.com/users/yana-xuyan/orgs",
"received_events_url": "https://api.github.com/users/yana-xuyan/received_events",
"repos_url": "https://api.github.com/users/yana-xuyan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yana-xuyan"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | 7 | "2021-04-11T08:40:09Z" | "2021-11-10T12:18:30Z" | "2021-11-10T12:04:28Z" | NONE | null | null | null | I added five more special tokens to the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I get the error shown below:
```
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_single
writer.write(example)
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 296, in write
self.write_on_file()
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 270, in write_on_file
pa_array = pa.array(typed_sequence)
File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 108, in __arrow_array__
out = out.cast(pa.list_(self.optimized_int_type))
File "pyarrow/array.pxi", line 810, in pyarrow.lib.Array.cast
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/pyarrow/compute.py", line 281, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 465, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 294, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Integer value 50259 not in range: -128 to 127
```
Do you have any idea about it?
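A hedged guess at a minimal reproduction (it assumes, without confirmation from the report, that the enlarged token ids ended up in a column such as `token_type_ids`, which `datasets` stores with a compact int8 type):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello world"]})

# GPT-2's base vocab ends at id 50256, so an added special token can get id 50259;
# writing such ids into an int8-optimized column overflows exactly as in the traceback.
ds = ds.map(lambda ex: {"token_type_ids": [50259] * 4})
```
| {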
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2206/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2206/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2205 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2205/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2205/comments | https://api.github.com/repos/huggingface/datasets/issues/2205/events | https://github.com/huggingface/datasets/pull/2205 | 855,207,605 | MDExOlB1bGxSZXF1ZXN0NjEzMDAwMzYw | 2,205 | Updating citation information on LinCE readme | {
"avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4",
"events_url": "https://api.github.com/users/gaguilar/events{/privacy}",
"followers_url": "https://api.github.com/users/gaguilar/followers",
"following_url": "https://api.github.com/users/gaguilar/following{/other_user}",
"gists_url": "https://api.github.com/users/gaguilar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gaguilar",
"id": 5833357,
"login": "gaguilar",
"node_id": "MDQ6VXNlcjU4MzMzNTc=",
"organizations_url": "https://api.github.com/users/gaguilar/orgs",
"received_events_url": "https://api.github.com/users/gaguilar/received_events",
"repos_url": "https://api.github.com/users/gaguilar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gaguilar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaguilar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gaguilar"
} | [] | closed | false | null | [] | null | 0 | "2021-04-11T03:18:05Z" | "2021-04-12T17:53:34Z" | "2021-04-12T17:53:34Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2205.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2205",
"merged_at": "2021-04-12T17:53:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2205.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2205"
} | Hi!
I just updated the citation information in this PR. It had an additional bibtex from one of the datasets used in LinCE and then the LinCE bibtex. I removed the former and added a link that shows the full list of citations for each dataset.
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2205/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2205/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2204 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2204/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2204/comments | https://api.github.com/repos/huggingface/datasets/issues/2204/events | https://github.com/huggingface/datasets/pull/2204 | 855,144,431 | MDExOlB1bGxSZXF1ZXN0NjEyOTU1MzM2 | 2,204 | Add configurable options to `seqeval` metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4",
"events_url": "https://api.github.com/users/marrodion/events{/privacy}",
"followers_url": "https://api.github.com/users/marrodion/followers",
"following_url": "https://api.github.com/users/marrodion/following{/other_user}",
"gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marrodion",
"id": 44571847,
"login": "marrodion",
"node_id": "MDQ6VXNlcjQ0NTcxODQ3",
"organizations_url": "https://api.github.com/users/marrodion/orgs",
"received_events_url": "https://api.github.com/users/marrodion/received_events",
"repos_url": "https://api.github.com/users/marrodion/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marrodion/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marrodion"
} | [] | closed | false | null | [] | null | 0 | "2021-04-10T19:58:19Z" | "2021-04-15T13:49:46Z" | "2021-04-15T13:49:46Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2204.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2204",
"merged_at": "2021-04-15T13:49:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2204.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2204"
} | Fixes #2148
Adds options to use strict mode, different evaluation schemes, sample weights, and adjustable `zero_division` behavior when it is encountered.
`seqeval` provides schemes as objects, hence dynamic import from string, to avoid making the user do the import (thanks to @albertvillanova for the `importlib` idea).
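An illustrative call with the new options (the label sequences are made up):

```python
from datasets import load_metric

seqeval = load_metric("seqeval")
results = seqeval.compute(
    predictions=[["B-PER", "I-PER", "O"]],
    references=[["B-PER", "I-PER", "O"]],
    scheme="IOB2",   # resolved to the seqeval scheme class via importlib
    mode="strict",
    zero_division=0,
)
```
| {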
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2204/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2204/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2203/comments | https://api.github.com/repos/huggingface/datasets/issues/2203/events | https://github.com/huggingface/datasets/pull/2203 | 855,053,595 | MDExOlB1bGxSZXF1ZXN0NjEyODg4MzA5 | 2,203 | updated banking77 train and test data | {
"avatar_url": "https://avatars.githubusercontent.com/u/6765330?v=4",
"events_url": "https://api.github.com/users/hsali/events{/privacy}",
"followers_url": "https://api.github.com/users/hsali/followers",
"following_url": "https://api.github.com/users/hsali/following{/other_user}",
"gists_url": "https://api.github.com/users/hsali/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hsali",
"id": 6765330,
"login": "hsali",
"node_id": "MDQ6VXNlcjY3NjUzMzA=",
"organizations_url": "https://api.github.com/users/hsali/orgs",
"received_events_url": "https://api.github.com/users/hsali/received_events",
"repos_url": "https://api.github.com/users/hsali/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hsali/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hsali/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hsali"
} | [] | closed | false | null | [] | null | 2 | "2021-04-10T12:10:10Z" | "2021-04-23T14:33:39Z" | "2021-04-23T14:33:39Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2203.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2203",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2203.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2203"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2203/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2203/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2202/comments | https://api.github.com/repos/huggingface/datasets/issues/2202/events | https://github.com/huggingface/datasets/pull/2202 | 854,501,109 | MDExOlB1bGxSZXF1ZXN0NjEyNDM2ODMx | 2,202 | Add classes GenerateMode, DownloadConfig and Version to the documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 0 | "2021-04-09T12:58:19Z" | "2021-04-12T17:58:00Z" | "2021-04-12T17:57:59Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2202.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2202",
"merged_at": "2021-04-12T17:57:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2202.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2202"
} | Add documentation for classes `GenerateMode`, `DownloadConfig` and `Version`.
Update the docstring of `load_dataset` to create cross-reference links to the classes.
Related to #2187. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2202/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2202/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2201/comments | https://api.github.com/repos/huggingface/datasets/issues/2201/events | https://github.com/huggingface/datasets/pull/2201 | 854,499,563 | MDExOlB1bGxSZXF1ZXN0NjEyNDM1NTE3 | 2,201 | Fix ArrowWriter overwriting features in ArrowBasedBuilder | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-04-09T12:56:19Z" | "2021-04-12T13:32:17Z" | "2021-04-12T13:32:16Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2201.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2201",
"merged_at": "2021-04-12T13:32:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2201.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2201"
} | This should fix the issues with CSV loading experienced in #2153 and #2200.
The CSV builder is an ArrowBasedBuilder that had an issue with its ArrowWriter used to write the arrow file from the csv data.
The writer wasn't initialized with the features passed by the user. Therefore the writer was inferring the features from the arrow data, discarding the features passed by the user.
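With the fix, passing `features` to the CSV loader should be respected end to end. A minimal sketch of the expected behavior (the file name and feature names are placeholders for illustration):
```python
from datasets import load_dataset, Features, Value, ClassLabel

features = Features({
    "text": Value("string"),
    "label": ClassLabel(names=["neg", "pos"]),
})
# "train.csv" is a placeholder path
dsets = load_dataset("csv", data_files={"train": "train.csv"}, features=features)
assert dsets["train"].features == features  # no longer overwritten by schema inference
```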
I fixed that and updated the tests. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2201/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2201/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2200/comments | https://api.github.com/repos/huggingface/datasets/issues/2200/events | https://github.com/huggingface/datasets/issues/2200 | 854,449,656 | MDU6SXNzdWU4NTQ0NDk2NTY= | 2,200 | _prepare_split will overwrite DatasetBuilder.info.features | {
"avatar_url": "https://avatars.githubusercontent.com/u/4157614?v=4",
"events_url": "https://api.github.com/users/Gforky/events{/privacy}",
"followers_url": "https://api.github.com/users/Gforky/followers",
"following_url": "https://api.github.com/users/Gforky/following{/other_user}",
"gists_url": "https://api.github.com/users/Gforky/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Gforky",
"id": 4157614,
"login": "Gforky",
"node_id": "MDQ6VXNlcjQxNTc2MTQ=",
"organizations_url": "https://api.github.com/users/Gforky/orgs",
"received_events_url": "https://api.github.com/users/Gforky/received_events",
"repos_url": "https://api.github.com/users/Gforky/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Gforky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gforky/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Gforky"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 2 | "2021-04-09T11:47:13Z" | "2021-06-04T10:37:35Z" | "2021-06-04T10:37:35Z" | NONE | null | null | null | Hi, here is my issue:
I initialized a Csv dataset builder with specific features:
```
def get_dataset_features(data_args):
    features = {}
    if data_args.text_features:
        features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")})
    if data_args.num_features:
        features.update({text_feature: hf_features.Value("float32") for text_feature in data_args.num_features.strip().split(",")})
    if data_args.label_classes:
        features["label"] = hf_features.ClassLabel(names=data_args.label_classes.strip().split(","))
    else:
        features["label"] = hf_features.Value("float32")
    return hf_features.Features(features)

datasets = load_dataset(extension,
                        data_files=data_files,
                        sep=data_args.delimiter,
                        header=data_args.header,
                        column_names=data_args.column_names.split(",") if data_args.column_names else None,
                        features=get_dataset_features(data_args=data_args))
```
The `features` are printed out as below before `builder_instance.as_dataset` is called:
```
{'label': ClassLabel(num_classes=2, names=['unacceptable', 'acceptable'], names_file=None, id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)}
```
But after `builder_instance.as_dataset` is called for the Csv dataset builder, the `features` are changed to:
```
{'label': Value(dtype='int64', id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)}
```
After digging into the code, I realized that in `ArrowBasedBuilder._prepare_split`, the DatasetBuilder's info features are overwritten by the `ArrowWriter`'s `_features`.
But the `ArrowWriter` is initialized without passing `features`.
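In the meantime, a workaround I'm considering (a hedged sketch reusing the variables from the snippet above; as far as I understand, `Dataset.cast` re-encodes the columns to the requested feature types):
```python
# re-apply the intended features after loading, until the builder stops overwriting them
features = get_dataset_features(data_args=data_args)
datasets["train"] = datasets["train"].cast(features)
```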
So my concern is:
Is this overwrite necessary, or should there be an option to pass `features` to the `_prepare_split` function? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2200/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2200/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2199/comments | https://api.github.com/repos/huggingface/datasets/issues/2199/events | https://github.com/huggingface/datasets/pull/2199 | 854,417,318 | MDExOlB1bGxSZXF1ZXN0NjEyMzY0ODU3 | 2,199 | Fix backward compatibility in Dataset.load_from_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 3 | "2021-04-09T11:01:10Z" | "2021-04-09T15:57:05Z" | "2021-04-09T15:57:05Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2199.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2199",
"merged_at": "2021-04-09T15:57:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2199.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2199"
} | Fix backward compatibility when loading from disk an old dataset that was saved with indices under the key "_indices_data_files".
Related to #2195. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2199/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2199/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2198/comments | https://api.github.com/repos/huggingface/datasets/issues/2198/events | https://github.com/huggingface/datasets/pull/2198 | 854,357,481 | MDExOlB1bGxSZXF1ZXN0NjEyMzE0MTIz | 2,198 | added file_permission in load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | 1 | "2021-04-09T09:39:06Z" | "2021-04-16T14:11:46Z" | "2021-04-16T14:11:46Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2198.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2198",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2198.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2198"
} | As discussed in #2065, I've added a `file_permission` argument to `load_dataset`.
This adds two main things:
1) The permissions of downloaded datasets, once converted to .arrow files, can be changed with the `file_permission` argument in `load_dataset` (default is 0o644).
2) In case the user later uses `map` to generate another cache file for the dataset, it ensures the permissions of the newly generated file match those of the `*-train.arrow` file inside the cache_dir for that dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2198/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2198/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2197/comments | https://api.github.com/repos/huggingface/datasets/issues/2197/events | https://github.com/huggingface/datasets/pull/2197 | 854,356,559 | MDExOlB1bGxSZXF1ZXN0NjEyMzEzMzQw | 2,197 | fix missing indices_files in load_form_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-04-09T09:37:57Z" | "2021-04-09T09:54:40Z" | "2021-04-09T09:54:39Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2197.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2197",
"merged_at": "2021-04-09T09:54:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2197.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2197"
} | This should fix #2195
`load_from_disk` was failing if there was no "_indices_files" field in state.json. This can happen if the dataset has no indices mapping. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2197/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2197/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2196/comments | https://api.github.com/repos/huggingface/datasets/issues/2196/events | https://github.com/huggingface/datasets/issues/2196 | 854,126,114 | MDU6SXNzdWU4NTQxMjYxMTQ= | 2,196 | `load_dataset` caches two arrow files? | {
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hwijeen",
"id": 29157715,
"login": "hwijeen",
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hwijeen"
} | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | [] | null | 3 | "2021-04-09T03:49:19Z" | "2021-04-12T05:25:29Z" | "2021-04-12T05:25:29Z" | NONE | null | null | null | Hi,
I am using datasets to load a large JSON file of 587 GB.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
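My guess (unverified) is that `json-train.arrow` is the loaded split itself, while the `cache-*.arrow` file was produced by a transform, along these lines:
```python
from datasets import load_dataset

# illustration only: the file name below is a placeholder
ds = load_dataset("json", data_files="data.json", split="train")  # writes json-train.arrow
ds2 = ds.map(lambda x: x)  # writes a cache-<fingerprint>.arrow next to it
```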
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2196/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2196/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2195/comments | https://api.github.com/repos/huggingface/datasets/issues/2195/events | https://github.com/huggingface/datasets/issues/2195 | 854,070,194 | MDU6SXNzdWU4NTQwNzAxOTQ= | 2,195 | KeyError: '_indices_files' in `arrow_dataset.py` | {
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"events_url": "https://api.github.com/users/samsontmr/events{/privacy}",
"followers_url": "https://api.github.com/users/samsontmr/followers",
"following_url": "https://api.github.com/users/samsontmr/following{/other_user}",
"gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/samsontmr",
"id": 15007950,
"login": "samsontmr",
"node_id": "MDQ6VXNlcjE1MDA3OTUw",
"organizations_url": "https://api.github.com/users/samsontmr/orgs",
"received_events_url": "https://api.github.com/users/samsontmr/received_events",
"repos_url": "https://api.github.com/users/samsontmr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/samsontmr"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | 2 | "2021-04-09T01:37:12Z" | "2021-04-09T09:55:09Z" | "2021-04-09T09:54:39Z" | NONE | null | null | null | After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.
Trace:
```
Traceback (most recent call last):
File "load_data.py", line 11, in <module>
dataset = load_from_disk(SRC)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line 784, in load_from_disk
return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 692, in load_from_disk
dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 634, in load_from_disk
if state["_indices_files"]:
KeyError: '_indices_files'
```
I believe this is the line causing the error since there may not be a `_indices_files` key in the older versions:
https://github.com/huggingface/datasets/blob/b70141e3c5149430951773aaa0155555c5fb3e76/src/datasets/arrow_dataset.py#L634
May I suggest using `state.get()` instead of directly indexing the dictionary?
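Concretely, something like this (a sketch of the suggested guard):
```python
# old state.json files may not contain the "_indices_files" key at all
state = {"_data_files": [{"filename": "dataset.arrow"}]}  # example of an old-style state
indices_files = state.get("_indices_files")  # returns None instead of raising KeyError
if indices_files:
    print("loading indices from", indices_files)
```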
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2195/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2195/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2194/comments | https://api.github.com/repos/huggingface/datasets/issues/2194/events | https://github.com/huggingface/datasets/issues/2194 | 853,909,452 | MDU6SXNzdWU4NTM5MDk0NTI= | 2,194 | py3.7: TypeError: can't pickle _LazyModule objects | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | 1 | "2021-04-08T21:02:48Z" | "2021-04-09T16:56:50Z" | "2021-04-09T01:52:57Z" | CONTRIBUTOR | null | null | null | While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .[testing]
export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \
examples/language-modeling/run_clm.py --model_name_or_path distilgpt2 --dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 --do_train --max_train_samples 1 \
--per_device_train_batch_size $BS --output_dir /tmp/test-clm --block_size 128 --logging_steps 1 \
--fp16
```
```
Traceback (most recent call last):
File "examples/language-modeling/run_clm.py", line 453, in <module>
main()
File "examples/language-modeling/run_clm.py", line 336, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1259, in map
update_data=update_data,
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 157, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 158, in wrapper
self._fingerprint, transform, kwargs_for_fingerprint
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 389, in dumps
dump(obj, file)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 361, in dump
Pickler(file, recurse=True).dump(obj)
File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump
StockPickler.dump(self, obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 437, in dump
self.save(obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 556, in save_function
obj=obj,
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 524, in save
rv = reduce(self.proto)
TypeError: can't pickle _LazyModule objects
```
```
$ python --version
Python 3.7.4
$ python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.8.0.dev20210110+cu110
Is debug build: False
CUDA used to build PyTorch: 11.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
```
Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2194/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2194/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2193/comments | https://api.github.com/repos/huggingface/datasets/issues/2193/events | https://github.com/huggingface/datasets/issues/2193 | 853,725,707 | MDU6SXNzdWU4NTM3MjU3MDc= | 2,193 | Filtering/mapping on one column is very slow | {
"avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4",
"events_url": "https://api.github.com/users/norabelrose/events{/privacy}",
"followers_url": "https://api.github.com/users/norabelrose/followers",
"following_url": "https://api.github.com/users/norabelrose/following{/other_user}",
"gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/norabelrose",
"id": 39116809,
"login": "norabelrose",
"node_id": "MDQ6VXNlcjM5MTE2ODA5",
"organizations_url": "https://api.github.com/users/norabelrose/orgs",
"received_events_url": "https://api.github.com/users/norabelrose/received_events",
"repos_url": "https://api.github.com/users/norabelrose/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions",
"type": "User",
"url": "https://api.github.com/users/norabelrose"
} | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | [] | null | 12 | "2021-04-08T18:16:14Z" | "2021-04-26T16:13:59Z" | "2021-04-26T16:13:59Z" | CONTRIBUTOR | null | null | null | I'm currently using the `wikipedia` dataset; I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_columns=['num_tokens']`, it seems that the entirety of each row is loaded into memory, which makes the operation take much longer than it should. Indeed, `filter` currently just calls `map`, and I found that in `_map_single` on lines 1690-1704 of `arrow_dataset.py`, the method is just grabbing slices of _all the rows_ of the dataset and then passing only the specified columns to the map function. It seems that, when the user passes a value for `input_columns`, the `map` function should create a temporary pyarrow table by selecting just those columns, and then get slices from that table. Or something like that; I'm not very familiar with the pyarrow API.
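For instance, something along these lines (just a sketch; I haven't checked how it would fit into `_map_single`):
```python
import pyarrow as pa

table = pa.table({"num_tokens": [128, 4096, 512], "text": ["a", "b", "c"]})
projected = table.select(["num_tokens"])   # cheap column projection
batch = projected.slice(0, 2).to_pydict()  # only the selected column is materialized
print(batch)  # {'num_tokens': [128, 4096]}
```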
I know that in the meantime I can sort of get around this by simply only returning the rows that match my filter criterion from the tokenizing function I pass to `map()`, but I actually _also_ want to map on just the `num_tokens` column in order to compute batches with a roughly uniform number of tokens per batch. I would also ideally like to be able to change my minimum and maximum article lengths without having to re-tokenize the entire dataset.
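For concreteness, the kind of token-budgeted batching I have in mind (a self-contained sketch; the budget value is an arbitrary example):
```python
def batches_by_token_budget(num_tokens, budget=4096):
    """Greedily group example indices so each batch stays under a token budget."""
    batch, total = [], 0
    for idx, n in enumerate(num_tokens):
        if batch and total + n > budget:
            yield batch
            batch, total = [], 0
        batch.append(idx)
        total += n
    if batch:
        yield batch

print(list(batches_by_token_budget([1000, 3000, 2000, 500], budget=4096)))
# [[0, 1], [2, 3]]
```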
PS: This is definitely not a "dataset request." I'm realizing that I don't actually know how to remove labels from my own issues on other people's repos, if that is even possible. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2193/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2193/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2192/comments | https://api.github.com/repos/huggingface/datasets/issues/2192/events | https://github.com/huggingface/datasets/pull/2192 | 853,547,910 | MDExOlB1bGxSZXF1ZXN0NjExNjE5NTY0 | 2,192 | Fix typo in huggingface hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LysandreJik",
"id": 30755778,
"login": "LysandreJik",
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LysandreJik"
} | [] | closed | false | null | [] | null | 0 | "2021-04-08T14:42:24Z" | "2021-04-08T15:47:41Z" | "2021-04-08T15:47:40Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2192.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2192",
"merged_at": "2021-04-08T15:47:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2192.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2192"
} | pip knows how to resolve to `huggingface_hub`, but conda doesn't!
The `packaging` dependency is also required for the build to complete. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2192/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2192/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2191/comments | https://api.github.com/repos/huggingface/datasets/issues/2191/events | https://github.com/huggingface/datasets/pull/2191 | 853,364,204 | MDExOlB1bGxSZXF1ZXN0NjExNDY1Nzc0 | 2,191 | Refactorize tests to use Dataset as context manager | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior",
"id": 2851292821,
"name": "refactoring",
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring"
}
] | closed | false | null | [] | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-04-16T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"id": 6644198,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"open_issues": 0,
"state": "closed",
"title": "1.6",
"updated_at": "2021-04-20T16:50:46Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1"
} | 4 | "2021-04-08T11:21:04Z" | "2021-04-19T07:53:11Z" | "2021-04-19T07:53:10Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2191.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2191",
"merged_at": "2021-04-19T07:53:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2191.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2191"
} | Refactorize Dataset tests to use Dataset as context manager. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2191/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2191/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2190/comments | https://api.github.com/repos/huggingface/datasets/issues/2190/events | https://github.com/huggingface/datasets/issues/2190 | 853,181,564 | MDU6SXNzdWU4NTMxODE1NjQ= | 2,190 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs | {
"avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4",
"events_url": "https://api.github.com/users/anassalamah/events{/privacy}",
"followers_url": "https://api.github.com/users/anassalamah/followers",
"following_url": "https://api.github.com/users/anassalamah/following{/other_user}",
"gists_url": "https://api.github.com/users/anassalamah/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anassalamah",
"id": 8571003,
"login": "anassalamah",
"node_id": "MDQ6VXNlcjg1NzEwMDM=",
"organizations_url": "https://api.github.com/users/anassalamah/orgs",
"received_events_url": "https://api.github.com/users/anassalamah/received_events",
"repos_url": "https://api.github.com/users/anassalamah/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anassalamah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anassalamah/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anassalamah"
} | [] | closed | false | null | [] | null | 2 | "2021-04-08T07:53:43Z" | "2021-05-24T10:03:55Z" | "2021-05-24T10:03:55Z" | NONE | null | null | null | I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
from itertools import chain
from datasets import load_dataset

train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')

# filtering out examples that are not ar-en translations but ar-hi
val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312, 1327), range(1384, 1399), range(1030, 1042)), with_indices=True)
```
* I'm fairly new to using datasets so I might be doing something wrong | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2190/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2190/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2189/comments | https://api.github.com/repos/huggingface/datasets/issues/2189/events | https://github.com/huggingface/datasets/issues/2189 | 853,052,891 | MDU6SXNzdWU4NTMwNTI4OTE= | 2,189 | save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object. | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
} | [] | closed | false | null | [] | null | 1 | "2021-04-08T04:42:53Z" | "2022-06-01T16:32:15Z" | "2022-06-01T16:32:15Z" | NONE | null | null | null | As you can see, it saves the entire dataset.
@lhoestq
You can check by going through the following example,
```
from datasets import load_from_disk, concatenate_datasets

loaded_data = load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset')
n = 20
kb_list = [loaded_data.shard(n, i, contiguous=True) for i in range(n)]
final_dataset = concatenate_datasets([kb_list[1], kb_list[2]])
final_dataset.save_to_disk('/home/gsir059/haha/k.arrow')
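
# A guess on my side (not verified): the shards may still carry an indices
# mapping into the full original table, so save_to_disk writes everything.
# Materializing the selection first might avoid that:
# final_dataset = final_dataset.flatten_indices()
# final_dataset.save_to_disk('/home/gsir059/haha/k.arrow')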
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2189/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2189/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2188/comments | https://api.github.com/repos/huggingface/datasets/issues/2188/events | https://github.com/huggingface/datasets/issues/2188 | 853,044,166 | MDU6SXNzdWU4NTMwNDQxNjY= | 2,188 | Duplicate data in Timit dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/78190188?v=4",
"events_url": "https://api.github.com/users/thanh-p/events{/privacy}",
"followers_url": "https://api.github.com/users/thanh-p/followers",
"following_url": "https://api.github.com/users/thanh-p/following{/other_user}",
"gists_url": "https://api.github.com/users/thanh-p/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thanh-p",
"id": 78190188,
"login": "thanh-p",
"node_id": "MDQ6VXNlcjc4MTkwMTg4",
"organizations_url": "https://api.github.com/users/thanh-p/orgs",
"received_events_url": "https://api.github.com/users/thanh-p/received_events",
"repos_url": "https://api.github.com/users/thanh-p/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thanh-p/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thanh-p/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thanh-p"
} | [] | closed | false | null | [] | null | 2 | "2021-04-08T04:21:54Z" | "2021-04-08T12:13:19Z" | "2021-04-08T12:13:19Z" | NONE | null | null | null | I ran a simple script to list all the texts in the Timit dataset, and the texts were all the same.
Is this dataset corrupted?
**Code:**
```
from datasets import load_dataset

timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
```
**Result:**
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
...
...
Would such an act of refusal be useful? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2188/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2188/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2187/comments | https://api.github.com/repos/huggingface/datasets/issues/2187/events | https://github.com/huggingface/datasets/issues/2187 | 852,939,736 | MDU6SXNzdWU4NTI5Mzk3MzY= | 2,187 | Question (potential issue?) related to datasets caching | {
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ioana-blue",
"id": 17202292,
"login": "ioana-blue",
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ioana-blue"
} | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | open | false | null | [] | null | 15 | "2021-04-08T00:16:28Z" | "2023-01-03T18:30:38Z" | null | NONE | null | null | null | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
    # disable caching in datasets
    set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.builder - Using custom data configuration default-888a87931cbc5877
04/07/2021 18:34:42 - WARNING - datasets.builder - Reusing dataset csv (xxxx/cache-transformers/datasets/csv/default-888a87931cbc5877/0.0.0/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93
```
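To make the question concrete: my current understanding (possibly wrong) is that `set_caching_enabled(False)` only affects transforms such as `map`, while `load_dataset` keeps its own on-disk arrow cache unless it is forced to regenerate, e.g.:
```python
from datasets import load_dataset

# "my_data.csv" is a placeholder; the "force_redownload" string is my assumption from the docs
dataset = load_dataset("csv", data_files="my_data.csv", download_mode="force_redownload")
```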
Can you please let me know what this "Reusing dataset csv" message means? I wouldn't expect any reuse with datasets caching disabled. Thank you! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2187/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2187/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2186/comments | https://api.github.com/repos/huggingface/datasets/issues/2186/events | https://github.com/huggingface/datasets/pull/2186 | 852,840,819 | MDExOlB1bGxSZXF1ZXN0NjExMDMxNzE0 | 2,186 | GEM: new challenge sets | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | 1 | "2021-04-07T21:39:07Z" | "2021-04-07T21:56:35Z" | "2021-04-07T21:56:35Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2186.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2186",
"merged_at": "2021-04-07T21:56:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2186.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2186"
} | This PR updates the GEM dataset to:
- remove extraneous fields in WikiAuto after https://github.com/huggingface/datasets/pull/2171 fixed the source
- add context and services to Schema Guided Dialog
- add new or update challenge sets for MLSUM ES and DE, XSUM, and SGD | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2186/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2186/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2185/comments | https://api.github.com/repos/huggingface/datasets/issues/2185/events | https://github.com/huggingface/datasets/issues/2185 | 852,684,395 | MDU6SXNzdWU4NTI2ODQzOTU= | 2,185 | .map() and distributed training | {
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VictorSanh",
"id": 16107619,
"login": "VictorSanh",
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VictorSanh"
} | [] | closed | false | null | [] | null | 8 | "2021-04-07T18:22:14Z" | "2021-10-23T07:11:15Z" | "2021-04-09T15:38:31Z" | MEMBER | null | null | null | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokenize_function(examples):
    return tokenizer(examples[text_column_name])

logger.info("Mapping dataset to tokenized dataset.")
tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=preprocessing_num_workers,
    remove_columns=column_names,
    load_from_cache_file=True,
)
```
I am using 31 workers (`preprocessing_num_workers=31`) and thus it creates 31 `cache*.arrow` files in `my_path/train` (there is only a train split).
When I relaunch the script, the map tokenization is skipped in favor of loading the 31 previously cached files, and that's perfect.
Everything so far was done by launching a **single process script**.
I now launch the same training script in **distributed mode** (`python -m torch.distributed.launch --nproc_per_node 2`). However, once it reaches the map call, it re-does the tokenization... instead of loading the 31 cached files.
I tried adding the `cache_file_name` argument: `cache_file_name={"train": my_path/one_of_the_arrow_file}`, but I can't pass all 31 cached files, so it probably isn't the right way to do it.
**My question: what is the best way to load cached files if they were pre-processed and dumped in multiple arrow files?** It seems automatically handled for single processes but fails on distributed training.
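For reference, the workaround I'm considering (a sketch reusing the variables from the snippet above; I haven't confirmed it's the intended pattern):
```python
import torch.distributed as dist

# let rank 0 build the tokenization cache while the other ranks wait
if dist.is_initialized() and dist.get_rank() > 0:
    dist.barrier()

tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=preprocessing_num_workers,
    remove_columns=column_names,
    load_from_cache_file=True,
)

if dist.is_initialized() and dist.get_rank() == 0:
    dist.barrier()  # release the waiting ranks now that the cache exists
```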
- I am following the same structure as the examples of transformers (more specifically [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) in my case)
- I am using 1.5.0 version of datasets if that matters. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2185/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2185/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2184/comments | https://api.github.com/repos/huggingface/datasets/issues/2184/events | https://github.com/huggingface/datasets/pull/2184 | 852,597,258 | MDExOlB1bGxSZXF1ZXN0NjEwODIxMTc0 | 2,184 | Implementation of class_encode_column | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
} | [] | closed | false | null | [] | null | 1 | "2021-04-07T16:47:43Z" | "2021-04-16T11:44:37Z" | "2021-04-16T11:26:59Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2184.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2184",
"merged_at": "2021-04-16T11:26:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2184.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2184"
} | Addresses #2176
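A quick usage sketch of the new method (the printed outputs are illustrative, not verified):
```python
from datasets import Dataset

ds = Dataset.from_dict({"label": ["pos", "neg", "pos"]})
ds = ds.class_encode_column("label")   # string column -> ClassLabel feature
print(ds.features["label"])            # e.g. ClassLabel(names=['neg', 'pos'])
print(ds["label"])                     # integer class ids, e.g. [1, 0, 1]
```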
I'm happy to discuss the API and internals! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2184/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2184/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2183/comments | https://api.github.com/repos/huggingface/datasets/issues/2183/events | https://github.com/huggingface/datasets/pull/2183 | 852,518,411 | MDExOlB1bGxSZXF1ZXN0NjEwNzU3MjUz | 2,183 | Fix s3fs tests for py36 and py37+ | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-04-07T15:17:11Z" | "2021-04-08T08:54:45Z" | "2021-04-08T08:54:44Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2183.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2183",
"merged_at": "2021-04-08T08:54:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2183.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2183"
} | Recently several changes happened:
1. latest versions of `fsspec` require python>3.7 for async features
2. `s3fs` added a dependency on `aiobotocore`, which is not compatible with the `moto` s3 mock context manager
This PR fixes both issues, by pinning `fsspec` and `s3fs` for python 3.6, and by using `moto` in server mode to support running the tests on python>=3.7 with the latest version of `fsspec` and `s3fs`.
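The per-version pinning can be expressed with environment markers; a sketch (the version bounds here are placeholders, not the exact pins):
```python
# setup.py test dependencies, sketched with PEP 508 environment markers
tests_require = [
    "fsspec[s3]<0.9.0; python_version < '3.7'",  # placeholder upper bound
    "fsspec[s3]; python_version >= '3.7'",
    "moto[server]",  # server mode works around the aiobotocore incompatibility
]
```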
cc @philschmid | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2183/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2183/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2182/comments | https://api.github.com/repos/huggingface/datasets/issues/2182/events | https://github.com/huggingface/datasets/pull/2182 | 852,384,872 | MDExOlB1bGxSZXF1ZXN0NjEwNjQ2MDIy | 2,182 | Set default in-memory value depending on the dataset size | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-04-16T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"id": 6644198,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"open_issues": 0,
"state": "closed",
"title": "1.6",
"updated_at": "2021-04-20T16:50:46Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1"
} | 4 | "2021-04-07T13:00:18Z" | "2021-04-20T14:20:12Z" | "2021-04-20T10:04:04Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2182.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2182",
"merged_at": "2021-04-20T10:04:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2182.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2182"
} | Set a default value for `in_memory` depending on the size of the dataset to be loaded.
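Roughly, the resolution logic looks like this — a sketch where the size-threshold config knob (`IN_MEMORY_MAX_SIZE`) is an assumed name for illustration:
```python
from datasets import config

def resolve_in_memory(in_memory, dataset_size_in_bytes):
    """Sketch: keep the user's explicit choice, otherwise decide from the size."""
    if in_memory is not None:
        return in_memory  # explicit user choice always wins
    # small datasets default to in-memory, big ones to memory mapping
    return dataset_size_in_bytes < config.IN_MEMORY_MAX_SIZE  # assumed knob

# e.g. resolve_in_memory(None, 10 * 2**20) -> True for a 10 MiB dataset
# whenever the threshold is larger than 10 MiB
```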
Close #2179.
TODO:
- [x] Add a section in the docs about this.
- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2182/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2182/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2181/comments | https://api.github.com/repos/huggingface/datasets/issues/2181/events | https://github.com/huggingface/datasets/issues/2181 | 852,261,607 | MDU6SXNzdWU4NTIyNjE2MDc= | 2,181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | {
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hwijeen",
"id": 29157715,
"login": "hwijeen",
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hwijeen"
} | [] | closed | false | null | [] | null | 9 | "2021-04-07T10:26:46Z" | "2021-04-12T07:15:55Z" | "2021-04-12T07:15:55Z" | NONE | null | null | null | Hi, thanks for the great library! I've used it for a couple of small projects and am now using it for a fairly big one.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 531, in incomplete_dir
yield tmp_dir
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 650, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 1027, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__
for obj in iterable:
File "/app/.cache/huggingface/modules/datasets_modules/datasets/json/9498524fd296a6cca99c66d6c5be507d1c0991f5a814e535b507f4a66096a641/json.py", line 83, in _generate_tables
parse_options=self.config.pa_parse_options,
File "pyarrow/_json.pyx", line 247, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
When using only a small portion of the sample file, say the first 100 lines, it works perfectly well.
I see that the error comes from pyarrow, but could you give me a hint or possible solutions?
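For what it's worth, the error message's suggestion can be tried directly with pyarrow — the block size value here is just a guess:
```python
import pyarrow.json as paj

# raise the parser's block size so a single JSON object can't straddle two blocks
read_options = paj.ReadOptions(block_size=1 << 24)  # 16 MiB; the default is much smaller
table = paj.read_json("my_file.json", read_options=read_options)
```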
#369 describes the same error and #372 claims to have fixed the issue, but I have no clue why I am still getting this one. Thanks in advance! | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2181/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2181/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2180/comments | https://api.github.com/repos/huggingface/datasets/issues/2180/events | https://github.com/huggingface/datasets/pull/2180 | 852,258,635 | MDExOlB1bGxSZXF1ZXN0NjEwNTQxOTA2 | 2,180 | Add tel to xtreme tatoeba | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-04-07T10:23:15Z" | "2021-04-07T15:50:35Z" | "2021-04-07T15:50:34Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2180.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2180",
"merged_at": "2021-04-07T15:50:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2180.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2180"
} | This should fix issue #2149 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2180/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2180/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2179/comments | https://api.github.com/repos/huggingface/datasets/issues/2179/events | https://github.com/huggingface/datasets/issues/2179 | 852,237,957 | MDU6SXNzdWU4NTIyMzc5NTc= | 2,179 | Load small datasets in-memory instead of using memory map | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | 0 | "2021-04-07T09:58:16Z" | "2021-04-20T10:04:04Z" | "2021-04-20T10:04:03Z" | MEMBER | null | null | null | Currently all datasets are loaded using memory mapping by default in `load_dataset`.
However, this might not be necessary for small datasets. If a dataset is small enough, it can be loaded in-memory, and:
- its memory footprint would be small, so that's fine
- in-memory computations/queries would be faster
- the caching on-disk would be disabled, making computations even faster (no I/O bottleneck from the disk)
- but running the same computation a second time would recompute everything, since there would be no cached results on-disk. This is probably fine since computations would be fast anyway, and users should be able to provide a cache filename if needed.
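For illustration, explicit opt-in or opt-out could stay available through the existing `keep_in_memory` flag (assuming it keeps working as the override):
```python
from datasets import load_dataset

ds_small = load_dataset("sst", split="train", keep_in_memory=True)   # no memory map
ds_big = load_dataset("sst", split="train", keep_in_memory=False)    # force memory map
```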
Therefore, maybe the default behavior of `load_dataset` should be to load small datasets in-memory and big datasets using memory mapping. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2179/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2179/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2178/comments | https://api.github.com/repos/huggingface/datasets/issues/2178/events | https://github.com/huggingface/datasets/pull/2178 | 852,215,058 | MDExOlB1bGxSZXF1ZXN0NjEwNTA1Mjg1 | 2,178 | Fix cast memory usage by using map on subtables | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-04-16T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"id": 6644198,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"open_issues": 0,
"state": "closed",
"title": "1.6",
"updated_at": "2021-04-20T16:50:46Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1"
} | 3 | "2021-04-07T09:30:50Z" | "2021-04-20T14:20:44Z" | "2021-04-13T09:28:16Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2178.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2178",
"merged_at": "2021-04-13T09:28:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2178.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2178"
} | The `cast` operation on a pyarrow Table may create new arrays in memory.
This is an issue since users expect memory mapped datasets to not fill up the RAM.
To fix that, I used `map` to write a new arrow file on disk when `cast` is used.
To make things more convenient, I introduced the `arrow` formatting of a dataset, which makes it return pyarrow tables instead of python dicts. This way one can use pyarrow transforms directly when using `map`.
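A sketch of what this enables — the `with_format("arrow")` call and the schema are assumptions for illustration:
```python
import pyarrow as pa

dset = dset.with_format("arrow")  # batched map now receives pyarrow Tables

def cast_table(table: pa.Table) -> pa.Table:
    # assumes the table has exactly these columns, in this order
    new_schema = pa.schema([("text", pa.large_string()), ("label", pa.int64())])
    return table.cast(new_schema)

dset = dset.map(cast_table, batched=True)  # each casted batch is written back to disk
```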
edit: we'll use the same mechanism for `filter` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2178/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2178/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2177/comments | https://api.github.com/repos/huggingface/datasets/issues/2177/events | https://github.com/huggingface/datasets/pull/2177 | 852,065,307 | MDExOlB1bGxSZXF1ZXN0NjEwMzc5MDYx | 2,177 | add social thumbnail | {
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/philschmid",
"id": 32632186,
"login": "philschmid",
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"repos_url": "https://api.github.com/users/philschmid/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"type": "User",
"url": "https://api.github.com/users/philschmid"
} | [] | closed | false | null | [] | null | 0 | "2021-04-07T06:40:06Z" | "2021-04-07T08:16:01Z" | "2021-04-07T08:16:01Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2177.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2177",
"merged_at": "2021-04-07T08:16:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2177.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2177"
} | # What does this PR do?
I added OpenGraph/Twitter Card support to the docs to create nice social thumbnails.

To be able to add these, I needed to install `sphinxext-opengraph`. I came across this [issue](https://github.com/readthedocs/readthedocs.org/issues/1758) on the readthedocs repo saying that since someone has already built this plugin, they are not going to integrate it themselves or provide documentation for it. That's why I added it for building the documentation. The repository can be found [here](https://github.com/wpilibsuite/sphinxext-opengraph/tree/main).
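The wiring in `conf.py` is small — roughly as follows, with the option values here being placeholders rather than the exact ones in this PR:
```python
# conf.py sketch
extensions = [
    # ... existing extensions ...
    "sphinxext.opengraph",
]

ogp_site_url = "https://huggingface.co/docs/datasets"  # placeholder URL
ogp_image = "https://huggingface.co/front/thumbnails/datasets.png"  # placeholder image
```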
P.S. It seems that `make style` never ran for `docs/`; I hope the changes are okay, otherwise I'll revert them. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2177/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2177/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2176/comments | https://api.github.com/repos/huggingface/datasets/issues/2176/events | https://github.com/huggingface/datasets/issues/2176 | 851,865,795 | MDU6SXNzdWU4NTE4NjU3OTU= | 2,176 | Converting a Value to a ClassLabel | {
"avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4",
"events_url": "https://api.github.com/users/nelson-liu/events{/privacy}",
"followers_url": "https://api.github.com/users/nelson-liu/followers",
"following_url": "https://api.github.com/users/nelson-liu/following{/other_user}",
"gists_url": "https://api.github.com/users/nelson-liu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nelson-liu",
"id": 7272031,
"login": "nelson-liu",
"node_id": "MDQ6VXNlcjcyNzIwMzE=",
"organizations_url": "https://api.github.com/users/nelson-liu/orgs",
"received_events_url": "https://api.github.com/users/nelson-liu/received_events",
"repos_url": "https://api.github.com/users/nelson-liu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nelson-liu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nelson-liu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nelson-liu"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 2 | "2021-04-06T22:54:16Z" | "2022-06-01T16:31:49Z" | "2022-06-01T16:31:49Z" | NONE | null | null | null | Hi!
In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.`
Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`?
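Something along these lines is what I have in mind — a sketch that assumes a dataset `dset` with a string `label` column:
```python
from datasets import ClassLabel

names = sorted(set(dset["label"]))        # collect the distinct string labels
label_feature = ClassLabel(names=names)

# string -> int with map(), then update the schema to match
dset = dset.map(lambda ex: {"label": label_feature.str2int(ex["label"])})
new_features = dset.features.copy()
new_features["label"] = label_feature
dset = dset.cast(new_features)
```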
Thanks! | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2176/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2176/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2175/comments | https://api.github.com/repos/huggingface/datasets/issues/2175/events | https://github.com/huggingface/datasets/issues/2175 | 851,836,096 | MDU6SXNzdWU4NTE4MzYwOTY= | 2,175 | dataset.search_batch() function outputs all -1 indices sometime. | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
} | [] | closed | false | null | [] | null | 6 | "2021-04-06T21:50:49Z" | "2021-04-16T12:21:16Z" | "2021-04-16T12:21:15Z" | NONE | null | null | null | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase, exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L231), an error occurs when all retrieved indices are -1. Please refer to the screenshot of a PID worker.

Here, my retrieval batch size is 2 and n_docs is 5. I can solve this by working around np.stack, but I want to ask why we get an output index of -1. Do you have any idea :) ?
Is this a problem with the index, where faiss can't find any similar vectors?
Is there documentation on the output index being -1?
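For anyone hitting this, the small guard I'm using as a workaround — purely a sketch, with `question_embeddings` and `n_docs` assumed from my setup:
```python
import numpy as np

scores, indices = dataset.search_batch("embeddings", question_embeddings, k=n_docs)
indices = np.asarray(indices)
# faiss uses -1 as a sentinel when it can't return k neighbors
# (e.g. too few probed IVF lists), so mask those rows out before np.stack
valid_rows = (indices != -1).all(axis=1)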
@lhoestq
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2175/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2175/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2174 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2174/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2174/comments | https://api.github.com/repos/huggingface/datasets/issues/2174/events | https://github.com/huggingface/datasets/pull/2174 | 851,383,675 | MDExOlB1bGxSZXF1ZXN0NjA5ODE2OTQ2 | 2,174 | Pin docutils for better doc | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
} | [] | closed | false | null | [] | null | 0 | "2021-04-06T12:40:20Z" | "2021-04-06T12:55:53Z" | "2021-04-06T12:55:53Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2174.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2174",
"merged_at": "2021-04-06T12:55:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2174.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2174"
} | The latest release of docutils makes the navbar in the documentation look weird and causes the Markdown to be wrongly interpreted:

We had the same problem in Transformers and solved it by pinning docutils (a dep of sphinx).
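The pin itself is a one-liner in the docs requirements — the exact version here is the one Transformers used, which I'm assuming works here too:
```python
# setup.py docs extras (sketch)
extras["docs"] = [
    # ...
    "docutils==0.16.0",  # newer releases break the navbar and Markdown rendering
]
```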
You can see the version after the change [here](https://32769-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2174/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2174/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2173/comments | https://api.github.com/repos/huggingface/datasets/issues/2173/events | https://github.com/huggingface/datasets/pull/2173 | 851,359,284 | MDExOlB1bGxSZXF1ZXN0NjA5Nzk2NzI2 | 2,173 | Add OpenSLR dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cahya-wirawan",
"id": 7669893,
"login": "cahya-wirawan",
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cahya-wirawan"
} | [] | closed | false | null | [] | null | 0 | "2021-04-06T12:08:34Z" | "2021-04-12T16:54:46Z" | "2021-04-12T16:54:46Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2173.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2173",
"merged_at": "2021-04-12T16:54:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2173.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2173"
} | OpenSLR (https://openslr.org/) is a site devoted to hosting speech and language resources, such as training corpora for speech recognition and software related to speech recognition. There are around 80 speech datasets listed on OpenSLR; currently this PR includes only 9 of them: SLR41, SLR42, SLR43, SLR44, SLR63, SLR64, SLR65, SLR66 and SLR69 (Javanese, Khmer, Nepali, Sundanese, Malayalam, Marathi, Tamil, Telugu and Catalan). I can add the other speech datasets gradually later. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2173/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2173/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2172/comments | https://api.github.com/repos/huggingface/datasets/issues/2172/events | https://github.com/huggingface/datasets/pull/2172 | 851,229,399 | MDExOlB1bGxSZXF1ZXN0NjA5Njg4ODgx | 2,172 | Pin fsspec lower than 0.9.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-04-06T09:19:09Z" | "2021-04-06T09:49:27Z" | "2021-04-06T09:49:26Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2172.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2172",
"merged_at": "2021-04-06T09:49:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2172.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2172"
} | Today's release of `fsspec` 0.9.0 came with a new release of `s3fs` (0.6.0), but this version breaks the CI (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/5312/workflows/490f3240-cd1c-4dd1-bb60-b416771c5584/jobs/32734) for example).
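Concretely, the pin is a one-liner in the test requirements — the exact specifier is assumed:
```python
# setup.py (sketch)
"fsspec<0.9.0",  # 0.9.0 pulls in s3fs 0.6.0, which breaks the CI
```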
I'm pinning `fsspec` until this has been resolved. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2172/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2172/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2171/comments | https://api.github.com/repos/huggingface/datasets/issues/2171/events | https://github.com/huggingface/datasets/pull/2171 | 851,090,662 | MDExOlB1bGxSZXF1ZXN0NjA5NTY4MDcw | 2,171 | Fixed the link to wikiauto training data. | {
"avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4",
"events_url": "https://api.github.com/users/mounicam/events{/privacy}",
"followers_url": "https://api.github.com/users/mounicam/followers",
"following_url": "https://api.github.com/users/mounicam/following{/other_user}",
"gists_url": "https://api.github.com/users/mounicam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mounicam",
"id": 11708999,
"login": "mounicam",
"node_id": "MDQ6VXNlcjExNzA4OTk5",
"organizations_url": "https://api.github.com/users/mounicam/orgs",
"received_events_url": "https://api.github.com/users/mounicam/received_events",
"repos_url": "https://api.github.com/users/mounicam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mounicam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mounicam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mounicam"
} | [] | closed | false | null | [] | null | 3 | "2021-04-06T07:13:11Z" | "2021-04-06T16:05:42Z" | "2021-04-06T16:05:09Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2171.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2171",
"merged_at": "2021-04-06T16:05:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2171.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2171"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2171/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2171/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2170/comments | https://api.github.com/repos/huggingface/datasets/issues/2170/events | https://github.com/huggingface/datasets/issues/2170 | 850,913,228 | MDU6SXNzdWU4NTA5MTMyMjg= | 2,170 | Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date | {
"avatar_url": "https://avatars.githubusercontent.com/u/946903?v=4",
"events_url": "https://api.github.com/users/leezu/events{/privacy}",
"followers_url": "https://api.github.com/users/leezu/followers",
"following_url": "https://api.github.com/users/leezu/following{/other_user}",
"gists_url": "https://api.github.com/users/leezu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leezu",
"id": 946903,
"login": "leezu",
"node_id": "MDQ6VXNlcjk0NjkwMw==",
"organizations_url": "https://api.github.com/users/leezu/orgs",
"received_events_url": "https://api.github.com/users/leezu/received_events",
"repos_url": "https://api.github.com/users/leezu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leezu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leezu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leezu"
} | [] | open | false | null | [] | null | 1 | "2021-04-06T03:13:18Z" | "2021-06-16T01:10:50Z" | null | NONE | null | null | null | Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides
```
20201220/ 02-Feb-2021 01:36 -
20210101/ 21-Feb-2021 01:26 -
20210120/ 02-Mar-2021 01:25 -
20210201/ 21-Mar-2021 01:26 -
20210220/ 02-Apr-2021 01:26 -
20210301/ 03-Mar-2021 08:10 -
20210320/ 21-Mar-2021 18:13 -
20210401/ 03-Apr-2021 10:08 -
latest/ 03-Apr-2021 10:08 -
```
However, the wikipedia dataset provided in the library only supports the following configs, none of which are applicable anymore when disregarding the cached datasets:
```
ValueError: BuilderConfig 20210401.ko not found. Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', 
'20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu']
```
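A workaround that may work until the configs are refreshed: the builder seems to accept custom `language`/`date` kwargs, with an Apache Beam runner needed to build the dump locally — kwarg names assumed:
```python
from datasets import load_dataset

ds = load_dataset(
    "wikipedia",
    language="ko",
    date="20210401",
    beam_runner="DirectRunner",  # required to process a raw dump locally
)
```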
The cached datasets:
```
% aws s3 --no-sign-request --endpoint-url https://storage.googleapis.com ls s3://huggingface-nlp/cache/datasets/wikipedia/
PRE 20200501.de/
PRE 20200501.en/
PRE 20200501.fr/
PRE 20200501.frr/
PRE 20200501.it/
PRE 20200501.simple/
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2170/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2170/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2169/comments | https://api.github.com/repos/huggingface/datasets/issues/2169/events | https://github.com/huggingface/datasets/pull/2169 | 850,456,180 | MDExOlB1bGxSZXF1ZXN0NjA5MDI2ODUz | 2,169 | Updated WER metric implementation to avoid memory issues | {
"avatar_url": "https://avatars.githubusercontent.com/u/5707233?v=4",
"events_url": "https://api.github.com/users/diego-fustes/events{/privacy}",
"followers_url": "https://api.github.com/users/diego-fustes/followers",
"following_url": "https://api.github.com/users/diego-fustes/following{/other_user}",
"gists_url": "https://api.github.com/users/diego-fustes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/diego-fustes",
"id": 5707233,
"login": "diego-fustes",
"node_id": "MDQ6VXNlcjU3MDcyMzM=",
"organizations_url": "https://api.github.com/users/diego-fustes/orgs",
"received_events_url": "https://api.github.com/users/diego-fustes/received_events",
"repos_url": "https://api.github.com/users/diego-fustes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/diego-fustes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diego-fustes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/diego-fustes"
} | [] | closed | false | null | [] | null | 1 | "2021-04-05T15:43:20Z" | "2021-04-06T15:02:58Z" | "2021-04-06T15:02:58Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2169.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2169",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2169.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2169"
} | This is in order to fix this issue:
https://github.com/huggingface/datasets/issues/2078
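Roughly, the idea is to score pair by pair instead of joining everything into two giant strings — a sketch assuming `jiwer.compute_measures` is available:
```python
import jiwer

incorrect = 0
total = 0
for reference, prediction in zip(references, predictions):
    # score one pair at a time to keep memory usage bounded
    measures = jiwer.compute_measures(reference, prediction)
    incorrect += measures["substitutions"] + measures["deletions"] + measures["insertions"]
    total += measures["substitutions"] + measures["deletions"] + measures["hits"]

wer = incorrect / total
```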
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2169/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2169/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2168/comments | https://api.github.com/repos/huggingface/datasets/issues/2168/events | https://github.com/huggingface/datasets/pull/2168 | 849,957,941 | MDExOlB1bGxSZXF1ZXN0NjA4NjA4Nzg5 | 2,168 | Preserve split type when realoding dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 5 | "2021-04-04T20:46:21Z" | "2021-04-19T10:57:05Z" | "2021-04-19T09:08:55Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2168.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2168",
"merged_at": "2021-04-19T09:08:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2168.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2168"
} | Fixes #2167
Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.
In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:
```python
from . import arrow_reader # gives us access to ReadInstruction and _RelativeInstruction
from . import splits # gives us access to NamedSplit
```
and then define the `eval` globals as follows:
```python
{**arrow_reader.__dict__, **splits.__dict__}
```
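If `eval` stays, restricting the globals is cheap insurance — a sketch where `split_repr` is an assumed name for the serialized split string:
```python
allowed_names = {**arrow_reader.__dict__, **splits.__dict__}
split = eval(split_repr, {"__builtins__": {}}, allowed_names)  # no builtins exposed
```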
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2168/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2168/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2167/comments | https://api.github.com/repos/huggingface/datasets/issues/2167/events | https://github.com/huggingface/datasets/issues/2167 | 849,944,891 | MDU6SXNzdWU4NDk5NDQ4OTE= | 2,167 | Split type not preserved when reloading the dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 0 | "2021-04-04T19:29:54Z" | "2021-04-19T09:08:55Z" | "2021-04-19T09:08:55Z" | CONTRIBUTOR | null | null | null | A minimal reproducible example:
```python
>>> from datasets import load_dataset, Dataset
>>> dset = load_dataset("sst", split="train")
>>> dset.save_to_disk("sst")
>>> type(dset.split)
<class 'datasets.splits.NamedSplit'>
>>> dset = Dataset.load_from_disk("sst")
>>> type(dset.split) # NamedSplit expected
<class 'str'>
```
It seems like this bug was introduced in #2025. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2167/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2167/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2166/comments | https://api.github.com/repos/huggingface/datasets/issues/2166/events | https://github.com/huggingface/datasets/issues/2166 | 849,778,545 | MDU6SXNzdWU4NDk3Nzg1NDU= | 2,166 | Regarding Test Sets for the GEM datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/17217068?v=4",
"events_url": "https://api.github.com/users/vyraun/events{/privacy}",
"followers_url": "https://api.github.com/users/vyraun/followers",
"following_url": "https://api.github.com/users/vyraun/following{/other_user}",
"gists_url": "https://api.github.com/users/vyraun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vyraun",
"id": 17217068,
"login": "vyraun",
"node_id": "MDQ6VXNlcjE3MjE3MDY4",
"organizations_url": "https://api.github.com/users/vyraun/orgs",
"received_events_url": "https://api.github.com/users/vyraun/received_events",
"repos_url": "https://api.github.com/users/vyraun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vyraun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vyraun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vyraun"
} | [
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] | closed | false | null | [] | null | 2 | "2021-04-04T02:02:45Z" | "2021-04-06T08:13:12Z" | "2021-04-06T08:13:12Z" | NONE | null | null | null | @yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)?
e.g.
```
from datasets import load_dataset
DATASET_NAME="common_gen"
data = load_dataset("gem", DATASET_NAME)
```
The test set doesn't have the target or references.
```
data['test'][0]
{'concept_set_id': 0, 'concepts': ['drill', 'field', 'run', 'team'], 'gem_id': 'common_gen-test-0', 'gem_parent_id': 'common_gen-test-0', 'references': [], 'target': ''}
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2166/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2166/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2165/comments | https://api.github.com/repos/huggingface/datasets/issues/2165/events | https://github.com/huggingface/datasets/issues/2165 | 849,771,665 | MDU6SXNzdWU4NDk3NzE2NjU= | 2,165 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/24562381?v=4",
"events_url": "https://api.github.com/users/y-rokutan/events{/privacy}",
"followers_url": "https://api.github.com/users/y-rokutan/followers",
"following_url": "https://api.github.com/users/y-rokutan/following{/other_user}",
"gists_url": "https://api.github.com/users/y-rokutan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/y-rokutan",
"id": 24562381,
"login": "y-rokutan",
"node_id": "MDQ6VXNlcjI0NTYyMzgx",
"organizations_url": "https://api.github.com/users/y-rokutan/orgs",
"received_events_url": "https://api.github.com/users/y-rokutan/received_events",
"repos_url": "https://api.github.com/users/y-rokutan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/y-rokutan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/y-rokutan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/y-rokutan"
} | [] | closed | false | null | [] | null | 7 | "2021-04-04T01:01:48Z" | "2021-08-24T15:55:35Z" | "2021-04-07T15:06:04Z" | NONE | null | null | null | Hi,
I'm trying to pretrain a DeepSpeed model using the HF arxiv dataset like this:
```
import nlp  # the pre-rename `datasets` package this snippet was written against
import deepspeed

# `args` and `model` come from the surrounding training script
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
args=args,
model=model,
model_parameters=[p for p in model.parameters() if p.requires_grad],
training_data=train_ds)
```
but deepspeed.initialize accepts torch.utils.data.Dataset only. How can I convert HF-style dataset to torch-style dataset?
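A minimal sketch of one way to bridge the two (not an official API): an HF `Dataset` with the torch format set already implements `__len__` and `__getitem__`, so a thin `torch.utils.data.Dataset` wrapper is usually enough. `TorchDatasetWrapper` is a hypothetical name.
```python
from torch.utils.data import Dataset as TorchDataset

class TorchDatasetWrapper(TorchDataset):
    """Expose an HF dataset (e.g. train_ds['train'] with the torch format set) as a torch Dataset."""

    def __init__(self, hf_dataset):
        self.hf_dataset = hf_dataset

    def __len__(self):
        return len(self.hf_dataset)

    def __getitem__(self, idx):
        # With set_format("torch"), indexing returns a dict of tensors
        return self.hf_dataset[idx]
```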
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2165/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2165/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2164/comments | https://api.github.com/repos/huggingface/datasets/issues/2164/events | https://github.com/huggingface/datasets/pull/2164 | 849,739,759 | MDExOlB1bGxSZXF1ZXN0NjA4NDQ0MTE3 | 2,164 | Replace assertTrue(isinstance with assertIsInstance in tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 0 | "2021-04-03T21:07:02Z" | "2021-04-06T14:41:09Z" | "2021-04-06T14:41:08Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2164.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2164",
"merged_at": "2021-04-06T14:41:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2164.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2164"
} | Replaces all occurrences of the `assertTrue(isinstance(` pattern with `assertIsInstance` (a representative before/after snippet follows this row). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2164/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2164/timeline | null | null | true |
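For the PR above, a representative before/after; `dset` and `Dataset` are placeholders for whatever object and type a given test checks.
```python
# before
self.assertTrue(isinstance(dset, Dataset))
# after: assertIsInstance reports the offending type on failure
self.assertIsInstance(dset, Dataset)
```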
https://api.github.com/repos/huggingface/datasets/issues/2163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2163/comments | https://api.github.com/repos/huggingface/datasets/issues/2163/events | https://github.com/huggingface/datasets/pull/2163 | 849,669,366 | MDExOlB1bGxSZXF1ZXN0NjA4Mzk0NDMz | 2,163 | Concat only unique fields in DatasetInfo.from_merge | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 3 | "2021-04-03T14:31:30Z" | "2021-04-06T14:40:00Z" | "2021-04-06T14:39:59Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2163.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2163",
"merged_at": "2021-04-06T14:39:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2163.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2163"
} | I thought someone from the community with less experience would be interested in fixing this issue, but that wasn't the case.
Fixes #2103 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2163/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2163/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2162/comments | https://api.github.com/repos/huggingface/datasets/issues/2162/events | https://github.com/huggingface/datasets/issues/2162 | 849,129,201 | MDU6SXNzdWU4NDkxMjkyMDE= | 2,162 | visualization for cc100 is broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | 3 | "2021-04-02T10:11:13Z" | "2022-10-05T13:20:24Z" | "2022-10-05T13:20:24Z" | NONE | null | null | null | Hi
the visualization through the dataset viewer for cc100 is broken:
https://huggingface.co/datasets/viewer/
thanks a lot
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2162/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2162/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2161/comments | https://api.github.com/repos/huggingface/datasets/issues/2161/events | https://github.com/huggingface/datasets/issues/2161 | 849,127,041 | MDU6SXNzdWU4NDkxMjcwNDE= | 2,161 | any possibility to download part of large datasets only? | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | 6 | "2021-04-02T10:06:46Z" | "2022-10-05T13:26:51Z" | "2022-10-05T13:26:51Z" | NONE | null | null | null | Hi
Some of the datasets I need, like cc100, are very large, so I wonder if I can download only the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? thanks (a streaming sketch follows this row) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2161/timeline | null | completed | false |
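For the issue above, newer `datasets` releases support streaming, which avoids a full download. A hedged sketch, assuming a version with `streaming=True`; the `lang` kwarg follows the cc100 loading script:
```python
from itertools import islice
from datasets import load_dataset

# Stream cc100 lazily instead of downloading it in full
streamed = load_dataset("cc100", lang="en", streaming=True, split="train")
first_1000 = list(islice(streamed, 1000))  # fetches only the first 1000 examples
```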
https://api.github.com/repos/huggingface/datasets/issues/2160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2160/comments | https://api.github.com/repos/huggingface/datasets/issues/2160/events | https://github.com/huggingface/datasets/issues/2160 | 849,052,921 | MDU6SXNzdWU4NDkwNTI5MjE= | 2,160 | data_args.preprocessing_num_workers almost freezes | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | 2 | "2021-04-02T07:56:13Z" | "2021-04-02T10:14:32Z" | "2021-04-02T10:14:31Z" | NONE | null | null | null | Hi @lhoestq
I am running this code from huggingface transformers https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
to speed up tokenization. Since I am running on multiple datasets, I am using data_args.preprocessing_num_workers = 4 with the opus100 corpus, but tokenization proceeds up to a point, then almost freezes for a while, and then resumes; overall it takes more time than the single-process case. I appreciate your advice on how to use this option properly to speed things up (a `num_proc` sketch follows this row).
thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2160/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2160/timeline | null | completed | false |
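For the issue above, `preprocessing_num_workers` maps onto `datasets.map(..., num_proc=...)`. A hedged sketch of the relevant call; `tokenize_function` here is a stand-in for run_mlm.py's tokenizer call, and the "en-fr" config is just an example:
```python
from datasets import load_dataset

raw = load_dataset("opus100", "en-fr", split="train")

def tokenize_function(batch):
    # Stand-in for the script's real tokenization
    return {"n_chars": [len(t["en"]) for t in batch["translation"]]}

tokenized = raw.map(
    tokenize_function,
    batched=True,  # big batches amortize per-process overhead
    num_proc=4,    # corresponds to data_args.preprocessing_num_workers
)
```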
https://api.github.com/repos/huggingface/datasets/issues/2159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2159/comments | https://api.github.com/repos/huggingface/datasets/issues/2159/events | https://github.com/huggingface/datasets/issues/2159 | 848,851,962 | MDU6SXNzdWU4NDg4NTE5NjI= | 2,159 | adding ccnet dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 1 | "2021-04-01T23:28:36Z" | "2021-04-02T10:05:19Z" | "2021-04-02T10:05:19Z" | NONE | null | null | null | ## Adding a Dataset
- **Name:** ccnet
- **Description:**
Clean monolingual text in many languages, extracted from Common Crawl by the CCNet pipeline
- **Paper:**
https://arxiv.org/abs/1911.00359
- **Data:**
https://github.com/facebookresearch/cc_net
- **Motivation:**
This is one of the most comprehensive clean monolingual datasets across a variety of languages, and quite important for cross-lingual research.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2159/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2159/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2158/comments | https://api.github.com/repos/huggingface/datasets/issues/2158/events | https://github.com/huggingface/datasets/issues/2158 | 848,506,746 | MDU6SXNzdWU4NDg1MDY3NDY= | 2,158 | viewer "fake_news_english" error | {
"avatar_url": "https://avatars.githubusercontent.com/u/9447991?v=4",
"events_url": "https://api.github.com/users/emanuelevivoli/events{/privacy}",
"followers_url": "https://api.github.com/users/emanuelevivoli/followers",
"following_url": "https://api.github.com/users/emanuelevivoli/following{/other_user}",
"gists_url": "https://api.github.com/users/emanuelevivoli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emanuelevivoli",
"id": 9447991,
"login": "emanuelevivoli",
"node_id": "MDQ6VXNlcjk0NDc5OTE=",
"organizations_url": "https://api.github.com/users/emanuelevivoli/orgs",
"received_events_url": "https://api.github.com/users/emanuelevivoli/received_events",
"repos_url": "https://api.github.com/users/emanuelevivoli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emanuelevivoli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emanuelevivoli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emanuelevivoli"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | 2 | "2021-04-01T14:13:20Z" | "2022-10-05T13:22:02Z" | "2022-10-05T13:22:02Z" | NONE | null | null | null | When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I've got this error:
> ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional dependency for reading xlsx files' for instance'
as well as the error Traceback.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2158/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2158/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2157/comments | https://api.github.com/repos/huggingface/datasets/issues/2157/events | https://github.com/huggingface/datasets/pull/2157 | 847,205,239 | MDExOlB1bGxSZXF1ZXN0NjA2MjM1NjUx | 2,157 | updated user permissions based on umask | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | 0 | "2021-03-31T19:38:29Z" | "2021-04-06T07:19:19Z" | "2021-04-06T07:19:19Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2157.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2157",
"merged_at": "2021-04-06T07:19:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2157.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2157"
} | Updated user permissions based on the running user's umask (#2065). Let me know if `0o666` looks good or whether I should change it to `~umask` only (to give execute permissions as well); a permissions sketch follows this row. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2157/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2157/timeline | null | null | true |
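For the umask question in the PR above, a minimal sketch of how `0o666` and the running user's umask combine (`default_file_permissions` is a hypothetical helper, not repository code):
```python
import os

def default_file_permissions():
    umask = os.umask(0)    # read the current umask...
    os.umask(umask)        # ...and restore it immediately
    return 0o666 & ~umask  # rw for all minus the umask bits; no execute bit
```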
https://api.github.com/repos/huggingface/datasets/issues/2156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2156/comments | https://api.github.com/repos/huggingface/datasets/issues/2156/events | https://github.com/huggingface/datasets/pull/2156 | 847,198,295 | MDExOlB1bGxSZXF1ZXN0NjA2MjI5MTky | 2,156 | User permissions | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | 0 | "2021-03-31T19:33:48Z" | "2021-03-31T19:34:24Z" | "2021-03-31T19:34:24Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2156.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2156",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2156.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2156"
} | Updated user permissions based on running user's umask. Let me know if `0o666` is looking good or should I change it to `~umask` only (to give execute permissions as well) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2156/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2156/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2155/comments | https://api.github.com/repos/huggingface/datasets/issues/2155/events | https://github.com/huggingface/datasets/pull/2155 | 846,786,897 | MDExOlB1bGxSZXF1ZXN0NjA1ODU3MTU4 | 2,155 | Add table classes to the documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 1 | "2021-03-31T14:36:10Z" | "2021-04-01T16:46:30Z" | "2021-03-31T15:42:08Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2155.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2155",
"merged_at": "2021-03-31T15:42:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2155.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2155"
} | Following #2025 , I added the table classes to the documentation
cc @albertvillanova | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2155/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2155/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2154/comments | https://api.github.com/repos/huggingface/datasets/issues/2154/events | https://github.com/huggingface/datasets/pull/2154 | 846,763,960 | MDExOlB1bGxSZXF1ZXN0NjA1ODM2Mjc1 | 2,154 | Adding the NorNE dataset for Norwegian POS and NER | {
"avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4",
"events_url": "https://api.github.com/users/versae/events{/privacy}",
"followers_url": "https://api.github.com/users/versae/followers",
"following_url": "https://api.github.com/users/versae/following{/other_user}",
"gists_url": "https://api.github.com/users/versae/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/versae",
"id": 173537,
"login": "versae",
"node_id": "MDQ6VXNlcjE3MzUzNw==",
"organizations_url": "https://api.github.com/users/versae/orgs",
"received_events_url": "https://api.github.com/users/versae/received_events",
"repos_url": "https://api.github.com/users/versae/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/versae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/versae/subscriptions",
"type": "User",
"url": "https://api.github.com/users/versae"
} | [] | closed | false | null | [] | null | 1 | "2021-03-31T14:22:50Z" | "2021-04-01T09:27:00Z" | "2021-04-01T09:16:08Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2154.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2154",
"merged_at": "2021-04-01T09:16:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2154.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2154"
} | NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (BokmΓ₯l and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.
See #1720. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2154/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2154/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2153/comments | https://api.github.com/repos/huggingface/datasets/issues/2153/events | https://github.com/huggingface/datasets/issues/2153 | 846,181,502 | MDU6SXNzdWU4NDYxODE1MDI= | 2,153 | load_dataset ignoring features | {
"avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4",
"events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}",
"followers_url": "https://api.github.com/users/GuillemGSubies/followers",
"following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}",
"gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/GuillemGSubies",
"id": 37592763,
"login": "GuillemGSubies",
"node_id": "MDQ6VXNlcjM3NTkyNzYz",
"organizations_url": "https://api.github.com/users/GuillemGSubies/orgs",
"received_events_url": "https://api.github.com/users/GuillemGSubies/received_events",
"repos_url": "https://api.github.com/users/GuillemGSubies/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions",
"type": "User",
"url": "https://api.github.com/users/GuillemGSubies"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 3 | "2021-03-31T08:30:09Z" | "2022-10-05T13:29:12Z" | "2022-10-05T13:29:12Z" | NONE | null | null | null | First of all, I'm sorry if it is a repeated issue or the changes are already in master, I searched and I didn't find anything.
I'm using datasets 1.5.0

As you can see, when I load the dataset, the ClassLabels are ignored; I have to cast the dataset in order to make it work (a sketch of the cast follows below).
Code to reproduce:
```python
import datasets
data_location = "/data/prueba_multiclase"
features = datasets.Features(
{"texto": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["false", "true"])}
)
dataset = datasets.load_dataset(
"csv", data_files=data_location, delimiter="\t", features=features
)
```
Dataset I used:
[prueba_multiclase.zip](https://github.com/huggingface/datasets/files/6235022/prueba_multiclase.zip) (it has to be unzipped)
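A hedged sketch of the cast workaround mentioned above, reusing the `features` object from the repro code; method availability may vary by version (older releases use the in-place `cast_`):
```python
# Re-apply the intended schema so "label" becomes a ClassLabel again
dataset["train"] = dataset["train"].cast(features)
```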
Thank you! ❤️
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2153/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2153/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2152/comments | https://api.github.com/repos/huggingface/datasets/issues/2152/events | https://github.com/huggingface/datasets/pull/2152 | 845,751,273 | MDExOlB1bGxSZXF1ZXN0NjA0ODk0MDkz | 2,152 | Update README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}",
"followers_url": "https://api.github.com/users/JieyuZhao/followers",
"following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JieyuZhao",
"id": 22306304,
"login": "JieyuZhao",
"node_id": "MDQ6VXNlcjIyMzA2MzA0",
"organizations_url": "https://api.github.com/users/JieyuZhao/orgs",
"received_events_url": "https://api.github.com/users/JieyuZhao/received_events",
"repos_url": "https://api.github.com/users/JieyuZhao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JieyuZhao"
} | [] | closed | false | null | [] | null | 0 | "2021-03-31T03:21:19Z" | "2021-04-01T10:20:37Z" | "2021-04-01T10:20:36Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2152.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2152",
"merged_at": "2021-04-01T10:20:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2152.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2152"
} | Updated some descriptions of Wino_Bias dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2152/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2152/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2151/comments | https://api.github.com/repos/huggingface/datasets/issues/2151/events | https://github.com/huggingface/datasets/pull/2151 | 844,886,081 | MDExOlB1bGxSZXF1ZXN0NjA0MDg5MDMw | 2,151 | Add support for axis in concatenate datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-04-16T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"id": 6644198,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"open_issues": 0,
"state": "closed",
"title": "1.6",
"updated_at": "2021-04-20T16:50:46Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1"
} | 5 | "2021-03-30T16:58:44Z" | "2021-06-23T17:41:02Z" | "2021-04-19T16:07:18Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2151.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2151",
"merged_at": "2021-04-19T16:07:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2151.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2151"
} | Add support for `axis` (0 or 1) in `concatenate_datasets`.
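A quick usage sketch of the new kwarg (toy columns for illustration):
```python
from datasets import Dataset, concatenate_datasets

a = Dataset.from_dict({"x": [1, 2]})
b = Dataset.from_dict({"y": [3, 4]})

stacked = concatenate_datasets([a, a], axis=0)        # more rows, same columns
side_by_side = concatenate_datasets([a, b], axis=1)   # same rows, columns x and y
```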
Close #853. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2151/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2151/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2150/comments | https://api.github.com/repos/huggingface/datasets/issues/2150/events | https://github.com/huggingface/datasets/pull/2150 | 844,776,448 | MDExOlB1bGxSZXF1ZXN0NjAzOTg3OTcx | 2,150 | Allow pickling of big in-memory tables | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-03-30T15:51:56Z" | "2021-03-31T10:37:15Z" | "2021-03-31T10:37:14Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2150.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2150",
"merged_at": "2021-03-31T10:37:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2150.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2150"
} | This should fix issue #2134
Pickling is limited to <4GiB objects, it's not possible to pickle a big arrow table (for multiprocessing for example).
For big tables, we have to write them to disk and only pickle the path to the table (a sketch of this idea follows this row). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2150/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2150/timeline | null | null | true |
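For the PR above, a hedged sketch of the pickle-the-path idea; the class name and I/O details are hypothetical, not the library's actual implementation:
```python
import pyarrow.feather as feather

class OnDiskTable:
    def __init__(self, path):
        self.path = path
        self.table = feather.read_table(path, memory_map=True)  # memory-mapped load

    def __reduce__(self):
        # Pickle only the path; the potentially >4GiB table is reloaded on unpickle
        return (OnDiskTable, (self.path,))
```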
https://api.github.com/repos/huggingface/datasets/issues/2149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2149/comments | https://api.github.com/repos/huggingface/datasets/issues/2149/events | https://github.com/huggingface/datasets/issues/2149 | 844,734,076 | MDU6SXNzdWU4NDQ3MzQwNzY= | 2,149 | Telugu subset missing for xtreme tatoeba dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}",
"followers_url": "https://api.github.com/users/jerryIsHere/followers",
"following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}",
"gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jerryIsHere",
"id": 50871412,
"login": "jerryIsHere",
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"organizations_url": "https://api.github.com/users/jerryIsHere/orgs",
"received_events_url": "https://api.github.com/users/jerryIsHere/received_events",
"repos_url": "https://api.github.com/users/jerryIsHere/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jerryIsHere"
} | [] | closed | false | null | [] | null | 2 | "2021-03-30T15:26:34Z" | "2022-10-05T13:28:30Z" | "2022-10-05T13:28:30Z" | CONTRIBUTOR | null | null | null | from nlp import load_dataset
train_dataset = load_dataset('xtreme', 'tatoeba.tel')['validation']
ValueError: BuilderConfig tatoeba.tel not found.
but the language tel is actually included in xtreme (a config-listing sketch follows this row):
https://github.com/google-research/xtreme/blob/master/utils_preprocess.py
def tatoeba_preprocess(args):
lang3_dict = {
'afr':'af', 'ara':'ar', 'bul':'bg', 'ben':'bn',
'deu':'de', 'ell':'el', 'spa':'es', 'est':'et',
'eus':'eu', 'pes':'fa', 'fin':'fi', 'fra':'fr',
'heb':'he', 'hin':'hi', 'hun':'hu', 'ind':'id',
'ita':'it', 'jpn':'ja', 'jav':'jv', 'kat':'ka',
'kaz':'kk', 'kor':'ko', 'mal':'ml', 'mar':'mr',
'nld':'nl', 'por':'pt', 'rus':'ru', 'swh':'sw',
'tam':'ta', **_'tel':'te'_**, 'tha':'th', 'tgl':'tl', <----here
'tur':'tr', 'urd':'ur', 'vie':'vi', 'cmn':'zh',
'eng':'en',
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2149/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2149/timeline | null | completed | false |
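For the issue above, a hedged way to list the tatoeba configs that the installed release actually exposes (`get_dataset_config_names` assumes a reasonably recent `datasets` version):
```python
from datasets import get_dataset_config_names

configs = get_dataset_config_names("xtreme")
print(sorted(c for c in configs if c.startswith("tatoeba")))  # tatoeba.tel should appear once supported
```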
https://api.github.com/repos/huggingface/datasets/issues/2148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2148/comments | https://api.github.com/repos/huggingface/datasets/issues/2148/events | https://github.com/huggingface/datasets/issues/2148 | 844,700,910 | MDU6SXNzdWU4NDQ3MDA5MTA= | 2,148 | Add configurable options to `seqeval` metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4",
"events_url": "https://api.github.com/users/marrodion/events{/privacy}",
"followers_url": "https://api.github.com/users/marrodion/followers",
"following_url": "https://api.github.com/users/marrodion/following{/other_user}",
"gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marrodion",
"id": 44571847,
"login": "marrodion",
"node_id": "MDQ6VXNlcjQ0NTcxODQ3",
"organizations_url": "https://api.github.com/users/marrodion/orgs",
"received_events_url": "https://api.github.com/users/marrodion/received_events",
"repos_url": "https://api.github.com/users/marrodion/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marrodion/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marrodion"
} | [] | closed | false | null | [] | null | 1 | "2021-03-30T15:04:06Z" | "2021-04-15T13:49:46Z" | "2021-04-15T13:49:46Z" | CONTRIBUTOR | null | null | null | Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation).
However, the seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in just by supporting additional kwargs in `Seqeval._compute`:
https://github.com/huggingface/datasets/blob/85cf7ff920c90ca2e12bedca12b36d2a043c3da2/metrics/seqeval/seqeval.py#L109
Things that would be relevant are, for example, supporting `mode="strict", scheme=IOB2` to count only full entity match as a true positive and omit partial matches.
The only problem I see is that the spirit of `metrics` seems to be that no additional imports should be required from the user, while `seqeval` only supports schemes as objects, without any string aliases.
It can be solved naively with a mapping like `{"IOB2": seqeval.scheme.IOB2}` (sketched below), or just left as is, requiring the user to explicitly import the scheme from `seqeval` if they want to configure it beyond the default implementation.
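A minimal sketch of that naive alias mapping, assuming seqeval 1.x exposes these classes in `seqeval.scheme`; `resolve_scheme` is a hypothetical helper:
```python
from seqeval.scheme import BILOU, IOB1, IOB2, IOBES, IOE1, IOE2

# String aliases so callers can pass scheme="IOB2" without importing seqeval
_SCHEMES = {"IOB1": IOB1, "IOB2": IOB2, "IOE1": IOE1, "IOE2": IOE2, "IOBES": IOBES, "BILOU": BILOU}

def resolve_scheme(scheme):
    # Accept either a string alias or a seqeval scheme class
    return _SCHEMES[scheme] if isinstance(scheme, str) else scheme
```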
If that makes sense, I am happy to implement the change. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2148/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2148/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2147/comments | https://api.github.com/repos/huggingface/datasets/issues/2147/events | https://github.com/huggingface/datasets/pull/2147 | 844,687,831 | MDExOlB1bGxSZXF1ZXN0NjAzOTA3NjM4 | 2,147 | Render docstring return type as inline | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | 0 | "2021-03-30T14:55:43Z" | "2021-03-31T13:11:05Z" | "2021-03-31T13:11:05Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2147.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2147",
"merged_at": "2021-03-31T13:11:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2147.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2147"
} | This documentation setting will avoid having the return type in a separate line under `Return type`.
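Presumably this is Sphinx's Napoleon option; a hedged sketch of the setting (file location assumed):
```python
# docs/source/conf.py (assumed location)
napoleon_use_rtype = False  # fold the return type into the "Returns" entry instead of a separate field
```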
See e.g. current docs for `Dataset.to_csv`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2147/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2147/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2146/comments | https://api.github.com/repos/huggingface/datasets/issues/2146/events | https://github.com/huggingface/datasets/issues/2146 | 844,673,244 | MDU6SXNzdWU4NDQ2NzMyNDQ= | 2,146 | Dataset file size on disk is very large with 3D Array | {
"avatar_url": "https://avatars.githubusercontent.com/u/22685854?v=4",
"events_url": "https://api.github.com/users/jblemoine/events{/privacy}",
"followers_url": "https://api.github.com/users/jblemoine/followers",
"following_url": "https://api.github.com/users/jblemoine/following{/other_user}",
"gists_url": "https://api.github.com/users/jblemoine/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jblemoine",
"id": 22685854,
"login": "jblemoine",
"node_id": "MDQ6VXNlcjIyNjg1ODU0",
"organizations_url": "https://api.github.com/users/jblemoine/orgs",
"received_events_url": "https://api.github.com/users/jblemoine/received_events",
"repos_url": "https://api.github.com/users/jblemoine/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jblemoine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jblemoine/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jblemoine"
} | [] | open | false | null | [] | null | 6 | "2021-03-30T14:46:09Z" | "2021-04-16T13:07:02Z" | null | NONE | null | null | null | Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D arrays (`Array3D`) with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`.
`{
"description": "",
"citation": "",
"homepage": "",
"license": "",
"features": {
"image": {
"shape": [224, 224, 3],
"dtype": "uint8",
"id": null,
"_type": "Array3D",
}
},
"post_processed": null,
"supervised_keys": null,
"builder_name": "shot_type_image_dataset",
"config_name": "default",
"version": {
"version_str": "0.0.0",
"description": null,
"major": 0,
"minor": 0,
"patch": 0,
},
"splits": {
"train": {
"name": "train",
"num_bytes": 520803408,
"num_examples": 1479,
"dataset_name": "shot_type_image_dataset",
}
},
"download_checksums": {
"": {
"num_bytes": 16940447118,
"checksum": "5854035705efe08b0ed8f3cf3da7b4d29cba9055c2d2d702c79785350d72ee03",
}
},
"download_size": 16940447118,
"post_processing_size": null,
"dataset_size": 520803408,
"size_in_bytes": 17461250526,
}`
I have created the same dataset with tensorflow_datasets and it takes only 125 MB on disk.
I am wondering, is this normal behavior? I understand `Datasets` uses Arrow for serialization, whereas tf uses TFRecords.
This might be a problem for large datasets.
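For reference, a back-of-the-envelope check of the numbers above (pure arithmetic, not a measurement):
```python
num_examples = 1479
bytes_per_image = 224 * 224 * 3              # uint8, so 1 byte per value: 150,528 bytes
print(num_examples * bytes_per_image / 1e6)  # ~222.6 MB of raw pixels vs ~520 MB on disk
```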
Thanks for your help.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2146/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2146/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2145/comments | https://api.github.com/repos/huggingface/datasets/issues/2145/events | https://github.com/huggingface/datasets/pull/2145 | 844,603,518 | MDExOlB1bGxSZXF1ZXN0NjAzODMxOTE2 | 2,145 | Implement Dataset add_column | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-05-31T16:20:53Z",
"closed_issues": 3,
"created_at": "2021-04-09T13:16:31Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-05-14T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/3",
"id": 6644287,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==",
"number": 3,
"open_issues": 0,
"state": "closed",
"title": "1.7",
"updated_at": "2021-05-31T16:20:53Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/3"
} | 1 | "2021-03-30T14:02:14Z" | "2021-04-29T14:50:44Z" | "2021-04-29T14:50:43Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2145.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2145",
"merged_at": "2021-04-29T14:50:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2145.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2145"
} | Implement `Dataset.add_column`.
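A quick usage sketch of the new method (column name and values are illustrative):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})
ds = ds.add_column("idx", list(range(len(ds))))  # returns a new dataset with the extra column
```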
Close #1954. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2145/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2145/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2144/comments | https://api.github.com/repos/huggingface/datasets/issues/2144/events | https://github.com/huggingface/datasets/issues/2144 | 844,352,067 | MDU6SXNzdWU4NDQzNTIwNjc= | 2,144 | Loading wikipedia 20200501.en throws pyarrow related error | {
"avatar_url": "https://avatars.githubusercontent.com/u/26637405?v=4",
"events_url": "https://api.github.com/users/TomPyonsuke/events{/privacy}",
"followers_url": "https://api.github.com/users/TomPyonsuke/followers",
"following_url": "https://api.github.com/users/TomPyonsuke/following{/other_user}",
"gists_url": "https://api.github.com/users/TomPyonsuke/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TomPyonsuke",
"id": 26637405,
"login": "TomPyonsuke",
"node_id": "MDQ6VXNlcjI2NjM3NDA1",
"organizations_url": "https://api.github.com/users/TomPyonsuke/orgs",
"received_events_url": "https://api.github.com/users/TomPyonsuke/received_events",
"repos_url": "https://api.github.com/users/TomPyonsuke/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TomPyonsuke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomPyonsuke/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TomPyonsuke"
} | [] | open | false | null | [] | null | 6 | "2021-03-30T10:38:31Z" | "2021-04-01T09:21:17Z" | null | NONE | null | null | null | **Problem description**
I am getting the following error when trying to load the wikipedia/20200501.en dataset.
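For reference, this is essentially the call that triggers it (the same script and arguments appear in the traceback below):

```python
# load_wiki.py -- minimal reproduction
from datasets import load_dataset

# The download/prepare step appears to finish, but the pyarrow error below
# is raised while reading the prepared Arrow files back from the cache.
ds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache')
```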
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931...
Downloading: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 14.6k/14.6k [00:00<00:00, 5.41MB/s]
Downloading: 59%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 10.7G/18.3G [11:30<08:08, 15.5MB/s]
Dataset wikipedia downloaded and prepared to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931. Subsequent calls will reuse this data.
Traceback (most recent call last):
File "load_wiki.py", line 2, in <module>
ds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache')
File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 751, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 746, in as_dataset
map_tuple=True,
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested
return function(data_struct)
File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 763, in _build_single_dataset
in_memory=in_memory,
File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 835, in _as_dataset
in_memory=in_memory,
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 215, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 236, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 171, in _read_files
pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename
pa_table = ArrowReader.read_table(filename, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 324, in read_table
pa_table = f.read_all()
File "pyarrow/ipc.pxi", line 544, in pyarrow.lib.RecordBatchReader.read_all
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
OSError: Expected to be able to read 9176784 bytes for message body, got 4918712
**Detailed version info**
datasets==1.5.0
- dataclasses [required: Any, installed: 0.8]
- dill [required: Any, installed: 0.3.3]
- fsspec [required: Any, installed: 0.8.7]
- importlib-metadata [required: Any, installed: 1.7.0]
- zipp [required: >=0.5, installed: 3.1.0]
- huggingface-hub [required: <0.1.0, installed: 0.0.7]
- filelock [required: Any, installed: 3.0.12]
- importlib-metadata [required: Any, installed: 1.7.0]
- zipp [required: >=0.5, installed: 3.1.0]
- requests [required: Any, installed: 2.24.0]
- certifi [required: >=2017.4.17, installed: 2020.6.20]
- chardet [required: >=3.0.2,<4, installed: 3.0.4]
- idna [required: >=2.5,<3, installed: 2.6]
- urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10]
- tqdm [required: Any, installed: 4.49.0]
- importlib-metadata [required: Any, installed: 1.7.0]
- zipp [required: >=0.5, installed: 3.1.0]
- multiprocess [required: Any, installed: 0.70.11.1]
- dill [required: >=0.3.3, installed: 0.3.3]
- numpy [required: >=1.17, installed: 1.17.0]
- pandas [required: Any, installed: 1.1.5]
- numpy [required: >=1.15.4, installed: 1.17.0]
- python-dateutil [required: >=2.7.3, installed: 2.8.0]
- six [required: >=1.5, installed: 1.15.0]
- pytz [required: >=2017.2, installed: 2020.1]
- pyarrow [required: >=0.17.1, installed: 3.0.0]
- numpy [required: >=1.16.6, installed: 1.17.0]
- requests [required: >=2.19.0, installed: 2.24.0]
- certifi [required: >=2017.4.17, installed: 2020.6.20]
- chardet [required: >=3.0.2,<4, installed: 3.0.4]
- idna [required: >=2.5,<3, installed: 2.6]
- urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10]
- tqdm [required: >=4.27,<4.50.0, installed: 4.49.0]
- xxhash [required: Any, installed: 2.0.0]
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2144/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2144/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2143/comments | https://api.github.com/repos/huggingface/datasets/issues/2143/events | https://github.com/huggingface/datasets/pull/2143 | 844,313,228 | MDExOlB1bGxSZXF1ZXN0NjAzNTc0NjI0 | 2,143 | task casting via load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
}
] | null | 0 | "2021-03-30T10:00:42Z" | "2021-06-11T13:20:41Z" | "2021-06-11T13:20:36Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2143.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2143",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2143.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2143"
} | WIP.
Not satisfied with the API yet: as it stands, a dataset implementer needs to write a boilerplate function and a class for each `<dataset><task>` "facet". | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2143/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2143/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2142/comments | https://api.github.com/repos/huggingface/datasets/issues/2142/events | https://github.com/huggingface/datasets/pull/2142 | 843,919,420 | MDExOlB1bGxSZXF1ZXN0NjAzMjQwMzUy | 2,142 | Gem V1.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | 0 | "2021-03-29T23:47:02Z" | "2021-03-30T00:10:02Z" | "2021-03-30T00:10:02Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2142.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2142",
"merged_at": "2021-03-30T00:10:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2142.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2142"
} | This branch updates the GEM benchmark to version 1.1, which includes:
- challenge sets for most tasks
- detokenized TurkCorpus to match the rest of the text simplification subtasks
- fixed inputs for TurkCorpus and ASSET test sets
- 18 languages in WikiLingua
cc @sebastianGehrmann | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2142/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2142/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2141/comments | https://api.github.com/repos/huggingface/datasets/issues/2141/events | https://github.com/huggingface/datasets/pull/2141 | 843,914,790 | MDExOlB1bGxSZXF1ZXN0NjAzMjM2MjUw | 2,141 | added spans field for the wikiann datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehk",
"id": 6278280,
"login": "rabeehk",
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehk"
} | [] | closed | false | null | [] | null | 3 | "2021-03-29T23:38:26Z" | "2021-03-31T13:27:50Z" | "2021-03-31T13:27:50Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2141.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2141",
"merged_at": "2021-03-31T13:27:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2141.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2141"
} | Hi @lhoestq
I tried to add spans to the wikiann datasets.
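A simplified sketch of how spans can be derived from the IOB-tagged tokens (the actual helper in the dataset script, and the exact span string format, may differ):

```python
def get_spans(tokens, tags):
    """Convert IOB-tagged tokens into spans like "PER: John Smith"."""
    spans = []
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-") or (tag.startswith("I-") and current_type is None):
            # Close any open span, then start a new one.
            if current_tokens:
                spans.append(f"{current_type}: {' '.join(current_tokens)}")
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-"):
            current_tokens.append(token)
        else:  # an "O" tag closes any open span
            if current_tokens:
                spans.append(f"{current_type}: {' '.join(current_tokens)}")
            current_type, current_tokens = None, []
    if current_tokens:
        spans.append(f"{current_type}: {' '.join(current_tokens)}")
    return spans
```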
Thanks a lot for kindly having a look.
This addresses https://github.com/huggingface/datasets/issues/2130.
Best regards
Rabeeh | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2141/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2141/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2140/comments | https://api.github.com/repos/huggingface/datasets/issues/2140/events | https://github.com/huggingface/datasets/pull/2140 | 843,830,451 | MDExOlB1bGxSZXF1ZXN0NjAzMTYxMjYx | 2,140 | add banking77 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4",
"events_url": "https://api.github.com/users/dkajtoch/events{/privacy}",
"followers_url": "https://api.github.com/users/dkajtoch/followers",
"following_url": "https://api.github.com/users/dkajtoch/following{/other_user}",
"gists_url": "https://api.github.com/users/dkajtoch/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dkajtoch",
"id": 32985207,
"login": "dkajtoch",
"node_id": "MDQ6VXNlcjMyOTg1MjA3",
"organizations_url": "https://api.github.com/users/dkajtoch/orgs",
"received_events_url": "https://api.github.com/users/dkajtoch/received_events",
"repos_url": "https://api.github.com/users/dkajtoch/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dkajtoch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkajtoch/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dkajtoch"
} | [] | closed | false | null | [] | null | 1 | "2021-03-29T21:32:23Z" | "2021-04-09T09:32:18Z" | "2021-04-09T09:32:18Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2140.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2140",
"merged_at": "2021-04-09T09:32:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2140.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2140"
} | Intent classification/detection dataset from the banking domain, with 77 unique intents. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2140/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2140/timeline | null | null | true |