url (stringlengths 58-61) | repository_url (stringclasses, 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64, 599M-2.12B) | node_id (stringlengths 18-32) | number (int64, 1-6.65k) | title (stringlengths 1-290) | user (dict) | labels (listlengths 0-4) | state (stringclasses, 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (listlengths 0-4) | milestone (dict) | comments (int64, 0-70) | created_at (unknown) | updated_at (unknown) | closed_at (unknown) | author_association (stringclasses, 3 values) | active_lock_reason (float64) | draft (float64, 0-1, ⌀) | pull_request (dict) | body (stringlengths 0-228k, ⌀) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (float64) | state_reason (stringclasses, 3 values) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/723 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/723/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/723/comments | https://api.github.com/repos/huggingface/datasets/issues/723/events | https://github.com/huggingface/datasets/issues/723 | 718,926,723 | MDU6SXNzdWU3MTg5MjY3MjM= | 723 | Adding pseudo-labels to datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
}
] | null | 8 | "2020-10-11T21:05:45Z" | "2021-08-03T05:11:51Z" | "2021-08-03T05:11:51Z" | CONTRIBUTOR | null | null | null | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is the right way to structure this contribution?
I read https://huggingface.co/docs/datasets/add_dataset.html, but it doesn't really cover this type of contribution.
I could, for example, make a new directory, `xsum_bart_pseudolabels` for each set of pseudolabels or add some sort of parametrization to `xsum.py`: https://github.com/huggingface/datasets/blob/5f4c6e830f603830117877b8990a0e65a2386aa6/datasets/xsum/xsum.py
What do you think @lhoestq ?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/723/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/723/timeline | null | completed | false |
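One way to express the parametrization floated in the issue above is a per-pseudo-label-set builder config inside the dataset script. The sketch below is purely illustrative: the config class, config names, and URL are hypothetical, not what the maintainers eventually adopted.

```python
import datasets


class PseudoLabelConfig(datasets.BuilderConfig):
    """Hypothetical config carrying the location of one set of pseudo-labels."""

    def __init__(self, pseudo_labels_url=None, **kwargs):
        super().__init__(**kwargs)
        self.pseudo_labels_url = pseudo_labels_url


# Inside e.g. xsum.py, one config per pseudo-label set could then be declared;
# _split_generators would download self.config.pseudo_labels_url when it is set.
BUILDER_CONFIGS = [
    PseudoLabelConfig(name="default", version=datasets.Version("1.0.0")),
    PseudoLabelConfig(
        name="bart_pseudolabels",  # hypothetical config name
        version=datasets.Version("1.0.0"),
        pseudo_labels_url="https://example.com/xsum_bart_pseudolabels.tgz",  # placeholder URL
    ),
]
```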
https://api.github.com/repos/huggingface/datasets/issues/722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/722/comments | https://api.github.com/repos/huggingface/datasets/issues/722/events | https://github.com/huggingface/datasets/pull/722 | 718,689,117 | MDExOlB1bGxSZXF1ZXN0NTAxMDI3NjAw | 722 | datasets(RWTH-PHOENIX-Weather 2014 T): add initial loading script | {
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | 3 | "2020-10-10T19:44:08Z" | "2022-09-30T14:53:37Z" | "2022-09-30T14:53:37Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/722.diff",
"html_url": "https://github.com/huggingface/datasets/pull/722",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/722.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/722"
} | This is the first sign language dataset in this repo as far as I know.
Following an old issue I opened https://github.com/huggingface/datasets/issues/302.
I added the dataset's official README file, but I see it's not very standard, so it can be removed.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/722/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/722/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/721/comments | https://api.github.com/repos/huggingface/datasets/issues/721/events | https://github.com/huggingface/datasets/issues/721 | 718,647,147 | MDU6SXNzdWU3MTg2NDcxNDc= | 721 | feat(dl_manager): add support for ftp downloads | {
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY"
} | [] | closed | false | null | [] | null | 11 | "2020-10-10T15:50:20Z" | "2022-02-15T10:44:44Z" | "2022-02-15T10:44:43Z" | CONTRIBUTOR | null | null | null | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.download_and_extract(_URL)
```
I get an error:
> ValueError: unable to parse ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz as a URL or as a local path
I checked, and indeed you don't consider `ftp` as a remote file.
https://github.com/huggingface/datasets/blob/4c2af707a6955cf4b45f83ac67990395327c5725/src/datasets/utils/file_utils.py#L188
Adding `ftp` to that list does not immediately solve the issue, so there probably needs to be some extra work.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/721/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/721/timeline | null | completed | false |
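For reference, Python's standard library can already fetch `ftp://` URLs, so one plausible direction for the feature requested above is to route such URLs through `urllib` instead of treating them as local paths. A minimal sketch, with the function name and fallback behaviour being assumptions rather than the download manager's actual logic:

```python
from urllib.parse import urlparse
from urllib.request import urlretrieve


def fetch(url_or_path, target):
    """Download http(s):// and ftp:// URLs; pass anything else through as a local path."""
    if urlparse(url_or_path).scheme in ("http", "https", "ftp"):
        filename, _ = urlretrieve(url_or_path, target)  # urlretrieve handles ftp:// natively
        return filename
    return url_or_path


# Example (commented out, the archive is several gigabytes):
# fetch("ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz",
#       "phoenix-2014-T.v3.tar.gz")
```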
https://api.github.com/repos/huggingface/datasets/issues/720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/720/comments | https://api.github.com/repos/huggingface/datasets/issues/720/events | https://github.com/huggingface/datasets/issues/720 | 716,581,266 | MDU6SXNzdWU3MTY1ODEyNjY= | 720 | OSError: Cannot find data file when not using the dummy dataset in RAG | {
"avatar_url": "https://avatars.githubusercontent.com/u/4112135?v=4",
"events_url": "https://api.github.com/users/josemlopez/events{/privacy}",
"followers_url": "https://api.github.com/users/josemlopez/followers",
"following_url": "https://api.github.com/users/josemlopez/following{/other_user}",
"gists_url": "https://api.github.com/users/josemlopez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/josemlopez",
"id": 4112135,
"login": "josemlopez",
"node_id": "MDQ6VXNlcjQxMTIxMzU=",
"organizations_url": "https://api.github.com/users/josemlopez/orgs",
"received_events_url": "https://api.github.com/users/josemlopez/received_events",
"repos_url": "https://api.github.com/users/josemlopez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/josemlopez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/josemlopez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/josemlopez"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 3 | "2020-10-07T14:27:13Z" | "2020-12-23T14:04:31Z" | "2020-12-23T14:04:31Z" | NONE | null | null | null | ## Environment info
transformers version: 3.3.1
Platform: Linux-4.19
Python version: 3.7.7
PyTorch version (GPU?): 1.6.0
Tensorflow version (GPU?): No
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
## To reproduce
Steps to reproduce the behaviour:
```
import os
os.environ['HF_DATASETS_CACHE'] = '/workspace/notebooks/POCs/cache'
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)
```
Please note that I'm using the whole dataset: **use_dummy_dataset=False**
After around 4 hours (downloading and some other things) this is returned:
```
Downloading and preparing dataset wiki_dpr/psgs_w100.nq.exact (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /workspace/notebooks/POCs/cache/wiki_dpr/psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2...
---------------------------------------------------------------------------
UnpicklingError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
459 try:
--> 460 return pickle.load(fid, **pickle_kwargs)
461 except Exception:
UnpicklingError: pickle data was truncated
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
552 # Prepare split will record examples associated to the split
--> 553 self._prepare_split(split_generator, **prepare_split_kwargs)
554 except OSError:
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _prepare_split(self, split_generator)
840 for key, record in utils.tqdm(
--> 841 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose
842 ):
/opt/conda/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)
217 try:
--> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):
219 # return super(tqdm...) will not catch exception
/opt/conda/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)
1128 try:
-> 1129 for obj in iterable:
1130 yield obj
~/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)
131 break
--> 132 vecs = np.load(open(vectors_files.pop(0), "rb"), allow_pickle=True)
133 vec_idx = 0
/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
462 raise IOError(
--> 463 "Failed to interpret file %s as a pickle" % repr(file))
464 finally:
OSError: Failed to interpret file <_io.BufferedReader name='/workspace/notebooks/POCs/cache/downloads/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448'> as a pickle
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-10-f28df370ac47> in <module>
1 # ln -s /workspace/notebooks/POCs/cache /root/.cache/huggingface/datasets
----> 2 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)
307 generator_tokenizer = rag_tokenizer.generator
308 return cls(
--> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer
310 )
311
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)
298 self.config = config
299 if self._init_retrieval:
--> 300 self.init_retrieval()
301
302 @classmethod
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_retrieval(self)
324
325 logger.info("initializing retrieval")
--> 326 self.index.init_index()
327
328 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None):
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_index(self)
238 split=self.dataset_split,
239 index_name=self.index_name,
--> 240 dummy=self.use_dummy_dataset,
241 )
242 self.dataset.set_format("numpy", columns=["embeddings"], output_all_columns=True)
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
609 download_config=download_config,
610 download_mode=download_mode,
--> 611 ignore_verifications=ignore_verifications,
612 )
613
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
474 if not downloaded_from_gcs:
475 self._download_and_prepare(
--> 476 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
477 )
478 # Sync info
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
553 self._prepare_split(split_generator, **prepare_split_kwargs)
554 except OSError:
--> 555 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
556
557 if verify_infos:
OSError: Cannot find data file.
```
Thanks
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/720/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/720/timeline | null | completed | false |
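The "pickle data was truncated" error in the traceback above often points at a partially downloaded cache file rather than at the dataset script itself. A hedged sketch of one thing to try; the exact values accepted for `download_mode` vary across `datasets` versions:

```python
from datasets import load_dataset

# Force the cached files to be fetched again instead of reusing a possibly corrupted download.
ds = load_dataset(
    "wiki_dpr",
    "psgs_w100.nq.exact",
    download_mode="force_redownload",
)
```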
https://api.github.com/repos/huggingface/datasets/issues/719 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/719/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/719/comments | https://api.github.com/repos/huggingface/datasets/issues/719/events | https://github.com/huggingface/datasets/pull/719 | 716,492,263 | MDExOlB1bGxSZXF1ZXN0NDk5MjE5Mjg2 | 719 | Fix train_test_split output format | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-10-07T12:39:01Z" | "2020-10-07T13:38:08Z" | "2020-10-07T13:38:06Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/719.diff",
"html_url": "https://github.com/huggingface/datasets/pull/719",
"merged_at": "2020-10-07T13:38:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/719.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/719"
} | There was an issue in the `transmit_format` wrapper that returned bad formats when using train_test_split.
This was due to `column_names` being handled as a List[str] instead of Dict[str, List[str]] when the dataset transform (train_test_split) returns a DatasetDict (one set of column names per split).
This should fix @timothyjlaurent 's issue in #620 and fix #676
I added tests for `transmit_format` so that it doesn't happen again | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/719/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/719/timeline | null | null | true |
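To make the distinction in the PR description concrete: a single `Dataset` exposes `column_names` as a flat list, while the `DatasetDict` returned by `train_test_split` exposes one list per split, which is the shape a format-transmitting wrapper has to handle. A small illustration in plain Python, not the library's internal code:

```python
single_columns = ["text", "label"]        # shape for a Dataset
per_split_columns = {                     # shape for a DatasetDict
    "train": ["text", "label"],
    "test": ["text", "label"],
}


def resolve_columns(column_names, split=None):
    """Return the column list whether we got the flat or the per-split shape."""
    if isinstance(column_names, dict):
        return column_names[split]
    return column_names


assert resolve_columns(single_columns) == resolve_columns(per_split_columns, "train")
```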
https://api.github.com/repos/huggingface/datasets/issues/718 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/718/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/718/comments | https://api.github.com/repos/huggingface/datasets/issues/718/events | https://github.com/huggingface/datasets/pull/718 | 715,694,709 | MDExOlB1bGxSZXF1ZXN0NDk4NTU5MDcw | 718 | Don't use tqdm 4.50.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-10-06T13:45:53Z" | "2020-10-06T13:49:24Z" | "2020-10-06T13:49:22Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/718.diff",
"html_url": "https://github.com/huggingface/datasets/pull/718",
"merged_at": "2020-10-06T13:49:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/718.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/718"
} | tqdm 4.50.0 introduced permission errors on windows
see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111) for the error details.
For now I just added `<4.50.0` in the setup.py
Hopefully we can find what's wrong with this version soon | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/718/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/718/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/717/comments | https://api.github.com/repos/huggingface/datasets/issues/717/events | https://github.com/huggingface/datasets/pull/717 | 714,959,268 | MDExOlB1bGxSZXF1ZXN0NDk3OTUwOTA2 | 717 | Fixes #712 Error in the Overview.ipynb notebook | {
"avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4",
"events_url": "https://api.github.com/users/subhrm/events{/privacy}",
"followers_url": "https://api.github.com/users/subhrm/followers",
"following_url": "https://api.github.com/users/subhrm/following{/other_user}",
"gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/subhrm",
"id": 850012,
"login": "subhrm",
"node_id": "MDQ6VXNlcjg1MDAxMg==",
"organizations_url": "https://api.github.com/users/subhrm/orgs",
"received_events_url": "https://api.github.com/users/subhrm/received_events",
"repos_url": "https://api.github.com/users/subhrm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subhrm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/subhrm"
} | [] | closed | false | null | [] | null | 0 | "2020-10-05T15:50:41Z" | "2020-10-06T06:31:43Z" | "2020-10-05T16:25:41Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/717.diff",
"html_url": "https://github.com/huggingface/datasets/pull/717",
"merged_at": "2020-10-05T16:25:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/717.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/717"
} | Fixes #712 Error in the Overview.ipynb notebook by adding `with_details=True` parameter to `list_datasets` function in Cell 3 of **overview** notebook | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/717/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/717/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/716/comments | https://api.github.com/repos/huggingface/datasets/issues/716/events | https://github.com/huggingface/datasets/pull/716 | 714,952,888 | MDExOlB1bGxSZXF1ZXN0NDk3OTQ1ODAw | 716 | Fixes #712 Attribute error in cell 3 of the overview notebook | {
"avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4",
"events_url": "https://api.github.com/users/subhrm/events{/privacy}",
"followers_url": "https://api.github.com/users/subhrm/followers",
"following_url": "https://api.github.com/users/subhrm/following{/other_user}",
"gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/subhrm",
"id": 850012,
"login": "subhrm",
"node_id": "MDQ6VXNlcjg1MDAxMg==",
"organizations_url": "https://api.github.com/users/subhrm/orgs",
"received_events_url": "https://api.github.com/users/subhrm/received_events",
"repos_url": "https://api.github.com/users/subhrm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subhrm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/subhrm"
} | [] | closed | false | null | [] | null | 1 | "2020-10-05T15:42:09Z" | "2020-10-05T15:46:38Z" | "2020-10-05T15:46:32Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/716.diff",
"html_url": "https://github.com/huggingface/datasets/pull/716",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/716.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/716"
} | Fixes the Attribute error in cell 3 of the overview notebook | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/716/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/716/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/715 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/715/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/715/comments | https://api.github.com/repos/huggingface/datasets/issues/715/events | https://github.com/huggingface/datasets/pull/715 | 714,690,192 | MDExOlB1bGxSZXF1ZXN0NDk3NzMwMDQ2 | 715 | Use python read for text dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 7 | "2020-10-05T09:47:55Z" | "2020-10-05T13:13:18Z" | "2020-10-05T13:13:17Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/715.diff",
"html_url": "https://github.com/huggingface/datasets/pull/715",
"merged_at": "2020-10-05T13:13:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/715.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/715"
} | As mentioned in #622, the pandas reader used for the text dataset doesn't work properly when there are \r characters in the text file.
Instead, I switched to pure Python using `open` and `read`.
From my benchmark on a 100MB text file, it's the same speed as the previous pandas reader. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/715/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/715/timeline | null | null | true |
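A minimal sketch of the pure-Python approach described above: read the whole file with `open`/`read` and split lines manually, so stray `\r` characters cannot confuse a CSV-style parser. The generator signature is illustrative, not the loader's exact code:

```python
def read_text_examples(path, encoding="utf-8"):
    """Yield one example per line, mirroring what a text loading script produces."""
    with open(path, encoding=encoding) as f:
        for idx, line in enumerate(f.read().splitlines()):
            yield idx, {"text": line}
```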
https://api.github.com/repos/huggingface/datasets/issues/714 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/714/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/714/comments | https://api.github.com/repos/huggingface/datasets/issues/714/events | https://github.com/huggingface/datasets/pull/714 | 714,487,881 | MDExOlB1bGxSZXF1ZXN0NDk3NTYzNjAx | 714 | Add the official dependabot implementation | {
"avatar_url": "https://avatars.githubusercontent.com/u/12804673?v=4",
"events_url": "https://api.github.com/users/ALazyMeme/events{/privacy}",
"followers_url": "https://api.github.com/users/ALazyMeme/followers",
"following_url": "https://api.github.com/users/ALazyMeme/following{/other_user}",
"gists_url": "https://api.github.com/users/ALazyMeme/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ALazyMeme",
"id": 12804673,
"login": "ALazyMeme",
"node_id": "MDQ6VXNlcjEyODA0Njcz",
"organizations_url": "https://api.github.com/users/ALazyMeme/orgs",
"received_events_url": "https://api.github.com/users/ALazyMeme/received_events",
"repos_url": "https://api.github.com/users/ALazyMeme/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ALazyMeme/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ALazyMeme/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ALazyMeme"
} | [] | closed | false | null | [] | null | 0 | "2020-10-05T03:49:45Z" | "2020-10-12T11:49:21Z" | "2020-10-12T11:49:21Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/714.diff",
"html_url": "https://github.com/huggingface/datasets/pull/714",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/714.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/714"
} | This will keep dependencies up to date. This will require a pr label `dependencies` being created in order to function correctly. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/714/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/714/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/713 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/713/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/713/comments | https://api.github.com/repos/huggingface/datasets/issues/713/events | https://github.com/huggingface/datasets/pull/713 | 714,475,732 | MDExOlB1bGxSZXF1ZXN0NDk3NTUzOTUy | 713 | Fix reading text files with carriage return symbols | {
"avatar_url": "https://avatars.githubusercontent.com/u/6762769?v=4",
"events_url": "https://api.github.com/users/mozharovsky/events{/privacy}",
"followers_url": "https://api.github.com/users/mozharovsky/followers",
"following_url": "https://api.github.com/users/mozharovsky/following{/other_user}",
"gists_url": "https://api.github.com/users/mozharovsky/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mozharovsky",
"id": 6762769,
"login": "mozharovsky",
"node_id": "MDQ6VXNlcjY3NjI3Njk=",
"organizations_url": "https://api.github.com/users/mozharovsky/orgs",
"received_events_url": "https://api.github.com/users/mozharovsky/received_events",
"repos_url": "https://api.github.com/users/mozharovsky/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mozharovsky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mozharovsky/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mozharovsky"
} | [] | closed | false | null | [] | null | 1 | "2020-10-05T03:07:03Z" | "2020-10-09T05:58:25Z" | "2020-10-05T13:49:29Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/713.diff",
"html_url": "https://github.com/huggingface/datasets/pull/713",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/713.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/713"
} | The new pandas-based text reader isn't able to work properly with files that contain carriage return symbols (`\r`).
It fails with the following error message:
```
...
File "pandas/_libs/parsers.pyx", line 847, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 874, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 918, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 905, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 2042, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file.
```
___
I figured out that pandas uses those symbols as line terminators, and this eventually causes the error. Explicitly specifying the `lineterminator` fixes the issue and everything works fine.
Please, consider this PR as it seems to be a common issue to solve. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/713/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/713/timeline | null | null | true |
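A hedged sketch of the workaround the PR describes: passing `lineterminator` to `pandas.read_csv` so that only `\n` ends a record. The file name, column name, and separator below are placeholders, not the loader's real call:

```python
import pandas as pd

df = pd.read_csv(
    "train.txt",
    names=["text"],
    header=None,
    dtype=str,
    sep="\x1f",            # a separator that should never occur in plain text
    lineterminator="\n",   # keep the C parser from splitting records on "\r"
)
```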
https://api.github.com/repos/huggingface/datasets/issues/712 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/712/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/712/comments | https://api.github.com/repos/huggingface/datasets/issues/712/events | https://github.com/huggingface/datasets/issues/712 | 714,242,316 | MDU6SXNzdWU3MTQyNDIzMTY= | 712 | Error in the notebooks/Overview.ipynb notebook | {
"avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4",
"events_url": "https://api.github.com/users/subhrm/events{/privacy}",
"followers_url": "https://api.github.com/users/subhrm/followers",
"following_url": "https://api.github.com/users/subhrm/following{/other_user}",
"gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/subhrm",
"id": 850012,
"login": "subhrm",
"node_id": "MDQ6VXNlcjg1MDAxMg==",
"organizations_url": "https://api.github.com/users/subhrm/orgs",
"received_events_url": "https://api.github.com/users/subhrm/received_events",
"repos_url": "https://api.github.com/users/subhrm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subhrm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/subhrm"
} | [] | closed | false | null | [] | null | 2 | "2020-10-04T05:58:31Z" | "2020-10-05T16:25:40Z" | "2020-10-05T16:25:40Z" | CONTRIBUTOR | null | null | null | Hi,
I got the following error in **cell number 3** while exploring the **Overview.ipynb** notebook in Google Colab. I used the [link](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) provided in the main README file to open it in Colab.
```python
# You can access various attributes of the datasets before downloading them
squad_dataset = list_datasets()[datasets.index('squad')]
pprint(squad_dataset.__dict__) # It's a simple python dataclass
```
Error message
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-8dc805c4949c> in <module>()
2 squad_dataset = list_datasets()[datasets.index('squad')]
3
----> 4 pprint(squad_dataset.__dict__) # It's a simple python dataclass
AttributeError: 'str' object has no attribute '__dict__'
```
The object `squad_dataset` is a `str` not a `dataclass` . | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/712/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/712/timeline | null | completed | false |
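The fix merged in #717 was to request detailed metadata objects instead of bare id strings. Roughly as below; the attribute layout of the returned objects depends on the `datasets`/hub version in use, so treat this as a sketch:

```python
from pprint import pprint

from datasets import list_datasets

datasets_list = list_datasets(with_details=True)
squad_dataset = next(d for d in datasets_list if getattr(d, "id", None) == "squad")
pprint(squad_dataset.__dict__)
```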
https://api.github.com/repos/huggingface/datasets/issues/711 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/711/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/711/comments | https://api.github.com/repos/huggingface/datasets/issues/711/events | https://github.com/huggingface/datasets/pull/711 | 714,236,408 | MDExOlB1bGxSZXF1ZXN0NDk3Mzc3NzU3 | 711 | New Update bertscore.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/51692618?v=4",
"events_url": "https://api.github.com/users/PassionateLooker/events{/privacy}",
"followers_url": "https://api.github.com/users/PassionateLooker/followers",
"following_url": "https://api.github.com/users/PassionateLooker/following{/other_user}",
"gists_url": "https://api.github.com/users/PassionateLooker/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PassionateLooker",
"id": 51692618,
"login": "PassionateLooker",
"node_id": "MDQ6VXNlcjUxNjkyNjE4",
"organizations_url": "https://api.github.com/users/PassionateLooker/orgs",
"received_events_url": "https://api.github.com/users/PassionateLooker/received_events",
"repos_url": "https://api.github.com/users/PassionateLooker/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PassionateLooker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PassionateLooker/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PassionateLooker"
} | [] | closed | false | null | [] | null | 0 | "2020-10-04T05:13:09Z" | "2020-10-05T16:26:51Z" | "2020-10-05T16:26:51Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/711.diff",
"html_url": "https://github.com/huggingface/datasets/pull/711",
"merged_at": "2020-10-05T16:26:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/711.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/711"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/711/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/711/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/710/comments | https://api.github.com/repos/huggingface/datasets/issues/710/events | https://github.com/huggingface/datasets/pull/710 | 714,186,999 | MDExOlB1bGxSZXF1ZXN0NDk3MzQ1NjQ0 | 710 | fix README typos/ consistency | {
"avatar_url": "https://avatars.githubusercontent.com/u/7703961?v=4",
"events_url": "https://api.github.com/users/discdiver/events{/privacy}",
"followers_url": "https://api.github.com/users/discdiver/followers",
"following_url": "https://api.github.com/users/discdiver/following{/other_user}",
"gists_url": "https://api.github.com/users/discdiver/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/discdiver",
"id": 7703961,
"login": "discdiver",
"node_id": "MDQ6VXNlcjc3MDM5NjE=",
"organizations_url": "https://api.github.com/users/discdiver/orgs",
"received_events_url": "https://api.github.com/users/discdiver/received_events",
"repos_url": "https://api.github.com/users/discdiver/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/discdiver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/discdiver/subscriptions",
"type": "User",
"url": "https://api.github.com/users/discdiver"
} | [] | closed | false | null | [] | null | 0 | "2020-10-03T22:20:56Z" | "2020-10-17T09:52:45Z" | "2020-10-17T09:52:45Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/710.diff",
"html_url": "https://github.com/huggingface/datasets/pull/710",
"merged_at": "2020-10-17T09:52:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/710.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/710"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/710/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/710/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/709 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/709/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/709/comments | https://api.github.com/repos/huggingface/datasets/issues/709/events | https://github.com/huggingface/datasets/issues/709 | 714,067,902 | MDU6SXNzdWU3MTQwNjc5MDI= | 709 | How to use similarity settings other then "BM25" in Elasticsearch index ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4",
"events_url": "https://api.github.com/users/nsankar/events{/privacy}",
"followers_url": "https://api.github.com/users/nsankar/followers",
"following_url": "https://api.github.com/users/nsankar/following{/other_user}",
"gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nsankar",
"id": 431890,
"login": "nsankar",
"node_id": "MDQ6VXNlcjQzMTg5MA==",
"organizations_url": "https://api.github.com/users/nsankar/orgs",
"received_events_url": "https://api.github.com/users/nsankar/received_events",
"repos_url": "https://api.github.com/users/nsankar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsankar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nsankar"
} | [] | closed | false | null | [] | null | 1 | "2020-10-03T11:18:49Z" | "2022-10-04T17:19:37Z" | "2022-10-04T17:19:37Z" | NONE | null | null | null | **QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than "BM25" ?**
**ES Reference**
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html
**HF doc reference:**
https://huggingface.co/docs/datasets/faiss_and_ea.html
**context :**
========
I used the latest Elasticsearch server version 7.9.2
When I set DFR, one of the other similarity algorithms supported by Elasticsearch, in the mapping, I get an error.
For example, this is the DFR setting I tried first in the mappings:
`"mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "DFR"}}},`
I get the following error
RequestError: RequestError(400, 'mapper_parsing_exception', 'Unknown Similarity type [DFR] for field [text]')
The other option I tried was to declare "similarity": "my_similarity" within the settings and then assign "my_similarity" inside the mappings, as below
`es_config = {
"settings": {
"number_of_shards": 1,
**"similarity": "my_similarity"**: {
"type": "DFR",
"basic_model": "g",
"after_effect": "l",
"normalization": "h2",
"normalization.h2.c": "3.0"
} ,
"analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}},
},
"mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}}},
}`
For this , I got the following error
RequestError: RequestError(400, 'illegal_argument_exception', 'unknown setting [index.similarity] please check that any required plugins are installed, or check the breaking changes documentation for removed settings')
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/709/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/709/timeline | null | completed | false |
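The `unknown setting [index.similarity]` error above typically means the custom similarity was declared at the wrong nesting level: Elasticsearch expects it under `settings.index.similarity.<name>`, with the mapping then referring to that name. A sketch of that shape, untested against the exact server and client versions in the report:

```python
es_config = {
    "settings": {
        "index": {
            "similarity": {
                "my_similarity": {
                    "type": "DFR",
                    "basic_model": "g",
                    "after_effect": "l",
                    "normalization": "h2",
                    "normalization.h2.c": "3.0",
                }
            }
        },
        "analysis": {
            "analyzer": {"stop_standard": {"type": "standard", "stopwords": "_english_"}}
        },
    },
    "mappings": {
        "properties": {
            "text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}
        }
    },
}

# The config can then be passed when building the index, e.g.:
# dataset.add_elasticsearch_index("text", es_client=es_client, es_index_config=es_config)
```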
https://api.github.com/repos/huggingface/datasets/issues/708 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/708/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/708/comments | https://api.github.com/repos/huggingface/datasets/issues/708/events | https://github.com/huggingface/datasets/issues/708 | 714,020,953 | MDU6SXNzdWU3MTQwMjA5NTM= | 708 | Datasets performance slow? - 6.4x slower than in memory dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38154?v=4",
"events_url": "https://api.github.com/users/eugeneware/events{/privacy}",
"followers_url": "https://api.github.com/users/eugeneware/followers",
"following_url": "https://api.github.com/users/eugeneware/following{/other_user}",
"gists_url": "https://api.github.com/users/eugeneware/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eugeneware",
"id": 38154,
"login": "eugeneware",
"node_id": "MDQ6VXNlcjM4MTU0",
"organizations_url": "https://api.github.com/users/eugeneware/orgs",
"received_events_url": "https://api.github.com/users/eugeneware/received_events",
"repos_url": "https://api.github.com/users/eugeneware/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eugeneware/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eugeneware/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eugeneware"
} | [] | closed | false | null | [] | null | 10 | "2020-10-03T06:44:07Z" | "2021-02-12T14:13:28Z" | "2021-02-12T14:13:28Z" | NONE | null | null | null | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory-mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower it was.
For example, in the `yelp_polarity` dataset (560000 datapoints, or 17500 batches of 32), it was taking me 3:31 just to process the data and get it on the GPU (no model involved), whereas the equivalent in-memory dataset would finish in just 0:33.
Is this expected? Given that one of the goals of this project is also to accelerate dataset processing, this seems a bit slower than I would expect. I understand the advantages of being able to work on datasets that exceed memory, and that's very exciting to me, but I thought I'd open this issue to discuss.
For reference I'm running a AMD Ryzen Threadripper 1900X 8-Core Processor CPU, with 128 GB of RAM and an NVME SSD Samsung 960 EVO. I'm running with an RTX Titan 24GB GPU.
I can see with `iotop` that the dataset gets quickly loaded into the system read buffers, and thus doesn't incur any additional IO reads. Thus in theory, all the data *should* be in RAM, but in my benchmark code below it's still 6.4 times slower.
What am I doing wrong? And is there a way to force the datasets to completely load into memory instead of being memory mapped in cases where you want maximum performance?
At 3:31 for 17500 batches, that's 12ms per batch. Does this 12ms just become insignificant as a proportion of forward and backward passes in practice, and thus it's not worth worrying about this in practice?
In any case, here's my code `benchmark.py`. If you run it with an argument of `memory` it will copy the data into memory before executing the same test.
``` py
import sys
from datasets import load_dataset
from transformers import DataCollatorWithPadding, BertTokenizerFast
from torch.utils.data import DataLoader
from tqdm import tqdm
if __name__ == '__main__':
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
collate_fn = DataCollatorWithPadding(tokenizer, padding=True)
ds = load_dataset('yelp_polarity')
def do_tokenize(x):
return tokenizer(x['text'], truncation=True)
ds = ds.map(do_tokenize, batched=True)
ds.set_format('torch', ['input_ids', 'token_type_ids', 'attention_mask'])
if len(sys.argv) == 2 and sys.argv[1] == 'memory':
# copy to memory - probably a faster way to do this - but demonstrates the point
# approximately 530 batches per second - 17500 batches in 0:33
print('using memory')
_ds = [data for data in tqdm(ds['train'])]
else:
# approximately 83 batches per second - 17500 batches in 3:31
print('using datasets')
_ds = ds['train']
dl = DataLoader(_ds, shuffle=True, collate_fn=collate_fn, batch_size=32, num_workers=4)
for data in tqdm(dl):
for k, v in data.items():
data[k] = v.to('cuda')
```
For reference, my conda environment is [here](https://gist.github.com/05b6101518ff70ed42a858b302a0405d)
Once again, I'm very excited about this library, and how easy it is to load datasets, and to do so without worrying about system memory constraints.
Thanks for all your great work.
| {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/708/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/708/timeline | null | completed | false |
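When the memory-mapped access pattern benchmarked above is the bottleneck and the data fits in RAM, later releases of `datasets` let you materialise the table in memory at load time. Availability of the flag depends on the library version:

```python
from datasets import load_dataset

# Trade RAM for speed: keep the Arrow table in memory instead of memory-mapping it.
ds = load_dataset("yelp_polarity", split="train", keep_in_memory=True)
```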
https://api.github.com/repos/huggingface/datasets/issues/707 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/707/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/707/comments | https://api.github.com/repos/huggingface/datasets/issues/707/events | https://github.com/huggingface/datasets/issues/707 | 713,954,666 | MDU6SXNzdWU3MTM5NTQ2NjY= | 707 | Requirements should specify pyarrow<1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/918541?v=4",
"events_url": "https://api.github.com/users/mathcass/events{/privacy}",
"followers_url": "https://api.github.com/users/mathcass/followers",
"following_url": "https://api.github.com/users/mathcass/following{/other_user}",
"gists_url": "https://api.github.com/users/mathcass/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mathcass",
"id": 918541,
"login": "mathcass",
"node_id": "MDQ6VXNlcjkxODU0MQ==",
"organizations_url": "https://api.github.com/users/mathcass/orgs",
"received_events_url": "https://api.github.com/users/mathcass/received_events",
"repos_url": "https://api.github.com/users/mathcass/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mathcass/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mathcass/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mathcass"
} | [] | closed | false | null | [] | null | 7 | "2020-10-02T23:39:39Z" | "2020-12-04T08:22:39Z" | "2020-10-04T20:50:28Z" | NONE | null | null | null | I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having installed PyArrow 1.0.1, but there's no pinning in the setup file.
https://github.com/huggingface/datasets/blob/e86a2a8f869b91654e782c9133d810bb82783200/setup.py#L68
Downgrading by installing `pip install "pyarrow<1"` resolved the issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/707/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/707/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/706 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/706/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/706/comments | https://api.github.com/repos/huggingface/datasets/issues/706/events | https://github.com/huggingface/datasets/pull/706 | 713,721,959 | MDExOlB1bGxSZXF1ZXN0NDk2OTkwMDA0 | 706 | Fix config creation for data files with NamedSplit | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-10-02T15:46:49Z" | "2020-10-05T08:15:00Z" | "2020-10-05T08:14:59Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/706.diff",
"html_url": "https://github.com/huggingface/datasets/pull/706",
"merged_at": "2020-10-05T08:14:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/706.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/706"
} | During config creation, we need to iterate through the data files of all the splits to compute a hash.
To make sure the hash is unique given a certain combination of files/splits, we sort the split names.
However, `NamedSplit` objects can't be passed to `sorted`, which currently raises an error: we need to sort on the string of their names instead.
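A minimal sketch of the kind of change described (assuming `data_files` is a dict whose keys may be `NamedSplit` objects; not necessarily the PR's exact code):
```
# Instead of sorted(data_files.keys()), which fails for NamedSplit keys,
# sort on the string representation of the split names:
for key in sorted(data_files.keys(), key=str):
    files = data_files[key]
    # ... hash the (split name, files) pair ...
```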
Fix #705 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/706/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/706/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/705/comments | https://api.github.com/repos/huggingface/datasets/issues/705/events | https://github.com/huggingface/datasets/issues/705 | 713,709,100 | MDU6SXNzdWU3MTM3MDkxMDA= | 705 | TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' | {
"avatar_url": "https://avatars.githubusercontent.com/u/12713359?v=4",
"events_url": "https://api.github.com/users/pvcastro/events{/privacy}",
"followers_url": "https://api.github.com/users/pvcastro/followers",
"following_url": "https://api.github.com/users/pvcastro/following{/other_user}",
"gists_url": "https://api.github.com/users/pvcastro/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pvcastro",
"id": 12713359,
"login": "pvcastro",
"node_id": "MDQ6VXNlcjEyNzEzMzU5",
"organizations_url": "https://api.github.com/users/pvcastro/orgs",
"received_events_url": "https://api.github.com/users/pvcastro/received_events",
"repos_url": "https://api.github.com/users/pvcastro/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pvcastro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pvcastro/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pvcastro"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 2 | "2020-10-02T15:27:55Z" | "2020-10-05T08:14:59Z" | "2020-10-05T08:14:59Z" | NONE | null | null | null | ## Environment info
- `transformers` version: 3.3.1 (installed from master)
- `datasets` version: 1.0.2 (installed as a dependency from transformers)
- Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.9
I'm testing my own text classification dataset using [this example](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) from transformers. The dataset is split into train / dev / test, and in csv format, containing just a text and a label columns, using comma as sep. Here's a sample:
```
text,label
"Registra-se a presença do acadêmico <name> . <REL_SEP> Ao me deparar com a descrição de dois autores no polo ativo da ação junto ao PJe , margem esquerda foi informado pela procuradora do reclamante que se trata de uma reclamação trabalhista individual . <REL_SEP> Diante disso , face a ausência injustificada do autor <name> , determina-se o ARQUIVAMENTO do presente processo , com relação a este , nos termos do [[ art . 844 da CLT ]] . <REL_SEP> CUSTAS AUTOR - DISPENSADO <REL_SEP> Custas pelo autor no importe de R $326,82 , calculadas sobre R $16.341,03 , dispensadas na forma da lei , em virtude da concessão dos benefícios da Justiça Gratuita , ora deferida . <REL_SEP> Cientes os presentes . <REL_SEP> Audiência encerrada às 8h42min . <REL_SEP> <name> <REL_SEP> Juíza do Trabalho <REL_SEP> Ata redigida por << <name> >> , Secretário de Audiência .",NO_RELATION
```
However, @Santosh-Gupta reported in #7351 that he had the exact same problem using the ChemProt dataset. His colab notebook is referenced in the following section.
## To reproduce
Steps to reproduce the behavior:
1. Created a new conda environment using `conda create -n transformers python=3.7`
2. Cloned transformers master, `cd`'d into it and installed using `pip install --editable . -r examples/requirements.txt`
3. Installed tensorflow with `pip install tensorflow`
4. Ran `run_tf_text_classification.py` with the following parameters:
```
--train_file <DATASET_PATH>/train.csv \
--dev_file <DATASET_PATH>/dev.csv \
--test_file <DATASET_PATH>/test.csv \
--label_column_id 1 \
--model_name_or_path neuralmind/bert-base-portuguese-cased \
--output_dir <OUTPUT_PATH> \
--num_train_epochs 4 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--do_train \
--do_eval \
--do_predict \
--logging_steps 1000 \
--evaluate_during_training \
--save_steps 1000 \
--overwrite_output_dir \
--overwrite_cache
```
I have also copied [@Santosh-Gupta 's colab notebook](https://colab.research.google.com/drive/11APei6GjphCZbH5wD9yVlfGvpIkh8pwr?usp=sharing) as a reference.
Here is the stack trace:
```
2020-10-02 07:33:41.622011: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
/media/discoD/repositorios/transformers_pedro/src/transformers/training_args.py:333: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
FutureWarning,
2020-10-02 07:33:43.471648: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-10-02 07:33:43.471791: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.472664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1
coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s
2020-10-02 07:33:43.472684: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-02 07:33:43.472765: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-02 07:33:43.472809: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-02 07:33:43.472848: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-02 07:33:43.474209: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-02 07:33:43.474276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-02 07:33:43.561219: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-02 07:33:43.561397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.562345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.563219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-10-02 07:33:43.563595: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-10-02 07:33:43.570091: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3591830000 Hz
2020-10-02 07:33:43.570494: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560842432400 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-02 07:33:43.570511: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-10-02 07:33:43.570702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.571599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1
coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s
2020-10-02 07:33:43.571633: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-02 07:33:43.571645: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-02 07:33:43.571654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-02 07:33:43.571664: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-02 07:33:43.571691: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-02 07:33:43.571704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-02 07:33:43.571718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-02 07:33:43.571770: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.572641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.573475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-10-02 07:33:47.139227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-02 07:33:47.139265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2020-10-02 07:33:47.139272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2020-10-02 07:33:47.140323: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.141248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.142085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.142854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-10-02 07:33:47.146317: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5608b95dc5c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-10-02 07:33:47.146336: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1
10/02/2020 07:33:47 - INFO - __main__ - n_replicas: 1, distributed training: False, 16-bits training: False
10/02/2020 07:33:47 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct02_07-33-43_user-XPS-8700', logging_first_step=False, logging_steps=1000, save_steps=1000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False)
10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 acquired on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock
10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 released on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock
Using custom data configuration default
Traceback (most recent call last):
File "run_tf_text_classification.py", line 283, in <module>
main()
File "run_tf_text_classification.py", line 222, in main
max_seq_length=data_args.max_seq_length,
File "run_tf_text_classification.py", line 43, in get_tfds
ds = datasets.load_dataset("csv", data_files=files)
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 604, in load_dataset
**config_kwargs,
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 158, in __init__
**config_kwargs,
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 269, in _create_builder_config
for key in sorted(data_files.keys()):
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
```
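For reference, a hypothetical minimal reproduction of the failing call (assuming the `data_files` dict passed to `load_dataset` is keyed by `NamedSplit` instances such as `datasets.Split.TRAIN`):
```
from datasets import Split

data_files = {Split.TRAIN: "train.csv", Split.VALIDATION: "dev.csv", Split.TEST: "test.csv"}
sorted(data_files.keys())  # raises: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
```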
## Expected behavior
Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow)
I originally opened this issue in the transformers repository: [https://github.com/huggingface/transformers/issues/7535](https://github.com/huggingface/transformers/issues/7535). @jplu instructed me to open it here since, according to [this evidence](https://github.com/huggingface/transformers/issues/7535#issuecomment-702778885), the problem comes from datasets.
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/705/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/705/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/704 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/704/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/704/comments | https://api.github.com/repos/huggingface/datasets/issues/704/events | https://github.com/huggingface/datasets/pull/704 | 713,572,556 | MDExOlB1bGxSZXF1ZXN0NDk2ODY2NTQ0 | 704 | Fix remote tests for new datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-10-02T12:08:04Z" | "2020-10-02T12:12:02Z" | "2020-10-02T12:12:01Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/704.diff",
"html_url": "https://github.com/huggingface/datasets/pull/704",
"merged_at": "2020-10-02T12:12:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/704.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/704"
} | When adding a new dataset, the remote tests fail because they try to get the new dataset from the master branch (i.e., where the dataset doesn't exist yet).
To fix that, I reverted to using the HF API that fetches the available datasets on S3, which is synced with the master branch.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/704/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/704/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/703/comments | https://api.github.com/repos/huggingface/datasets/issues/703/events | https://github.com/huggingface/datasets/pull/703 | 713,559,718 | MDExOlB1bGxSZXF1ZXN0NDk2ODU1OTQ5 | 703 | Add hotpot QA | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [] | closed | false | null | [] | null | 5 | "2020-10-02T11:44:28Z" | "2020-10-02T12:54:41Z" | "2020-10-02T12:54:41Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/703.diff",
"html_url": "https://github.com/huggingface/datasets/pull/703",
"merged_at": "2020-10-02T12:54:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/703.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/703"
} | Added the [HotpotQA](https://github.com/hotpotqa/hotpot) multi-hop question answering dataset.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/703/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/703/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/702 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/702/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/702/comments | https://api.github.com/repos/huggingface/datasets/issues/702/events | https://github.com/huggingface/datasets/pull/702 | 713,499,628 | MDExOlB1bGxSZXF1ZXN0NDk2ODA3Mjg4 | 702 | Complete rouge kwargs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-10-02T09:59:01Z" | "2020-10-02T10:11:04Z" | "2020-10-02T10:11:03Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/702.diff",
"html_url": "https://github.com/huggingface/datasets/pull/702",
"merged_at": "2020-10-02T10:11:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/702.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/702"
} | In #701 we noticed that some kwargs were missing for rouge | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/702/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/702/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/701 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/701/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/701/comments | https://api.github.com/repos/huggingface/datasets/issues/701/events | https://github.com/huggingface/datasets/pull/701 | 713,485,757 | MDExOlB1bGxSZXF1ZXN0NDk2Nzk2MTQ1 | 701 | Add rouge 2 and rouge Lsum to rouge metric outputs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 1 | "2020-10-02T09:35:46Z" | "2020-10-02T09:55:14Z" | "2020-10-02T09:52:18Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/701.diff",
"html_url": "https://github.com/huggingface/datasets/pull/701",
"merged_at": "2020-10-02T09:52:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/701.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/701"
} | Continuation of #700
Rouge 2 and Rouge Lsum were missing in Rouge's outputs.
Rouge Lsum is also useful to evaluate Rouge L for sentences containing `\n`.
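A small usage sketch of the difference (assuming the `rouge_score` package that this metric wraps; the inputs are illustrative only):
```
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL", "rougeLsum"], use_stemmer=True)
prediction = "The cat sat on the mat.\nIt was a sunny day."
reference = "A cat was sitting on the mat.\nThe day was sunny."
# rougeLsum splits on "\n" and computes a summary-level Rouge L
print(scorer.score(reference, prediction))
```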
Fix #617 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/701/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/701/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/700/comments | https://api.github.com/repos/huggingface/datasets/issues/700/events | https://github.com/huggingface/datasets/pull/700 | 713,450,295 | MDExOlB1bGxSZXF1ZXN0NDk2NzY3MTMz | 700 | Add rouge-2 in rouge_types for metric calculation | {
"avatar_url": "https://avatars.githubusercontent.com/u/18056781?v=4",
"events_url": "https://api.github.com/users/Shashi456/events{/privacy}",
"followers_url": "https://api.github.com/users/Shashi456/followers",
"following_url": "https://api.github.com/users/Shashi456/following{/other_user}",
"gists_url": "https://api.github.com/users/Shashi456/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Shashi456",
"id": 18056781,
"login": "Shashi456",
"node_id": "MDQ6VXNlcjE4MDU2Nzgx",
"organizations_url": "https://api.github.com/users/Shashi456/orgs",
"received_events_url": "https://api.github.com/users/Shashi456/received_events",
"repos_url": "https://api.github.com/users/Shashi456/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Shashi456/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shashi456/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Shashi456"
} | [] | closed | false | null | [] | null | 13 | "2020-10-02T08:36:45Z" | "2020-10-02T11:08:49Z" | "2020-10-02T09:59:05Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/700.diff",
"html_url": "https://github.com/huggingface/datasets/pull/700",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/700.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/700"
} | The description of the ROUGE metric says,
```
_KWARGS_DESCRIPTION = """
Calculates average rouge scores for a list of hypotheses and references
Args:
predictions: list of predictions to score. Each predictions
should be a string with tokens separated by spaces.
references: list of reference for each prediction. Each
reference should be a string with tokens separated by spaces.
Returns:
rouge1: rouge_1 f1,
rouge2: rouge_2 f1,
rougeL: rouge_l f1,
rougeLsum: rouge_l precision
"""
```
but the `rouge_types` argument defaults to `rouge_types = ["rouge1", "rougeL"]`; this PR adds `rouge2` to the list so that the defaults reflect the description card. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/700/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/700/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/699/comments | https://api.github.com/repos/huggingface/datasets/issues/699/events | https://github.com/huggingface/datasets/issues/699 | 713,395,642 | MDU6SXNzdWU3MTMzOTU2NDI= | 699 | XNLI dataset is not loading | {
"avatar_url": "https://avatars.githubusercontent.com/u/14936525?v=4",
"events_url": "https://api.github.com/users/imadarsh1001/events{/privacy}",
"followers_url": "https://api.github.com/users/imadarsh1001/followers",
"following_url": "https://api.github.com/users/imadarsh1001/following{/other_user}",
"gists_url": "https://api.github.com/users/imadarsh1001/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/imadarsh1001",
"id": 14936525,
"login": "imadarsh1001",
"node_id": "MDQ6VXNlcjE0OTM2NTI1",
"organizations_url": "https://api.github.com/users/imadarsh1001/orgs",
"received_events_url": "https://api.github.com/users/imadarsh1001/received_events",
"repos_url": "https://api.github.com/users/imadarsh1001/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/imadarsh1001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imadarsh1001/subscriptions",
"type": "User",
"url": "https://api.github.com/users/imadarsh1001"
} | [] | closed | false | null | [] | null | 3 | "2020-10-02T06:53:16Z" | "2020-10-03T17:45:52Z" | "2020-10-03T17:43:37Z" | NONE | null | null | null | `dataset = datasets.load_dataset(path='xnli')`
which shows the following error:
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls))
39 logger.info("All the checksums matched successfully" + for_verification_name)
40
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']
```
I think the URL has now changed to "https://cims.nyu.edu/~sbowman/xnli/XNLI-MT-1.0.zip" | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/699/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/699/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/697/comments | https://api.github.com/repos/huggingface/datasets/issues/697/events | https://github.com/huggingface/datasets/pull/697 | 712,979,029 | MDExOlB1bGxSZXF1ZXN0NDk2MzczNDU5 | 697 | Update README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/71011306?v=4",
"events_url": "https://api.github.com/users/bishug/events{/privacy}",
"followers_url": "https://api.github.com/users/bishug/followers",
"following_url": "https://api.github.com/users/bishug/following{/other_user}",
"gists_url": "https://api.github.com/users/bishug/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bishug",
"id": 71011306,
"login": "bishug",
"node_id": "MDQ6VXNlcjcxMDExMzA2",
"organizations_url": "https://api.github.com/users/bishug/orgs",
"received_events_url": "https://api.github.com/users/bishug/received_events",
"repos_url": "https://api.github.com/users/bishug/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bishug/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bishug/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bishug"
} | [] | closed | false | null | [] | null | 0 | "2020-10-01T16:02:42Z" | "2020-10-01T16:12:00Z" | "2020-10-01T16:12:00Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/697.diff",
"html_url": "https://github.com/huggingface/datasets/pull/697",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/697.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/697"
Hey, I was just telling my subscribers to check out your repositories.
Thank you | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/697/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/697/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/696/comments | https://api.github.com/repos/huggingface/datasets/issues/696/events | https://github.com/huggingface/datasets/pull/696 | 712,942,977 | MDExOlB1bGxSZXF1ZXN0NDk2MzQzMjEy | 696 | Elasticsearch index docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-10-01T15:18:58Z" | "2020-10-02T07:48:19Z" | "2020-10-02T07:48:18Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/696.diff",
"html_url": "https://github.com/huggingface/datasets/pull/696",
"merged_at": "2020-10-02T07:48:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/696.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/696"
} | I added the docs for ES indexes.
I also added a `load_elasticsearch_index` method to load an index that has already been built.
I checked the tests for the ES index and we have tests that mock ElasticSearch.
I think this is good for now but at some point it would be cool to have an end-to-end test with a real ES running. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/696/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/696/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/695 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/695/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/695/comments | https://api.github.com/repos/huggingface/datasets/issues/695/events | https://github.com/huggingface/datasets/pull/695 | 712,843,949 | MDExOlB1bGxSZXF1ZXN0NDk2MjU5NTM0 | 695 | Update XNLI download link | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-10-01T13:27:22Z" | "2020-10-01T14:01:15Z" | "2020-10-01T14:01:14Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/695.diff",
"html_url": "https://github.com/huggingface/datasets/pull/695",
"merged_at": "2020-10-01T14:01:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/695.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/695"
} | The old link isn't working anymore. I updated it with the new official link.
Fix #690 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/695/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/695/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/694 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/694/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/694/comments | https://api.github.com/repos/huggingface/datasets/issues/694/events | https://github.com/huggingface/datasets/pull/694 | 712,827,751 | MDExOlB1bGxSZXF1ZXN0NDk2MjQ1NzU0 | 694 | Use GitHub instead of aws in remote dataset tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-10-01T13:07:50Z" | "2020-10-02T07:47:28Z" | "2020-10-02T07:47:27Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/694.diff",
"html_url": "https://github.com/huggingface/datasets/pull/694",
"merged_at": "2020-10-02T07:47:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/694.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/694"
Recently we switched from AWS S3 to GitHub to download dataset scripts.
However, in the tests the dummy data were still downloaded from S3.
So I changed that to download them from GitHub instead, in the MockDownloadManager.
Moreover, I noticed that `anli`'s dummy data were quite heavy (18MB compressed, i.e. the entire dataset), so I replaced them with dummy data containing only a few examples. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/694/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/694/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/693/comments | https://api.github.com/repos/huggingface/datasets/issues/693/events | https://github.com/huggingface/datasets/pull/693 | 712,822,200 | MDExOlB1bGxSZXF1ZXN0NDk2MjQxMjUw | 693 | Rachel ker add dataset/mlsum | {
"avatar_url": "https://avatars.githubusercontent.com/u/32742136?v=4",
"events_url": "https://api.github.com/users/pdhg/events{/privacy}",
"followers_url": "https://api.github.com/users/pdhg/followers",
"following_url": "https://api.github.com/users/pdhg/following{/other_user}",
"gists_url": "https://api.github.com/users/pdhg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pdhg",
"id": 32742136,
"login": "pdhg",
"node_id": "MDQ6VXNlcjMyNzQyMTM2",
"organizations_url": "https://api.github.com/users/pdhg/orgs",
"received_events_url": "https://api.github.com/users/pdhg/received_events",
"repos_url": "https://api.github.com/users/pdhg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pdhg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdhg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pdhg"
} | [] | closed | false | null | [] | null | 1 | "2020-10-01T13:01:10Z" | "2023-09-24T09:48:23Z" | "2020-10-01T17:01:13Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/693.diff",
"html_url": "https://github.com/huggingface/datasets/pull/693",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/693.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/693"
} | . | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/693/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/693/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/692 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/692/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/692/comments | https://api.github.com/repos/huggingface/datasets/issues/692/events | https://github.com/huggingface/datasets/pull/692 | 712,818,968 | MDExOlB1bGxSZXF1ZXN0NDk2MjM4NzIw | 692 | Update README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/62796466?v=4",
"events_url": "https://api.github.com/users/mayank1897/events{/privacy}",
"followers_url": "https://api.github.com/users/mayank1897/followers",
"following_url": "https://api.github.com/users/mayank1897/following{/other_user}",
"gists_url": "https://api.github.com/users/mayank1897/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mayank1897",
"id": 62796466,
"login": "mayank1897",
"node_id": "MDQ6VXNlcjYyNzk2NDY2",
"organizations_url": "https://api.github.com/users/mayank1897/orgs",
"received_events_url": "https://api.github.com/users/mayank1897/received_events",
"repos_url": "https://api.github.com/users/mayank1897/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mayank1897/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mayank1897/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mayank1897"
} | [] | closed | false | null | [] | null | 4 | "2020-10-01T12:57:22Z" | "2020-10-02T11:01:59Z" | "2020-10-02T11:01:59Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/692.diff",
"html_url": "https://github.com/huggingface/datasets/pull/692",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/692.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/692"
} | {
"+1": 0,
"-1": 4,
"confused": 2,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/692/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/692/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/691/comments | https://api.github.com/repos/huggingface/datasets/issues/691/events | https://github.com/huggingface/datasets/issues/691 | 712,389,499 | MDU6SXNzdWU3MTIzODk0OTk= | 691 | Add UI filter to filter datasets based on task | {
"avatar_url": "https://avatars.githubusercontent.com/u/7589415?v=4",
"events_url": "https://api.github.com/users/praateekmahajan/events{/privacy}",
"followers_url": "https://api.github.com/users/praateekmahajan/followers",
"following_url": "https://api.github.com/users/praateekmahajan/following{/other_user}",
"gists_url": "https://api.github.com/users/praateekmahajan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/praateekmahajan",
"id": 7589415,
"login": "praateekmahajan",
"node_id": "MDQ6VXNlcjc1ODk0MTU=",
"organizations_url": "https://api.github.com/users/praateekmahajan/orgs",
"received_events_url": "https://api.github.com/users/praateekmahajan/received_events",
"repos_url": "https://api.github.com/users/praateekmahajan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/praateekmahajan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/praateekmahajan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/praateekmahajan"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 1 | "2020-10-01T00:56:18Z" | "2022-02-15T10:46:50Z" | "2022-02-15T10:46:50Z" | NONE | null | null | null | This is great work, so huge shoutout to contributors and huggingface.
The [/nlp/viewer](https://huggingface.co/nlp/viewer/) is great and the [/datasets](https://huggingface.co/datasets) page is great. I was wondering if, in either or both places, we could have a filter that selects whether a dataset is suited to the following tasks (non-exhaustive list):
- Classification
- Multi label
- Multi class
- Q&A
- Summarization
- Translation
I believe this feature might have some value for folks trying to find datasets for a particular task and then testing their model capabilities.
Thank you :) | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/691/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/691/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/690 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/690/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/690/comments | https://api.github.com/repos/huggingface/datasets/issues/690/events | https://github.com/huggingface/datasets/issues/690 | 712,150,321 | MDU6SXNzdWU3MTIxNTAzMjE= | 690 | XNLI dataset: NonMatchingChecksumError | {
"avatar_url": "https://avatars.githubusercontent.com/u/13307358?v=4",
"events_url": "https://api.github.com/users/xiey1/events{/privacy}",
"followers_url": "https://api.github.com/users/xiey1/followers",
"following_url": "https://api.github.com/users/xiey1/following{/other_user}",
"gists_url": "https://api.github.com/users/xiey1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xiey1",
"id": 13307358,
"login": "xiey1",
"node_id": "MDQ6VXNlcjEzMzA3MzU4",
"organizations_url": "https://api.github.com/users/xiey1/orgs",
"received_events_url": "https://api.github.com/users/xiey1/received_events",
"repos_url": "https://api.github.com/users/xiey1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xiey1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiey1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xiey1"
} | [] | closed | false | null | [] | null | 5 | "2020-09-30T17:50:03Z" | "2020-10-01T17:15:08Z" | "2020-10-01T14:01:14Z" | NONE | null | null | null | Hi,
I tried to download the "xnli" dataset in Colab using
`xnli = load_dataset(path='xnli')`
but got a 'NonMatchingChecksumError':
```
NonMatchingChecksumError                  Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = load_dataset(path='xnli')
3 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']
```
The same code worked well several days ago in Colab but has stopped working now. Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/690/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/690/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/689/comments | https://api.github.com/repos/huggingface/datasets/issues/689/events | https://github.com/huggingface/datasets/pull/689 | 712,095,262 | MDExOlB1bGxSZXF1ZXN0NDk1NjMzNjMy | 689 | Switch to pandas reader for text dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 1 | "2020-09-30T16:28:12Z" | "2020-09-30T16:45:32Z" | "2020-09-30T16:45:31Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/689.diff",
"html_url": "https://github.com/huggingface/datasets/pull/689",
"merged_at": "2020-09-30T16:45:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/689.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/689"
} | Following the discussion in #622, it appears that there's no appropriate way to use the pyarrow csv reader to read text files, because of the separator.
In this PR I switched to pandas to read the file.
Moreover, pandas allows reading the file by chunks, which means that you can build the arrow dataset from a text file that is bigger than RAM (we used to have to shard text files, as mentioned in https://github.com/huggingface/datasets/issues/610#issuecomment-691672919)
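As an illustration of the chunked-reading pattern this describes (not the PR's actual implementation), building Arrow tables chunk by chunk can look roughly like this:
```
import itertools

import pandas as pd
import pyarrow as pa

def text_file_to_tables(path, chunksize=100_000):
    # Read at most `chunksize` lines at a time so the whole file never sits in RAM
    with open(path, encoding="utf-8") as f:
        while True:
            lines = list(itertools.islice(f, chunksize))
            if not lines:
                break
            df = pd.DataFrame({"text": [line.rstrip("\n") for line in lines]})
            yield pa.Table.from_pandas(df)
```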
From a test that I did locally on a 1GB text file, the pyarrow reader used to run in 150ms while the new one takes 650ms (multithreading off for pyarrow). This is probably due to chunking, since I get the same speed difference by calling `read()` versus calling `read(chunksize)` + `readline()` to read the text file. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/689/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/689/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/688/comments | https://api.github.com/repos/huggingface/datasets/issues/688/events | https://github.com/huggingface/datasets/pull/688 | 711,804,828 | MDExOlB1bGxSZXF1ZXN0NDk1MzkwMTc1 | 688 | Disable tokenizers parallelism in multiprocessed map | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-30T09:53:34Z" | "2020-10-01T08:45:46Z" | "2020-10-01T08:45:45Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/688.diff",
"html_url": "https://github.com/huggingface/datasets/pull/688",
"merged_at": "2020-10-01T08:45:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/688.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/688"
} | It was reported in #620 that using multiprocessing with a tokenizer shows this message:
```
The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
```
This message is shown when TOKENIZERS_PARALLELISM is unset.
Moreover if it is set to `true`, then the program just hangs.
To hide the message (if TOKENIZERS_PARALLELISM is unset) and avoid hanging (if TOKENIZERS_PARALLELISM is `true`), I set TOKENIZERS_PARALLELISM to `false` when forking the process. After forking, it is set back to its original value.
I also added a warning for the case where TOKENIZERS_PARALLELISM was `true` and gets set to `false`:
```
Setting TOKENIZERS_PARALLELISM=false for forked processes.
```
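A minimal sketch of this save/override/restore logic; the function names and the exact message below are illustrative, not necessarily the code in this PR:
```python
import os

def _disable_tokenizers_parallelism():
    # Save the original value so it can be restored once the forked work is done.
    original = os.environ.get("TOKENIZERS_PARALLELISM")
    if original == "true":
        print("Setting TOKENIZERS_PARALLELISM=false for forked processes.")
    os.environ["TOKENIZERS_PARALLELISM"] = "false"
    return original

def _restore_tokenizers_parallelism(original):
    # Put things back exactly as they were (including "unset").
    if original is None:
        os.environ.pop("TOKENIZERS_PARALLELISM", None)
    else:
        os.environ["TOKENIZERS_PARALLELISM"] = original
```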
cc @n1t0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/688/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/688/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/687 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/687/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/687/comments | https://api.github.com/repos/huggingface/datasets/issues/687/events | https://github.com/huggingface/datasets/issues/687 | 711,664,810 | MDU6SXNzdWU3MTE2NjQ4MTA= | 687 | `ArrowInvalid` occurs while running `Dataset.map()` function | {
"avatar_url": "https://avatars.githubusercontent.com/u/5601012?v=4",
"events_url": "https://api.github.com/users/peinan/events{/privacy}",
"followers_url": "https://api.github.com/users/peinan/followers",
"following_url": "https://api.github.com/users/peinan/following{/other_user}",
"gists_url": "https://api.github.com/users/peinan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/peinan",
"id": 5601012,
"login": "peinan",
"node_id": "MDQ6VXNlcjU2MDEwMTI=",
"organizations_url": "https://api.github.com/users/peinan/orgs",
"received_events_url": "https://api.github.com/users/peinan/received_events",
"repos_url": "https://api.github.com/users/peinan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/peinan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peinan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/peinan"
} | [] | closed | false | null | [] | null | 2 | "2020-09-30T06:16:50Z" | "2020-09-30T09:53:03Z" | "2020-09-30T09:53:03Z" | NONE | null | null | null | It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.
Code:
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='string', id=None),
# 'score': Value(dtype='float64', id=None)
# }, num_rows: 99999)
# suggested in #665
# (imports below are assumed from the linked colab)
from transformers import BertJapaneseTokenizer, MecabTokenizer

class PicklableTokenizer(BertJapaneseTokenizer):
    def __getstate__(self):
        state = dict(self.__dict__)
        state['do_lower_case'] = self.word_tokenizer.do_lower_case
        state['never_split'] = self.word_tokenizer.never_split
        del state['word_tokenizer']
        return state

    def __setstate__(self, state):
        do_lower_case = state.pop('do_lower_case')
        never_split = state.pop('never_split')
        self.__dict__ = state
        self.word_tokenizer = MecabTokenizer(
            do_lower_case=do_lower_case, never_split=never_split
        )
t = PicklableTokenizer.from_pretrained('bert-base-japanese-whole-word-masking')
encoded = train_ds.map(
lambda examples: {'tokens': t.encode(examples['title'], max_length=1000)}, batched=True, batch_size=1000
)
```
Error Message:
```
99% 99/100 [00:22<00:00, 39.07ba/s]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<timed exec> in <module>
/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1242 fn_kwargs=fn_kwargs,
1243 new_fingerprint=new_fingerprint,
-> 1244 update_data=update_data,
1245 )
1246 else:
/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
151 "output_all_columns": self._output_all_columns,
152 }
--> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
154 if new_format["columns"] is not None:
155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names))
/usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
161 # Call actual function
162
--> 163 out = func(self, *args, **kwargs)
164
165 # Update fingerprint of in-place transforms + update in-place history of transforms
/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)
1496 if update_data:
1497 batch = cast_to_python_objects(batch)
-> 1498 writer.write_batch(batch)
1499 if update_data:
1500 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
/usr/local/lib/python3.6/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
271 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)
272 typed_sequence_examples[col] = typed_sequence
--> 273 pa_table = pa.Table.from_pydict(typed_sequence_examples)
274 self.write_table(pa_table)
275
/usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()
/usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays()
/usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.validate()
/usr/local/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Column 4 named tokens expected length 999 but got length 1000
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/687/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/687/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/686/comments | https://api.github.com/repos/huggingface/datasets/issues/686/events | https://github.com/huggingface/datasets/issues/686 | 711,385,739 | MDU6SXNzdWU3MTEzODU3Mzk= | 686 | Dataset browser url is still https://huggingface.co/nlp/viewer/ | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
} | [] | closed | false | null | [] | null | 2 | "2020-09-29T19:21:52Z" | "2021-01-08T18:29:26Z" | "2021-01-08T18:29:26Z" | CONTRIBUTOR | null | null | null | Might be worth updating to https://huggingface.co/datasets/viewer/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/686/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/686/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/685/comments | https://api.github.com/repos/huggingface/datasets/issues/685/events | https://github.com/huggingface/datasets/pull/685 | 711,182,185 | MDExOlB1bGxSZXF1ZXN0NDk0ODg1NjIz | 685 | Add features parameter to CSV | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-29T14:43:36Z" | "2020-09-30T08:39:56Z" | "2020-09-30T08:39:54Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/685.diff",
"html_url": "https://github.com/huggingface/datasets/pull/685",
"merged_at": "2020-09-30T08:39:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/685.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/685"
} | Add support for the `features` parameter when loading a csv dataset:
```python
from datasets import load_dataset, Features
features = Features({...})
csv_dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features)
```
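For a concrete (hypothetical) example of what such a `features` object could contain:
```python
from datasets import ClassLabel, Features, Value

# Hypothetical schema for a CSV file with a text column and a class label column.
features = Features({
    "text": Value("string"),
    "label": ClassLabel(names=["negative", "positive"]),
})
```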
I added tests to make sure that it is also compatible with the caching system
Fix #623 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/685/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/685/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/684 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/684/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/684/comments | https://api.github.com/repos/huggingface/datasets/issues/684/events | https://github.com/huggingface/datasets/pull/684 | 711,080,947 | MDExOlB1bGxSZXF1ZXN0NDk0ODA2NjE1 | 684 | Fix column order issue in cast | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-29T12:49:13Z" | "2020-09-29T15:56:46Z" | "2020-09-29T15:56:45Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/684.diff",
"html_url": "https://github.com/huggingface/datasets/pull/684",
"merged_at": "2020-09-29T15:56:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/684.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/684"
} | Previously, the order of the columns in the features passed to `cast_` mattered.
However, even when the features passed to `cast_` had the same order as the dataset features, the cast could fail because the schema that was built was always in alphabetical order.
This issue was reported by @lewtun in #623
To fix that, I changed the schema to follow the order of the arrow table columns.
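For illustration, a hypothetical sketch of casting with `cast_` (which casts the dataset in place) when the features are given in a different order than the dataset columns; the column names and types are made up:
```python
from datasets import Dataset, Features, Value

d = Dataset.from_dict({"a": [0, 1], "b": ["x", "y"]})
# Features given as (b, a) even though the dataset columns are (a, b).
d.cast_(Features({"b": Value("string"), "a": Value("int64")}))
```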
I also added the possibility to give features that are not ordered the same way as the dataset features, as in the sketch above. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/684/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/684/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/683 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/683/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/683/comments | https://api.github.com/repos/huggingface/datasets/issues/683/events | https://github.com/huggingface/datasets/pull/683 | 710,942,704 | MDExOlB1bGxSZXF1ZXN0NDk0NzAwNzY1 | 683 | Fix wrong delimiter in text dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-29T09:43:24Z" | "2021-05-05T18:24:31Z" | "2020-09-29T09:44:06Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/683.diff",
"html_url": "https://github.com/huggingface/datasets/pull/683",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/683.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/683"
} | The delimiter is set to the bell character, as it is usually not used anywhere in text files.
However, in the text dataset the delimiter was set to `\b`, which is backspace in Python, while the bell character is `\a`.
I replaced `\b` with `\a`.
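For reference, the two escape sequences are different characters in Python:
```python
# "\a" is the bell character (BEL), "\b" is backspace (BS).
assert "\a" == "\x07"
assert "\b" == "\x08"
```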
Hopefully it fixes issues mentioned by some users in #622 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/683/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/683/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/682 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/682/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/682/comments | https://api.github.com/repos/huggingface/datasets/issues/682/events | https://github.com/huggingface/datasets/pull/682 | 710,325,399 | MDExOlB1bGxSZXF1ZXN0NDk0MTkzMzEw | 682 | Update navbar chapter titles color | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-28T14:35:17Z" | "2020-09-28T17:30:13Z" | "2020-09-28T17:30:12Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/682.diff",
"html_url": "https://github.com/huggingface/datasets/pull/682",
"merged_at": "2020-09-28T17:30:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/682.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/682"
} | Consistency with the color change that was done in transformers at https://github.com/huggingface/transformers/pull/7423
It makes the background-color of the chapter titles in the docs navbar darker, to differentiate them from the inner sections.
see changes [here](https://691-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/682/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/682/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/681 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/681/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/681/comments | https://api.github.com/repos/huggingface/datasets/issues/681/events | https://github.com/huggingface/datasets/pull/681 | 710,075,721 | MDExOlB1bGxSZXF1ZXN0NDkzOTkwMjEz | 681 | Adding missing @property (+2 small flake8 fixes). | {
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Narsil",
"id": 204321,
"login": "Narsil",
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"repos_url": "https://api.github.com/users/Narsil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Narsil"
} | [] | closed | false | null | [] | null | 0 | "2020-09-28T08:53:53Z" | "2020-09-28T10:26:13Z" | "2020-09-28T10:26:09Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/681.diff",
"html_url": "https://github.com/huggingface/datasets/pull/681",
"merged_at": "2020-09-28T10:26:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/681.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/681"
} | Fixes #678 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/681/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/681/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/680 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/680/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/680/comments | https://api.github.com/repos/huggingface/datasets/issues/680/events | https://github.com/huggingface/datasets/pull/680 | 710,066,138 | MDExOlB1bGxSZXF1ZXN0NDkzOTgyMjY4 | 680 | Fix bug related to boolean in GAP dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/14996977?v=4",
"events_url": "https://api.github.com/users/otakumesi/events{/privacy}",
"followers_url": "https://api.github.com/users/otakumesi/followers",
"following_url": "https://api.github.com/users/otakumesi/following{/other_user}",
"gists_url": "https://api.github.com/users/otakumesi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/otakumesi",
"id": 14996977,
"login": "otakumesi",
"node_id": "MDQ6VXNlcjE0OTk2OTc3",
"organizations_url": "https://api.github.com/users/otakumesi/orgs",
"received_events_url": "https://api.github.com/users/otakumesi/received_events",
"repos_url": "https://api.github.com/users/otakumesi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/otakumesi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/otakumesi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/otakumesi"
} | [] | closed | false | null | [] | null | 2 | "2020-09-28T08:39:39Z" | "2020-09-29T15:54:47Z" | "2020-09-29T15:54:47Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/680.diff",
"html_url": "https://github.com/huggingface/datasets/pull/680",
"merged_at": "2020-09-29T15:54:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/680.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/680"
} | ### Why I did this
The values in `row["A-coref"]` and `row["B-coref"]` are `'TRUE'` or `'FALSE'`.
These values are of type `string`, and `bool('FALSE')` evaluates to `True` in Python.
So both fields were being transformed into `True`.
I fixed this problem.
### What I did
I modified `bool(row["A-coref"])` and `bool(row["B-coref"])` to `row["A-coref"] == "TRUE"` and `row["B-coref"] == "TRUE"`.
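A quick illustration of why the original check was wrong:
```python
# Any non-empty string is truthy in Python, so the old check always returned True.
assert bool("FALSE") is True
assert ("FALSE" == "TRUE") is False
```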
Thank you! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/680/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/680/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/679/comments | https://api.github.com/repos/huggingface/datasets/issues/679/events | https://github.com/huggingface/datasets/pull/679 | 710,065,838 | MDExOlB1bGxSZXF1ZXN0NDkzOTgyMDMx | 679 | Fix negative ids when slicing with an array | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-28T08:39:08Z" | "2020-09-28T14:42:20Z" | "2020-09-28T14:42:19Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/679.diff",
"html_url": "https://github.com/huggingface/datasets/pull/679",
"merged_at": "2020-09-28T14:42:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/679.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/679"
} | ```python
from datasets import Dataset
d = Dataset.from_dict({"a": range(10)})
print(d[[0, -1]])
# OverflowError
```
raises an error because of the negative id.
This PR fixes that.
Fix #668 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/679/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/679/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/678 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/678/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/678/comments | https://api.github.com/repos/huggingface/datasets/issues/678/events | https://github.com/huggingface/datasets/issues/678 | 710,060,497 | MDU6SXNzdWU3MTAwNjA0OTc= | 678 | The download instructions for c4 datasets are not contained in the error message | {
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Narsil",
"id": 204321,
"login": "Narsil",
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"repos_url": "https://api.github.com/users/Narsil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Narsil"
} | [] | closed | false | null | [] | null | 2 | "2020-09-28T08:30:54Z" | "2020-09-28T10:26:09Z" | "2020-09-28T10:26:09Z" | CONTRIBUTOR | null | null | null | The manual download instructions are not clear
```
The dataset c4 with config en requires manual data.
Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff8c5969760>>.
Manual data can be loaded with `datasets.load_dataset(c4, data_dir='<path/to/manual/data>')
```
Either `@property` could be added to `C4.manual_download_instructions` (making it a real property), or the `manual_download_instructions` method needs to be called, I think.
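A hypothetical sketch of the property-based fix (the base class and the instruction text below are made up for illustration):
```python
import datasets

class C4(datasets.GeneratorBasedBuilder):
    @property
    def manual_download_instructions(self):
        # Hypothetical wording; the real instructions would explain where to get the data.
        return "Please download the C4 data manually and pass data_dir='<path/to/manual/data>'."
```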
Let me know if you want a PR for this, but I'm not sure which possible fix is the correct one. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/678/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/678/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/677 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/677/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/677/comments | https://api.github.com/repos/huggingface/datasets/issues/677/events | https://github.com/huggingface/datasets/pull/677 | 710,055,239 | MDExOlB1bGxSZXF1ZXN0NDkzOTczNDE3 | 677 | Move cache dir root creation in builder's init | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-28T08:22:46Z" | "2020-09-28T14:42:43Z" | "2020-09-28T14:42:42Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/677.diff",
"html_url": "https://github.com/huggingface/datasets/pull/677",
"merged_at": "2020-09-28T14:42:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/677.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/677"
} | We use lock files in the builder initialization, but sometimes the cache directory where they are supposed to live had not been created yet. To fix that, I moved the creation of the builder's cache dir root into the builder's init.
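A minimal sketch of the idea; the cache path and lock name are hypothetical, and `filelock.FileLock` stands in for the lock used in the library:
```python
import os
from filelock import FileLock

# Make sure the cache root exists before trying to create a lock file inside it.
cache_root = os.path.expanduser("~/.cache/huggingface/datasets")  # hypothetical location
os.makedirs(cache_root, exist_ok=True)

with FileLock(os.path.join(cache_root, "my_dataset.lock")):  # hypothetical lock name
    pass  # build / download the dataset here
```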
Fix #671 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/677/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/677/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/676 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/676/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/676/comments | https://api.github.com/repos/huggingface/datasets/issues/676/events | https://github.com/huggingface/datasets/issues/676 | 710,014,319 | MDU6SXNzdWU3MTAwMTQzMTk= | 676 | train_test_split returns empty dataset item | {
"avatar_url": "https://avatars.githubusercontent.com/u/26648528?v=4",
"events_url": "https://api.github.com/users/mojave-pku/events{/privacy}",
"followers_url": "https://api.github.com/users/mojave-pku/followers",
"following_url": "https://api.github.com/users/mojave-pku/following{/other_user}",
"gists_url": "https://api.github.com/users/mojave-pku/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mojave-pku",
"id": 26648528,
"login": "mojave-pku",
"node_id": "MDQ6VXNlcjI2NjQ4NTI4",
"organizations_url": "https://api.github.com/users/mojave-pku/orgs",
"received_events_url": "https://api.github.com/users/mojave-pku/received_events",
"repos_url": "https://api.github.com/users/mojave-pku/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mojave-pku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mojave-pku/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mojave-pku"
} | [] | closed | false | null | [] | null | 4 | "2020-09-28T07:19:33Z" | "2020-10-07T13:46:33Z" | "2020-10-07T13:38:06Z" | NONE | null | null | null | I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty.
The code:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split(test_size=0.1)
print(yelp_data)
print(yelp_data['test'])
print(yelp_data['test'][0])
```
The outputs:
```
{'stars': 2.0, 'text': 'xxxx'}
Loading cached split indices for dataset at /home/ssd4/huanglianzhe/test_yelp/cache-f9b22d8b9d5a7346.arrow and /home/ssd4/huanglianzhe/test_yelp/cache-4aa26fa4005059d1.arrow
DatasetDict({'train': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 7219009), 'test': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113)})
Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113)
{} # yelp_data['test'][0] is empty
``` | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/676/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/676/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/675/comments | https://api.github.com/repos/huggingface/datasets/issues/675/events | https://github.com/huggingface/datasets/issues/675 | 709,818,725 | MDU6SXNzdWU3MDk4MTg3MjU= | 675 | Add custom dataset to NLP? | {
"avatar_url": "https://avatars.githubusercontent.com/u/6556710?v=4",
"events_url": "https://api.github.com/users/timpal0l/events{/privacy}",
"followers_url": "https://api.github.com/users/timpal0l/followers",
"following_url": "https://api.github.com/users/timpal0l/following{/other_user}",
"gists_url": "https://api.github.com/users/timpal0l/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timpal0l",
"id": 6556710,
"login": "timpal0l",
"node_id": "MDQ6VXNlcjY1NTY3MTA=",
"organizations_url": "https://api.github.com/users/timpal0l/orgs",
"received_events_url": "https://api.github.com/users/timpal0l/received_events",
"repos_url": "https://api.github.com/users/timpal0l/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timpal0l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timpal0l/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timpal0l"
} | [] | closed | false | null | [] | null | 2 | "2020-09-27T21:22:50Z" | "2020-10-20T09:08:49Z" | "2020-10-20T09:08:49Z" | CONTRIBUTOR | null | null | null | Is it possible to add a custom dataset such as a .csv to the NLP library?
Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/675/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/675/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/674/comments | https://api.github.com/repos/huggingface/datasets/issues/674/events | https://github.com/huggingface/datasets/issues/674 | 709,661,006 | MDU6SXNzdWU3MDk2NjEwMDY= | 674 | load_dataset() won't download in Windows | {
"avatar_url": "https://avatars.githubusercontent.com/u/34422661?v=4",
"events_url": "https://api.github.com/users/ThisDavehead/events{/privacy}",
"followers_url": "https://api.github.com/users/ThisDavehead/followers",
"following_url": "https://api.github.com/users/ThisDavehead/following{/other_user}",
"gists_url": "https://api.github.com/users/ThisDavehead/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ThisDavehead",
"id": 34422661,
"login": "ThisDavehead",
"node_id": "MDQ6VXNlcjM0NDIyNjYx",
"organizations_url": "https://api.github.com/users/ThisDavehead/orgs",
"received_events_url": "https://api.github.com/users/ThisDavehead/received_events",
"repos_url": "https://api.github.com/users/ThisDavehead/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ThisDavehead/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ThisDavehead/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ThisDavehead"
} | [] | closed | false | null | [] | null | 3 | "2020-09-27T03:56:25Z" | "2020-10-05T08:28:18Z" | "2020-10-05T08:28:18Z" | NONE | null | null | null | I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've waited upwards of 18 hours to download the 'multi-news' dataset (which isn't very big), and still nothing. I've tried running it through different IDE's and the command line, but it had the same behavior. I've also tried it with all virus and malware protection turned off. I've made sure python and all IDE's are exceptions to the firewall and all the requisite permissions are enabled.
Additionally, I checked to see if other packages could download content such as an nltk corpus, and they could. I've also run the same script using Ubuntu and it downloaded fine (and quickly). When I copied the downloaded datasets from my Ubuntu drive to my Windows .cache folder it worked fine by reusing the already-downloaded dataset, but it's cumbersome to do that for every dataset I want to try in my Windows environment.
Could this be a bug, or is there something I'm doing wrong or not thinking of?
Thanks. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/674/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/674/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/673 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/673/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/673/comments | https://api.github.com/repos/huggingface/datasets/issues/673/events | https://github.com/huggingface/datasets/issues/673 | 709,603,989 | MDU6SXNzdWU3MDk2MDM5ODk= | 673 | blog_authorship_corpus crashed | {
"avatar_url": "https://avatars.githubusercontent.com/u/7553188?v=4",
"events_url": "https://api.github.com/users/Moshiii/events{/privacy}",
"followers_url": "https://api.github.com/users/Moshiii/followers",
"following_url": "https://api.github.com/users/Moshiii/following{/other_user}",
"gists_url": "https://api.github.com/users/Moshiii/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Moshiii",
"id": 7553188,
"login": "Moshiii",
"node_id": "MDQ6VXNlcjc1NTMxODg=",
"organizations_url": "https://api.github.com/users/Moshiii/orgs",
"received_events_url": "https://api.github.com/users/Moshiii/received_events",
"repos_url": "https://api.github.com/users/Moshiii/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Moshiii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moshiii/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Moshiii"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | 1 | "2020-09-26T20:15:28Z" | "2022-02-15T10:47:58Z" | "2022-02-15T10:47:58Z" | NONE | null | null | null | This is just to report that when I pick blog_authorship_corpus in
https://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus
I get this:

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/673/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/673/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/672 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/672/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/672/comments | https://api.github.com/repos/huggingface/datasets/issues/672/events | https://github.com/huggingface/datasets/issues/672 | 709,575,527 | MDU6SXNzdWU3MDk1NzU1Mjc= | 672 | Questions about XSUM | {
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/danyaljj",
"id": 2441454,
"login": "danyaljj",
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/danyaljj"
} | [] | closed | false | null | [] | null | 14 | "2020-09-26T17:16:24Z" | "2022-10-04T17:30:17Z" | "2022-10-04T17:30:17Z" | CONTRIBUTOR | null | null | null | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions about it.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 204017)
>>> data['test']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 11333)
```
The first issue is, the instance counts don’t match what I see on [the dataset's website](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset#what-builds-the-xsum-dataset) (11,333 vs 11,334 for test set; 204,017 vs 204,045 for training set)
```
… training (90%, 204,045), validation (5%, 11,332), and test (5%, 11,334) set.
```
Any thoughts why? Perhaps @mariamabarham could help here, since she recently had a PR on this dataset https://github.com/huggingface/datasets/pull/289 (reviewed by @patrickvonplaten)
Another issue is that the instances don't seem to have IDs. The original dataset provides IDs for the instances: https://github.com/EdinburghNLP/XSum/blob/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json but to be able to use them, the dataset sizes need to match.
CC @jbragg
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/672/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/672/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/671 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/671/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/671/comments | https://api.github.com/repos/huggingface/datasets/issues/671/events | https://github.com/huggingface/datasets/issues/671 | 709,093,151 | MDU6SXNzdWU3MDkwOTMxNTE= | 671 | [BUG] No such file or directory | {
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jbragg",
"id": 2238344,
"login": "jbragg",
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"repos_url": "https://api.github.com/users/jbragg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jbragg"
} | [] | closed | false | null | [] | null | 0 | "2020-09-25T16:38:54Z" | "2020-09-28T14:42:42Z" | "2020-09-28T14:42:42Z" | CONTRIBUTOR | null | null | null | This happens when both
1. Huggingface datasets cache dir does not exist
2. Try to load a local dataset script
builder.py throws an error when trying to create a filelock in a directory (cache/datasets) that does not exist
https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L177
Tested on v1.0.2
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/671/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/671/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/670 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/670/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/670/comments | https://api.github.com/repos/huggingface/datasets/issues/670/events | https://github.com/huggingface/datasets/pull/670 | 709,061,231 | MDExOlB1bGxSZXF1ZXN0NDkzMTc4OTQw | 670 | Fix SQuAD metric kwargs description | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-25T16:08:57Z" | "2020-09-29T15:57:39Z" | "2020-09-29T15:57:38Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/670.diff",
"html_url": "https://github.com/huggingface/datasets/pull/670",
"merged_at": "2020-09-29T15:57:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/670.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/670"
} | The `answer_start` field was missing in the kwargs docstring.
This should fix #657
FYI, another fix was proposed by @tshrjn in #658, which suggests removing this field.
However, IMO `answer_start` is useful to match the squad dataset format for consistency, even though it is not used in the metric computation. I think it's better to keep it this way, so that you can just pass `references=squad["answers"]` to `.compute()`.
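For illustration, a hypothetical call in the squad format (the id and values are made up):
```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "id-0", "prediction_text": "Denver Broncos"}]
references = [{"id": "id-0",
               "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
# "answer_start" is part of the squad format but not used in the score computation.
results = squad_metric.compute(predictions=predictions, references=references)
```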
Let me know what sounds best to you.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/670/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/670/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/669 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/669/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/669/comments | https://api.github.com/repos/huggingface/datasets/issues/669/events | https://github.com/huggingface/datasets/issues/669 | 708,857,595 | MDU6SXNzdWU3MDg4NTc1OTU= | 669 | How to skip a example when running dataset.map | {
"avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4",
"events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}",
"followers_url": "https://api.github.com/users/xixiaoyao/followers",
"following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}",
"gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xixiaoyao",
"id": 24541791,
"login": "xixiaoyao",
"node_id": "MDQ6VXNlcjI0NTQxNzkx",
"organizations_url": "https://api.github.com/users/xixiaoyao/orgs",
"received_events_url": "https://api.github.com/users/xixiaoyao/received_events",
"repos_url": "https://api.github.com/users/xixiaoyao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xixiaoyao"
} | [] | closed | false | null | [] | null | 3 | "2020-09-25T11:17:53Z" | "2022-06-17T21:45:03Z" | "2020-10-05T16:28:13Z" | NONE | null | null | null | In my processing function, I detect some invalid examples that I do not want added to the train dataset. However, I could not find a way to skip these invalid examples when doing dataset.map. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/669/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/669/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/668 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/668/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/668/comments | https://api.github.com/repos/huggingface/datasets/issues/668/events | https://github.com/huggingface/datasets/issues/668 | 708,310,956 | MDU6SXNzdWU3MDgzMTA5NTY= | 668 | OverflowError when slicing with an array containing negative ids | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-24T16:27:14Z" | "2020-09-28T14:42:19Z" | "2020-09-28T14:42:19Z" | MEMBER | null | null | null | ```python
from datasets import Dataset
d = Dataset.from_dict({"a": range(10)})
print(d[0])
# {'a': 0}
print(d[-1])
# {'a': 9}
print(d[[0, -1]])
# OverflowError
```
results in
```
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-5-863dc3555598> in <module>
----> 1 d[[0, -1]]
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in __getitem__(self, key)
1070 format_columns=self._format_columns,
1071 output_all_columns=self._output_all_columns,
-> 1072 format_kwargs=self._format_kwargs,
1073 )
1074
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)
1025 indices = key
1026
-> 1027 indices_array = pa.array([int(i) for i in indices], type=pa.uint64())
1028
1029 # Check if we need to convert indices
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
OverflowError: can't convert negative value to unsigned int
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/668/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/668/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/667 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/667/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/667/comments | https://api.github.com/repos/huggingface/datasets/issues/667/events | https://github.com/huggingface/datasets/issues/667 | 708,258,392 | MDU6SXNzdWU3MDgyNTgzOTI= | 667 | Loss not decrease with Datasets and Transformers | {
"avatar_url": "https://avatars.githubusercontent.com/u/23032865?v=4",
"events_url": "https://api.github.com/users/wangcongcong123/events{/privacy}",
"followers_url": "https://api.github.com/users/wangcongcong123/followers",
"following_url": "https://api.github.com/users/wangcongcong123/following{/other_user}",
"gists_url": "https://api.github.com/users/wangcongcong123/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wangcongcong123",
"id": 23032865,
"login": "wangcongcong123",
"node_id": "MDQ6VXNlcjIzMDMyODY1",
"organizations_url": "https://api.github.com/users/wangcongcong123/orgs",
"received_events_url": "https://api.github.com/users/wangcongcong123/received_events",
"repos_url": "https://api.github.com/users/wangcongcong123/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wangcongcong123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangcongcong123/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wangcongcong123"
} | [] | closed | false | null | [] | null | 2 | "2020-09-24T15:14:43Z" | "2021-01-01T20:01:25Z" | "2021-01-01T20:01:25Z" | NONE | null | null | null | HI,
The following script is used to fine-tune a BertForSequenceClassification model on SST2.
The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb), which presents an example of fine-tuning BertForQuestionAnswering on the SQuAD dataset. In that colab, the loss decreases as expected. When I adapt it to SST2, the loss fails to decrease as it should. I attach the adapted script below and would appreciate anyone pointing out what I am missing.
```python
import torch
from datasets import load_dataset
from transformers import BertForSequenceClassification
from transformers import BertTokenizerFast
# Load our training dataset and tokenizer
dataset = load_dataset("glue", 'sst2')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
del dataset["test"] # let's remove it in this demo
# Tokenize our training dataset
def convert_to_features(example_batch):
encodings = tokenizer(example_batch["sentence"])
encodings.update({"labels": example_batch["label"]})
return encodings
encoded_dataset = dataset.map(convert_to_features, batched=True)
# Format our dataset to outputs torch.Tensor to train a pytorch model
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'labels']
encoded_dataset.set_format(type='torch', columns=columns)
# Instantiate a PyTorch Dataloader around our dataset
# Let's do dynamic batching (pad on the fly with our own collate_fn)
def collate_fn(examples):
return tokenizer.pad(examples, return_tensors='pt')
dataloader = torch.utils.data.DataLoader(encoded_dataset['train'], collate_fn=collate_fn, batch_size=8)
# Now let's train our model
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Let's load a pretrained Bert model and a simple optimizer
model = BertForSequenceClassification.from_pretrained('bert-base-cased', return_dict=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
model.train().to(device)
for i, batch in enumerate(dataloader):
batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
model.zero_grad()
print(f'Step {i} - loss: {loss:.3}')
```
In case it's needed:
- datasets == 1.0.2
- transformers == 3.2.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/667/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/667/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/666/comments | https://api.github.com/repos/huggingface/datasets/issues/666/events | https://github.com/huggingface/datasets/issues/666 | 707,608,578 | MDU6SXNzdWU3MDc2MDg1Nzg= | 666 | Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT? | {
"avatar_url": "https://avatars.githubusercontent.com/u/31090427?v=4",
"events_url": "https://api.github.com/users/wahab4114/events{/privacy}",
"followers_url": "https://api.github.com/users/wahab4114/followers",
"following_url": "https://api.github.com/users/wahab4114/following{/other_user}",
"gists_url": "https://api.github.com/users/wahab4114/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wahab4114",
"id": 31090427,
"login": "wahab4114",
"node_id": "MDQ6VXNlcjMxMDkwNDI3",
"organizations_url": "https://api.github.com/users/wahab4114/orgs",
"received_events_url": "https://api.github.com/users/wahab4114/received_events",
"repos_url": "https://api.github.com/users/wahab4114/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wahab4114/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wahab4114/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wahab4114"
} | [] | closed | false | null | [] | null | 1 | "2020-09-23T19:02:25Z" | "2020-10-27T15:19:25Z" | "2020-10-27T15:19:25Z" | NONE | null | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/666/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/666/timeline | null | completed | false |
|
https://api.github.com/repos/huggingface/datasets/issues/665 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/665/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/665/comments | https://api.github.com/repos/huggingface/datasets/issues/665/events | https://github.com/huggingface/datasets/issues/665 | 707,037,738 | MDU6SXNzdWU3MDcwMzc3Mzg= | 665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | {
"avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4",
"events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}",
"followers_url": "https://api.github.com/users/xixiaoyao/followers",
"following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}",
"gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xixiaoyao",
"id": 24541791,
"login": "xixiaoyao",
"node_id": "MDQ6VXNlcjI0NTQxNzkx",
"organizations_url": "https://api.github.com/users/xixiaoyao/orgs",
"received_events_url": "https://api.github.com/users/xixiaoyao/received_events",
"repos_url": "https://api.github.com/users/xixiaoyao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xixiaoyao"
} | [] | closed | false | null | [] | null | 8 | "2020-09-23T04:28:14Z" | "2020-10-08T09:32:16Z" | "2020-10-08T09:32:16Z" | NONE | null | null | null | I load the squad dataset and then want to process the data with the following function, using the Hugging Face Transformers `LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
encodings = tokenizer.encode_plus(input_pairs, pad_to_max_length=True, max_length=512)
context_encodings = tokenizer.encode_plus(example['context'])
# Compute start and end tokens for labels using Transformers's fast tokenizers alignement methodes.
# this will give us the position of answer span in the context text
start_idx, end_idx = get_correct_alignement(example['context'], example['answers'])
start_positions_context = context_encodings.char_to_token(start_idx)
end_positions_context = context_encodings.char_to_token(end_idx-1)
# here we will compute the start and end position of the answer in the whole example
# as the example is encoded like this <s> question</s></s> context</s>
# and we know the postion of the answer in the context
# we can just find out the index of the sep token and then add that to position + 1 (+1 because there are two sep tokens)
# this will give us the position of the answer span in whole example
sep_idx = encodings['input_ids'].index(tokenizer.sep_token_id)
start_positions = start_positions_context + sep_idx + 1
end_positions = end_positions_context + sep_idx + 1
if end_positions > 512:
start_positions, end_positions = 0, 0
encodings.update({'start_positions': start_positions,
'end_positions': end_positions,
'attention_mask': encodings['attention_mask']})
return encodings
```
Then I run `dataset.map(convert_to_features)`, and it raises:
```
In [59]: a.map(convert_to_features)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-59-c453b508761d> in <module>
----> 1 a.map(convert_to_features)
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1242 fn_kwargs=fn_kwargs,
1243 new_fingerprint=new_fingerprint,
-> 1244 update_data=update_data,
1245 )
1246 else:
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
151 "output_all_columns": self._output_all_columns,
152 }
--> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
154 if new_format["columns"] is not None:
155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names))
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name
157 kwargs[fingerprint_name] = update_fingerprint(
--> 158 self._fingerprint, transform, kwargs_for_fingerprint
159 )
160
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)
103 for key in sorted(transform_args):
104 hasher.update(key)
--> 105 hasher.update(transform_args[key])
106 return hasher.hexdigest()
107
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in update(self, value)
55 def update(self, value):
56 self.m.update(f"=={type(value)}==".encode("utf8"))
---> 57 self.m.update(self.hash(value).encode("utf-8"))
58
59 def hexdigest(self):
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in hash(cls, value)
51 return cls.dispatch[type(value)](cls, value)
52 else:
---> 53 return cls.hash_default(value)
54
55 def update(self, value):
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in hash_default(cls, value)
44 @classmethod
45 def hash_default(cls, value):
---> 46 return cls.hash_bytes(dumps(value))
47
48 @classmethod
/opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in dumps(obj)
365 file = StringIO()
366 with _no_cache_fields(obj):
--> 367 dump(obj, file)
368 return file.getvalue()
369
/opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in dump(obj, file)
337 def dump(obj, file):
338 """pickle an object to a file"""
--> 339 Pickler(file, recurse=True).dump(obj)
340 return
341
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in dump(self, obj)
444 raise PicklingError(msg)
445 else:
--> 446 StockPickler.dump(self, obj)
447 stack.clear() # clear record of 'recursion-sensitive' pickled objects
448 return
/opt/conda/lib/python3.7/pickle.py in dump(self, obj)
435 if self.proto >= 4:
436 self.framer.start_framing()
--> 437 self.save(obj)
438 self.write(STOP)
439 self.framer.end_framing()
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_function(pickler, obj)
1436 globs, obj.__name__,
1437 obj.__defaults__, obj.__closure__,
-> 1438 obj.__dict__, fkwdefaults), obj=obj)
1439 else:
1440 _super = ('super' in getattr(obj.func_code,'co_names',())) and (_byref is not None) and getattr(pickler, '_recurse', False)
/opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
636 else:
637 save(func)
--> 638 save(args)
639 write(REDUCE)
640
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/pickle.py in save_tuple(self, obj)
787 write(MARK)
788 for element in obj:
--> 789 save(element)
790
791 if id(obj) in memo:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/opt/conda/lib/python3.7/pickle.py in save_dict(self, obj)
857
858 self.memoize(obj)
--> 859 self._batch_setitems(obj.items())
860
861 dispatch[dict] = save_dict
/opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items)
883 for k, v in tmp:
884 save(k)
--> 885 save(v)
886 write(SETITEMS)
887 elif n:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
547
548 # Save the reduce() output and finally memoize the object
--> 549 self.save_reduce(obj=obj, *rv)
550
551 def persistent_id(self, obj):
/opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
660
661 if state is not None:
--> 662 save(state)
663 write(BUILD)
664
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/opt/conda/lib/python3.7/pickle.py in save_dict(self, obj)
857
858 self.memoize(obj)
--> 859 self._batch_setitems(obj.items())
860
861 dispatch[dict] = save_dict
/opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items)
883 for k, v in tmp:
884 save(k)
--> 885 save(v)
886 write(SETITEMS)
887 elif n:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
547
548 # Save the reduce() output and finally memoize the object
--> 549 self.save_reduce(obj=obj, *rv)
550
551 def persistent_id(self, obj):
/opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
660
661 if state is not None:
--> 662 save(state)
663 write(BUILD)
664
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/opt/conda/lib/python3.7/pickle.py in save_dict(self, obj)
857
858 self.memoize(obj)
--> 859 self._batch_setitems(obj.items())
860
861 dispatch[dict] = save_dict
/opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items)
883 for k, v in tmp:
884 save(k)
--> 885 save(v)
886 write(SETITEMS)
887 elif n:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
522 reduce = getattr(obj, "__reduce_ex__", None)
523 if reduce is not None:
--> 524 rv = reduce(self.proto)
525 else:
526 reduce = getattr(obj, "__reduce__", None)
TypeError: can't pickle Tokenizer objects
```
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/665/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/665/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/664 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/664/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/664/comments | https://api.github.com/repos/huggingface/datasets/issues/664/events | https://github.com/huggingface/datasets/issues/664 | 707,017,791 | MDU6SXNzdWU3MDcwMTc3OTE= | 664 | load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable | {
"avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4",
"events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}",
"followers_url": "https://api.github.com/users/xixiaoyao/followers",
"following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}",
"gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xixiaoyao",
"id": 24541791,
"login": "xixiaoyao",
"node_id": "MDQ6VXNlcjI0NTQxNzkx",
"organizations_url": "https://api.github.com/users/xixiaoyao/orgs",
"received_events_url": "https://api.github.com/users/xixiaoyao/received_events",
"repos_url": "https://api.github.com/users/xixiaoyao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xixiaoyao"
} | [] | closed | false | null | [] | null | 4 | "2020-09-23T03:53:36Z" | "2023-04-17T09:31:20Z" | "2020-10-20T09:06:13Z" | NONE | null | null | null |
version: 1.0.2
```
train_dataset = datasets.load_dataset('squad')
```
The above code works. However, when I download `squad.py` from your server and save it locally as `my_squad.py`, running the following raises an error:
```
train_dataset = datasets.load_dataset('./my_squad.py')
```
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-25a84b4d1581> in <module>
----> 1 train_dataset = nlp.load_dataset('./my_squad.py')
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
602 hash=hash,
603 features=features,
--> 604 **config_kwargs,
605 )
606
TypeError: 'NoneType' object is not callable
```
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/664/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/664/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/663 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/663/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/663/comments | https://api.github.com/repos/huggingface/datasets/issues/663/events | https://github.com/huggingface/datasets/pull/663 | 706,732,636 | MDExOlB1bGxSZXF1ZXN0NDkxMjI3NzUz | 663 | Created dataset card snli.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mcmillanmajora",
"id": 26722925,
"login": "mcmillanmajora",
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mcmillanmajora"
} | [
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
}
] | null | 11 | "2020-09-22T22:29:37Z" | "2020-10-13T17:05:20Z" | "2020-10-12T20:26:52Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/663.diff",
"html_url": "https://github.com/huggingface/datasets/pull/663",
"merged_at": "2020-10-12T20:26:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/663.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/663"
} | First draft of a dataset card using the SNLI corpus as an example.
This is mostly based on the [Google Doc draft](https://docs.google.com/document/d/1dKPGP-dA2W0QoTRGfqQ5eBp0CeSsTy7g2yM8RseHtos/edit), but I added a few sections and moved some things around.
- I moved **Who Was Involved** to follow **Language**, both because I thought the authors should be presented more towards the front and because I think it makes sense to present the speakers close to the language so it doesn't have to be repeated.
- I created a section I called **Data Characteristics** by pulling some things out of the other sections. I was thinking that this would be more about the language use in context of the specific task construction. That name isn't very descriptive though and could probably be improved.
-- Domain and language type out of **Language**. I particularly wanted to keep the Language section as simple and as abstracted from the task as possible.
-- 'How was the data collected' out of **Who Was Involved**
-- Normalization out of **Features/Dataset Structure**
-- I also added an annotation process section.
- I kept the **Features** section mostly the same as the Google Doc, but I renamed it **Dataset Structure** to more clearly separate it from the language use, and added some links to the documentation pages.
- I also kept **Tasks Supported**, **Known Limitations**, and **Licensing Information** mostly the same. Looking at it again though, maybe **Tasks Supported** should come before **Data Characteristics**?
The trickiest part about writing a dataset card for the SNLI corpus specifically is that it's built on datasets which are themselves built on datasets so I had to dig in a lot of places to find information. I think this will be easier with other datasets and once there is more uptake of dataset cards so they can just link to each other. (Maybe that needs to be an added section?)
I also made an effort not to repeat information across the sections or to refer to a previous section if the information was relevant in a later one. Is there too much repetition still? | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/663/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/663/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/662 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/662/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/662/comments | https://api.github.com/repos/huggingface/datasets/issues/662/events | https://github.com/huggingface/datasets/pull/662 | 706,689,866 | MDExOlB1bGxSZXF1ZXN0NDkxMTkyNTM3 | 662 | Created dataset card snli.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mcmillanmajora",
"id": 26722925,
"login": "mcmillanmajora",
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mcmillanmajora"
} | [
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] | closed | false | null | [] | null | 1 | "2020-09-22T21:00:17Z" | "2023-09-24T09:50:16Z" | "2020-09-22T21:26:21Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/662.diff",
"html_url": "https://github.com/huggingface/datasets/pull/662",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/662.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/662"
} | First draft of a dataset card using the SNLI corpus as an example | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/662/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/662/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/661/comments | https://api.github.com/repos/huggingface/datasets/issues/661/events | https://github.com/huggingface/datasets/pull/661 | 706,465,936 | MDExOlB1bGxSZXF1ZXN0NDkxMDA3NjEw | 661 | Replace pa.OSFile by open | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-22T15:05:59Z" | "2021-05-05T18:24:36Z" | "2020-09-22T15:15:25Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/661.diff",
"html_url": "https://github.com/huggingface/datasets/pull/661",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/661.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/661"
} | It should fix #643 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/661/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/661/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/660/comments | https://api.github.com/repos/huggingface/datasets/issues/660/events | https://github.com/huggingface/datasets/pull/660 | 706,324,032 | MDExOlB1bGxSZXF1ZXN0NDkwODkyMjQ0 | 660 | add openwebtext | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | null | [] | null | 3 | "2020-09-22T12:05:22Z" | "2020-10-06T09:20:10Z" | "2020-09-28T09:07:26Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/660.diff",
"html_url": "https://github.com/huggingface/datasets/pull/660",
"merged_at": "2020-09-28T09:07:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/660.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/660"
} | This adds [The OpenWebText Corpus](https://skylion007.github.io/OpenWebTextCorpus/), which is a clean and large text corpus for nlp pretraining. It is an open source effort to reproduce OpenAI’s WebText dataset used by GPT-2, and it is also needed to reproduce ELECTRA.
It solves #132 .
### Besides the dataset building script, I made some changes to the library.
1. Extract a large number of compressed files with multiprocessing
I added a `num_proc` argument to `DownloadManager.extract` and pass this `num_proc` on to `map_nested`, so I can decompress the 20 thousand compressed files faster (a rough sketch of this kind of parallel extraction is shown after this list). The `num_proc` argument defaults to `None`, so it shouldn't break anything else.
2. In `cached_path`, I changed the order in which the different kinds of compressed files (zip, tar, gzip) are handled
Because there is no way to detect with 100% certainty that a file is a zip file (see [this](https://stackoverflow.com/questions/18194688/how-can-i-determine-if-a-file-is-a-zip-file)), it wrongly detected `'./datasets/downloads/extracted/58764bd6898fa339b25d92e7fbbc3d8dbf64fb504edff1a30a1d7d99d1561027/openwebtext/urlsf_subset13-630_data.xz'` as a zip and tried to decompress it with zip, which of course raised an error. So I made it check whether the file is tar or gzip first and check for zip last.
3. `MockDownloadManager.extract`
Because I pass `num_proc` to `DownloadManager.extract`, I also have to make `MockDownloadManager.extract` accept extra keyword arguments. So I changed its signature to `extract(path, *args, **kwargs)`, but it still just returns the path as in the original implementation.
**Note**: If there is a better way to handle the points mentioned above, I would like to help, but unless we can solve point 4 (making dataset building fast), I may not be able to afford rebuilding the dataset again after a change to the dataset script (building the dataset cost me 4 days).
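As a rough illustration of the parallel extraction in point 1, here is a self-contained sketch using only the standard library; it is not the library's actual implementation, and the shard paths, worker count, and helper name are placeholders.
```python
import lzma
import os
from multiprocessing import Pool

def extract_xz(path):
    # Decompress a single .xz shard next to itself (illustrative helper, not library code).
    out_path = path[: -len(".xz")]
    with lzma.open(path, "rb") as src, open(out_path, "wb") as dst:
        dst.write(src.read())
    return out_path

if __name__ == "__main__":
    subset_xzs = ["subset000.xz", "subset001.xz"]  # placeholder paths to the .xz shards
    num_proc = round(os.cpu_count() * 0.75)        # same heuristic as in the dataset script
    with Pool(processes=num_proc) as pool:
        extracted = pool.map(extract_xz, subset_xzs)
```
In the PR itself, the same idea is exposed through the new `num_proc` argument of `dl_manager.extract`, which forwards it to `map_nested`.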
### There is something I think we can improve
4. Long time to decompress compressed files
Even though I decompress those 20 thousand compressed files with 12 processes on my 16-core 3.x GHz server, it still took about 3~4 days to complete dataset building. Most of the time was spent decompressing those files.
### Info about the source data
The source data is a tar.xz file with the following structure; the files/directories below the compressed file are what we get after decompressing it.
```
openwebtext.tar.xz
|__ openwebtext
|__subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ subset001.xz
|
....
```
And this is the structure of the dummy data, the same as the original one.
```
dummy_data.zip
|__ dummy_data
|__ openwebtext
|__fake_subset-1_data-dirxz # actually it is a directory
| |__ ....txt
| |__ ....txt
|__ fake_subset-2_data-dirxz
|__ ....txt
|__ ....txt
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 1,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/660/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/660/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/659 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/659/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/659/comments | https://api.github.com/repos/huggingface/datasets/issues/659/events | https://github.com/huggingface/datasets/pull/659 | 706,231,506 | MDExOlB1bGxSZXF1ZXN0NDkwODE4NTY1 | 659 | Keep new columns in transmit format | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-22T09:47:23Z" | "2020-09-22T10:07:22Z" | "2020-09-22T10:07:20Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/659.diff",
"html_url": "https://github.com/huggingface/datasets/pull/659",
"merged_at": "2020-09-22T10:07:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/659.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/659"
} | When a dataset is formatted with a list of columns that `__getitem__` should return, calling `map` to add new columns doesn't add the new columns to this list.
It caused `KeyError` issues in #620
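For context, here is a minimal sketch of the scenario being fixed; the column names and values are made up for illustration, and it assumes a `datasets` version that includes this change.
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "bar"], "label": [0, 1]})
ds.set_format(type="numpy", columns=["text"])             # __getitem__ restricted to "text"
ds = ds.map(lambda ex: {"length": len(str(ex["text"]))})  # map adds a brand new column
print(ds[0])  # with this change, "length" is expected to be returned here as well
```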
I changed the logic to add those new columns to the list that `__getitem__` should return. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/659/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/659/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/658 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/658/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/658/comments | https://api.github.com/repos/huggingface/datasets/issues/658/events | https://github.com/huggingface/datasets/pull/658 | 706,206,247 | MDExOlB1bGxSZXF1ZXN0NDkwNzk4MDc0 | 658 | Fix squad metric's Features | {
"avatar_url": "https://avatars.githubusercontent.com/u/8372098?v=4",
"events_url": "https://api.github.com/users/tshrjn/events{/privacy}",
"followers_url": "https://api.github.com/users/tshrjn/followers",
"following_url": "https://api.github.com/users/tshrjn/following{/other_user}",
"gists_url": "https://api.github.com/users/tshrjn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tshrjn",
"id": 8372098,
"login": "tshrjn",
"node_id": "MDQ6VXNlcjgzNzIwOTg=",
"organizations_url": "https://api.github.com/users/tshrjn/orgs",
"received_events_url": "https://api.github.com/users/tshrjn/received_events",
"repos_url": "https://api.github.com/users/tshrjn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tshrjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tshrjn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tshrjn"
} | [] | closed | false | null | [] | null | 1 | "2020-09-22T09:09:52Z" | "2020-09-29T15:58:30Z" | "2020-09-29T15:58:30Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/658.diff",
"html_url": "https://github.com/huggingface/datasets/pull/658",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/658.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/658"
} | Resolves issue [657](https://github.com/huggingface/datasets/issues/657). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/658/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/658/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/657 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/657/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/657/comments | https://api.github.com/repos/huggingface/datasets/issues/657/events | https://github.com/huggingface/datasets/issues/657 | 706,204,383 | MDU6SXNzdWU3MDYyMDQzODM= | 657 | Squad Metric Description & Feature Mismatch | {
"avatar_url": "https://avatars.githubusercontent.com/u/8372098?v=4",
"events_url": "https://api.github.com/users/tshrjn/events{/privacy}",
"followers_url": "https://api.github.com/users/tshrjn/followers",
"following_url": "https://api.github.com/users/tshrjn/following{/other_user}",
"gists_url": "https://api.github.com/users/tshrjn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tshrjn",
"id": 8372098,
"login": "tshrjn",
"node_id": "MDQ6VXNlcjgzNzIwOTg=",
"organizations_url": "https://api.github.com/users/tshrjn/orgs",
"received_events_url": "https://api.github.com/users/tshrjn/received_events",
"repos_url": "https://api.github.com/users/tshrjn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tshrjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tshrjn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tshrjn"
} | [] | closed | false | null | [] | null | 2 | "2020-09-22T09:07:00Z" | "2020-10-13T02:16:56Z" | "2020-09-29T15:57:38Z" | NONE | null | null | null | The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/657/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/657/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/656/comments | https://api.github.com/repos/huggingface/datasets/issues/656/events | https://github.com/huggingface/datasets/pull/656 | 705,736,319 | MDExOlB1bGxSZXF1ZXN0NDkwNDEwODAz | 656 | Use multiprocess from pathos for multiprocessing | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 4 | "2020-09-21T16:12:19Z" | "2020-09-28T14:45:40Z" | "2020-09-28T14:45:39Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/656.diff",
"html_url": "https://github.com/huggingface/datasets/pull/656",
"merged_at": "2020-09-28T14:45:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/656.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/656"
} | [Multiprocess](https://github.com/uqfoundation/multiprocess) (from the [pathos](https://github.com/uqfoundation/pathos) project) allows using lambda functions in a multiprocessed `map`.
It was suggested to use it by @kandorm.
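As a quick illustration of what this enables (a hedged sketch; the toy dataset and column names are made up):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(100))})
# With stdlib pickle-based multiprocessing, dispatching a lambda to the worker
# processes would fail to serialize; multiprocess relies on dill, so this should work.
ds = ds.map(lambda example: {"b": example["a"] * 2}, num_proc=2)
print(ds[0])
```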
We're already using dill which is its only dependency. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/656/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/656/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/655 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/655/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/655/comments | https://api.github.com/repos/huggingface/datasets/issues/655/events | https://github.com/huggingface/datasets/pull/655 | 705,672,208 | MDExOlB1bGxSZXF1ZXN0NDkwMzU4OTQ3 | 655 | added Winogrande debiased subset | {
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao"
} | [] | closed | false | null | [] | null | 2 | "2020-09-21T14:51:08Z" | "2020-09-21T16:20:40Z" | "2020-09-21T16:16:04Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/655.diff",
"html_url": "https://github.com/huggingface/datasets/pull/655",
"merged_at": "2020-09-21T16:16:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/655.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/655"
} | The [Winogrande](https://arxiv.org/abs/1907.10641) paper mentions a `debiased` subset that wasn't in the first release; this PR adds it. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/655/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/655/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/654 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/654/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/654/comments | https://api.github.com/repos/huggingface/datasets/issues/654/events | https://github.com/huggingface/datasets/pull/654 | 705,511,058 | MDExOlB1bGxSZXF1ZXN0NDkwMjI1Nzk3 | 654 | Allow empty inputs in metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-21T11:26:36Z" | "2020-10-06T03:51:48Z" | "2020-09-21T16:13:38Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/654.diff",
"html_url": "https://github.com/huggingface/datasets/pull/654",
"merged_at": "2020-09-21T16:13:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/654.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/654"
} | There was an Arrow error when trying to compute a metric with empty inputs. The error occurred when reading the Arrow file, before `metric._compute` was called. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/654/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/654/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/653 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/653/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/653/comments | https://api.github.com/repos/huggingface/datasets/issues/653/events | https://github.com/huggingface/datasets/pull/653 | 705,482,391 | MDExOlB1bGxSZXF1ZXN0NDkwMjAxOTg4 | 653 | handle data alteration when trying type | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-21T10:41:49Z" | "2020-09-21T16:13:06Z" | "2020-09-21T16:13:05Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/653.diff",
"html_url": "https://github.com/huggingface/datasets/pull/653",
"merged_at": "2020-09-21T16:13:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/653.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/653"
} | Fix #649
The bug came from the type inference, which didn't handle a weird case in PyArrow.
Indeed this code runs without error but alters the data in arrow:
```python
import pyarrow as pa
type = pa.struct({"a": pa.struct({"b": pa.string()})})
array_with_altered_data = pa.array([{"a": {"b": "foo", "c": "bar"}}] * 10, type=type)
print(array_with_altered_data[0].as_py())
# {'a': {'b': 'foo'}} -> the sub-field "c" is missing
```
(I don't know if this is intended in pyarrow tbh)
We didn't take this case into account during type inference: by default it kept the old features and could therefore alter data.
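A rough, runnable paraphrase of the kind of guard this calls for (the function name and structure are illustrative, not the actual source):
```python
import pyarrow as pa

def type_preserves_first_example(example, candidate_type):
    # Accept a candidate pyarrow type only if encoding the first example with it
    # round-trips unchanged, i.e. no sub-field is silently dropped or altered.
    return pa.array([example], type=candidate_type)[0].as_py() == example

example = {"a": {"b": "foo", "c": "bar"}}
old_type = pa.struct({"a": pa.struct({"b": pa.string()})})
print(type_preserves_first_example(example, old_type))  # False -> re-infer the type instead
```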
To fix that I added a line that checks that the first element of the array is not altered. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/653/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/653/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/652 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/652/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/652/comments | https://api.github.com/repos/huggingface/datasets/issues/652/events | https://github.com/huggingface/datasets/pull/652 | 705,390,850 | MDExOlB1bGxSZXF1ZXN0NDkwMTI3MjIx | 652 | handle connection error in download_prepared_from_hf_gcs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-21T08:21:11Z" | "2020-09-21T08:28:43Z" | "2020-09-21T08:28:42Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/652.diff",
"html_url": "https://github.com/huggingface/datasets/pull/652",
"merged_at": "2020-09-21T08:28:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/652.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/652"
} | Fix #647 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/652/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/652/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/651/comments | https://api.github.com/repos/huggingface/datasets/issues/651/events | https://github.com/huggingface/datasets/issues/651 | 705,212,034 | MDU6SXNzdWU3MDUyMTIwMzQ= | 651 | Problem with JSON dataset format | {
"avatar_url": "https://avatars.githubusercontent.com/u/12724810?v=4",
"events_url": "https://api.github.com/users/vikigenius/events{/privacy}",
"followers_url": "https://api.github.com/users/vikigenius/followers",
"following_url": "https://api.github.com/users/vikigenius/following{/other_user}",
"gists_url": "https://api.github.com/users/vikigenius/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vikigenius",
"id": 12724810,
"login": "vikigenius",
"node_id": "MDQ6VXNlcjEyNzI0ODEw",
"organizations_url": "https://api.github.com/users/vikigenius/orgs",
"received_events_url": "https://api.github.com/users/vikigenius/received_events",
"repos_url": "https://api.github.com/users/vikigenius/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vikigenius/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikigenius/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vikigenius"
} | [] | open | false | null | [] | null | 2 | "2020-09-20T23:57:14Z" | "2020-09-21T12:14:24Z" | null | NONE | null | null | null | I have a local json dataset with the following form.
```
{
  'id01234': {'key1': value1, 'key2': value2, 'key3': value3},
  'id01235': {'key1': value1, 'key2': value2, 'key3': value3},
  .
  .
  .
  'id09999': {'key1': value1, 'key2': value2, 'key3': value3}
}
```
Note that instead of a list of records, it's basically a dictionary of key-value pairs, with the keys being the record IDs and the values being the corresponding records.
Reading this with json:
```
data = datasets.load_dataset('json', data_files='path_to_local.json')
```
Throws an error and asks me to choose a field. What's the right way to handle this? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/651/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/651/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/650/comments | https://api.github.com/repos/huggingface/datasets/issues/650/events | https://github.com/huggingface/datasets/issues/650 | 704,861,844 | MDU6SXNzdWU3MDQ4NjE4NDQ= | 650 | dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators` | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | null | [] | null | 4 | "2020-09-19T11:07:03Z" | "2020-09-22T11:54:10Z" | "2020-09-22T11:54:09Z" | CONTRIBUTOR | null | null | null | Hi, I recently wanted to add a dataset whose source data looks like this
```
openwebtext.tar.xz
|__ openwebtext
|__subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ subset001.xz
|
....
```
So I wrote `openwebtext.py` like this
```
def _split_generators(self, dl_manager):
    dl_dir = dl_manager.download_and_extract(_URL)
    owt_dir = os.path.join(dl_dir, 'openwebtext')
    subset_xzs = [
        os.path.join(owt_dir, file_name) for file_name in os.listdir(owt_dir) if file_name.endswith('xz')  # filter out ...xz.lock
    ]
    ex_dirs = dl_manager.extract(subset_xzs, num_proc=round(os.cpu_count() * 0.75))
    nested_txt_files = [
        [
            os.path.join(ex_dir, txt_file_name) for txt_file_name in os.listdir(ex_dir) if txt_file_name.endswith('txt')
        ] for ex_dir in ex_dirs
    ]
    txt_files = chain(*nested_txt_files)
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN, gen_kwargs={"txt_files": txt_files}
        ),
    ]
```
All went well, and I can load and use the real openwebtext, except when I try to test with dummy data. The problem is that `MockDownloadManager.extract` does nothing, so `ex_dirs = dl_manager.extract(subset_xzs)` won't decompress the `subset_xxx.xz` files for me.
What should I do? Or could you modify `MockDownloadManager` so that it behaves like a real `DownloadManager`? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/650/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/650/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/649 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/649/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/649/comments | https://api.github.com/repos/huggingface/datasets/issues/649/events | https://github.com/huggingface/datasets/issues/649 | 704,838,415 | MDU6SXNzdWU3MDQ4Mzg0MTU= | 649 | Inconsistent behavior in map | {
"avatar_url": "https://avatars.githubusercontent.com/u/10166085?v=4",
"events_url": "https://api.github.com/users/krandiash/events{/privacy}",
"followers_url": "https://api.github.com/users/krandiash/followers",
"following_url": "https://api.github.com/users/krandiash/following{/other_user}",
"gists_url": "https://api.github.com/users/krandiash/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/krandiash",
"id": 10166085,
"login": "krandiash",
"node_id": "MDQ6VXNlcjEwMTY2MDg1",
"organizations_url": "https://api.github.com/users/krandiash/orgs",
"received_events_url": "https://api.github.com/users/krandiash/received_events",
"repos_url": "https://api.github.com/users/krandiash/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/krandiash/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krandiash/subscriptions",
"type": "User",
"url": "https://api.github.com/users/krandiash"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 1 | "2020-09-19T08:41:12Z" | "2020-09-21T16:13:05Z" | "2020-09-21T16:13:05Z" | NONE | null | null | null | I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem.
```python
import datasets
# Dataset with a single feature called 'field' consisting of two examples
dataset = datasets.Dataset.from_dict({'field': ['a', 'b']})
print(dataset[0])
# outputs
{'field': 'a'}
# Map this dataset to create another feature called 'otherfield', which is a dictionary containing a key called 'capital'
dataset = dataset.map(lambda example: {'otherfield': {'capital': example['field'].capitalize()}})
print(dataset[0])
# output is okay
{'field': 'a', 'otherfield': {'capital': 'A'}}
# Now I want to map again to modify 'otherfield', by adding another key called 'append_x' to the dictionary under 'otherfield'
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x'}})[0])
# printing out the first example after applying the map shows that the new key 'append_x' doesn't get added
# it also messes up the value stored at 'capital'
{'field': 'a', 'otherfield': {'capital': None}}
# Instead, I try to do the same thing by using a different mapped fn
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}})[0])
# this preserves the value under capital, but still no 'append_x'
{'field': 'a', 'otherfield': {'capital': 'A'}}
# Instead, I try to pass 'otherfield' to remove_columns
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}}, remove_columns=['otherfield'])[0])
# this still doesn't fix the problem
{'field': 'a', 'otherfield': {'capital': 'A'}}
# Alternately, here's what happens if I just directly map both 'capital' and 'append_x' on a fresh dataset.
# Recreate the dataset
dataset = datasets.Dataset.from_dict({'field': ['a', 'b']})
# Now map the entire 'otherfield' dict directly, instead of incrementally as before
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['field'].capitalize()}})[0])
# This looks good!
{'field': 'a', 'otherfield': {'append_x': 'ax', 'capital': 'A'}}
```
This might be a new issue, because I didn't see this behavior in the `nlp` library.
Any help is appreciated! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/649/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/649/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/648 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/648/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/648/comments | https://api.github.com/repos/huggingface/datasets/issues/648/events | https://github.com/huggingface/datasets/issues/648 | 704,753,123 | MDU6SXNzdWU3MDQ3NTMxMjM= | 648 | offset overflow when multiprocessing batched map on large datasets. | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | 2 | "2020-09-19T02:15:11Z" | "2020-09-19T16:47:07Z" | "2020-09-19T16:46:31Z" | CONTRIBUTOR | null | null | null | It only happened when using "multiprocessing" + "batched" + "large dataset" all at the same time.
```
def bprocess(examples):
    examples['len'] = []
    for text in examples['text']:
        examples['len'].append(len(text))
    return examples

wiki.map(bprocess, batched=True, num_proc=8)
```
```
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 153, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/yisiang/datasets/src/datasets/fingerprint.py", line 163, in wrapper
out = func(self, *args, **kwargs)
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1486, in _map_single
batch = self[i : i + batch_size]
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1071, in __getitem__
format_kwargs=self._format_kwargs,
File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 972, in _getitem
data_subset = self._data.take(indices_array)
File "pyarrow/table.pxi", line 1145, in pyarrow.lib.Table.take
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/pyarrow/compute.py", line 268, in take
return call_function('take', [data, indices], options)
File "pyarrow/_compute.pyx", line 298, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 192, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
"""
The above exception was the direct cause of the following exception:
ArrowInvalid Traceback (most recent call last)
in
30 owt = datasets.load_dataset('/home/yisiang/datasets/datasets/openwebtext/openwebtext.py', cache_dir='./datasets')['train']
31 print('load/create data from OpenWebText Corpus for ELECTRA')
---> 32 e_owt = ELECTRAProcessor(owt, apply_cleaning=False).map(cache_file_name=f"electra_owt_{c.max_length}.arrow")
33 dsets.append(e_owt)
34
~/Reexamine_Attention/electra_pytorch/_utils/utils.py in map(self, **kwargs)
126 writer_batch_size=10**4,
127 num_proc=num_proc,
--> 128 **kwargs
129 )
130
~/hugdatafast/hugdatafast/transform.py in my_map(self, *args, **kwargs)
21 if not cache_file_name.endswith('.arrow'): cache_file_name += '.arrow'
22 if '/' not in cache_file_name: cache_file_name = os.path.join(self.cache_directory(), cache_file_name)
---> 23 return self.map(*args, cache_file_name=cache_file_name, **kwargs)
24
25 @patch
~/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1285 logger.info("Spawning {} processes".format(num_proc))
1286 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
-> 1287 transformed_shards = [r.get() for r in results]
1288 logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
1289 result = concatenate_datasets(transformed_shards)
~/datasets/src/datasets/arrow_dataset.py in (.0)
1285 logger.info("Spawning {} processes".format(num_proc))
1286 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
-> 1287 transformed_shards = [r.get() for r in results]
1288 logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
1289 result = concatenate_datasets(transformed_shards)
~/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):
ArrowInvalid: offset overflow while concatenating arrays
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/648/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/648/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/647 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/647/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/647/comments | https://api.github.com/repos/huggingface/datasets/issues/647/events | https://github.com/huggingface/datasets/issues/647 | 704,734,764 | MDU6SXNzdWU3MDQ3MzQ3NjQ= | 647 | Cannot download dataset_info.json | {
"avatar_url": "https://avatars.githubusercontent.com/u/33407613?v=4",
"events_url": "https://api.github.com/users/chiyuzhang94/events{/privacy}",
"followers_url": "https://api.github.com/users/chiyuzhang94/followers",
"following_url": "https://api.github.com/users/chiyuzhang94/following{/other_user}",
"gists_url": "https://api.github.com/users/chiyuzhang94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chiyuzhang94",
"id": 33407613,
"login": "chiyuzhang94",
"node_id": "MDQ6VXNlcjMzNDA3NjEz",
"organizations_url": "https://api.github.com/users/chiyuzhang94/orgs",
"received_events_url": "https://api.github.com/users/chiyuzhang94/received_events",
"repos_url": "https://api.github.com/users/chiyuzhang94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chiyuzhang94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chiyuzhang94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chiyuzhang94"
} | [] | closed | false | null | [] | null | 4 | "2020-09-19T01:35:15Z" | "2020-09-21T08:28:42Z" | "2020-09-21T08:28:42Z" | NONE | null | null | null | I am running my job on a cloud server that does not allow connections from the standard compute nodes to outside resources. Hence, when I use `datasets.load_dataset()` to load data, I get an error like this:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text/default-53ee3045f07ba8ca/0.0.0/dataset_info.json
```
I tried to open this link manually, but I cannot access the file. How can I download this file and pass it to `datasets.load_dataset()` manually? (See the sketch below for one possible offline workflow.)
Versions:
Python version 3.7.3
PyTorch version 1.6.0
TensorFlow version 2.3.0
datasets version: 1.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/647/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/647/timeline | null | completed | false |
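For offline environments like the one in issue 647 above, a pattern that often works is to materialize the dataset on a machine that does have internet access and copy the serialized folder to the cluster. This is a sketch under the assumption that the installed `datasets` version provides `Dataset.save_to_disk` and `datasets.load_from_disk`; the dataset, file names, and paths are placeholders.
```python
# On a machine with internet access:
from datasets import load_dataset

dataset = load_dataset("text", data_files="my_corpus.txt", split="train")
dataset.save_to_disk("/tmp/my_dataset")  # copy this folder to the cluster afterwards

# On the offline compute node, after copying the folder:
from datasets import load_from_disk

dataset = load_from_disk("/path/on/cluster/my_dataset")
```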
https://api.github.com/repos/huggingface/datasets/issues/646 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/646/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/646/comments | https://api.github.com/repos/huggingface/datasets/issues/646/events | https://github.com/huggingface/datasets/pull/646 | 704,607,371 | MDExOlB1bGxSZXF1ZXN0NDg5NTAyMTM3 | 646 | Fix docs typos | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 0 | "2020-09-18T19:32:27Z" | "2020-09-21T16:30:54Z" | "2020-09-21T16:14:12Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/646.diff",
"html_url": "https://github.com/huggingface/datasets/pull/646",
"merged_at": "2020-09-21T16:14:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/646.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/646"
} | This PR fixes a few typos in the docs and the error in the code snippet in the set_format section of docs/source/torch_tensorflow.rst. `torch.utils.data.DataLoader` expects padded batches, so it throws an error because it cannot stack the unpadded tensors. If we follow the Quick tour from the docs, where the `truncation=True, padding='max_length'` arguments are added to the tokenizer call before passing data to the DataLoader, the issue is easily fixed. (See the illustrative sketch below.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/646/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/646/timeline | null | null | true |
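To make the fix described in PR 646 above concrete, the sketch below shows the pattern of padded tokenization followed by `set_format` before handing the dataset to a PyTorch `DataLoader`. It is an illustration only; the dataset (`glue`/`mrpc`), the checkpoint name, and the column names are assumptions and may differ from the actual snippet in the docs.
```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = load_dataset("glue", "mrpc", split="train")

# Pad/truncate so every example has the same length and the tensors can be stacked.
dataset = dataset.map(
    lambda e: tokenizer(e["sentence1"], truncation=True, padding="max_length"),
    batched=True,
)
dataset.set_format(type="torch", columns=["input_ids", "attention_mask", "label"])

dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)
```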
https://api.github.com/repos/huggingface/datasets/issues/645 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/645/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/645/comments | https://api.github.com/repos/huggingface/datasets/issues/645/events | https://github.com/huggingface/datasets/pull/645 | 704,542,234 | MDExOlB1bGxSZXF1ZXN0NDg5NDQ5MjAx | 645 | Don't use take on dataset table in pyarrow 1.0.x | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 4 | "2020-09-18T17:31:34Z" | "2023-09-19T07:59:19Z" | "2020-09-19T16:46:31Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/645.diff",
"html_url": "https://github.com/huggingface/datasets/pull/645",
"merged_at": "2020-09-19T16:46:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/645.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/645"
} | Fix #615 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/645/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/645/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/644 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/644/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/644/comments | https://api.github.com/repos/huggingface/datasets/issues/644/events | https://github.com/huggingface/datasets/pull/644 | 704,534,501 | MDExOlB1bGxSZXF1ZXN0NDg5NDQzMTk1 | 644 | Better windows support | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 1 | "2020-09-18T17:17:36Z" | "2020-09-25T14:02:30Z" | "2020-09-25T14:02:28Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/644.diff",
"html_url": "https://github.com/huggingface/datasets/pull/644",
"merged_at": "2020-09-25T14:02:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/644.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/644"
} | There are a few differences in the behavior of Python and pyarrow on Windows.
For example, there are restrictions when accessing or deleting files that are open.
Fix #590 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/644/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/644/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/643 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/643/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/643/comments | https://api.github.com/repos/huggingface/datasets/issues/643/events | https://github.com/huggingface/datasets/issues/643 | 704,477,164 | MDU6SXNzdWU3MDQ0NzcxNjQ= | 643 | Caching processed dataset at wrong folder | {
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mrm8488",
"id": 3653789,
"login": "mrm8488",
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mrm8488"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | 13 | "2020-09-18T15:41:26Z" | "2022-02-16T14:53:29Z" | "2022-02-16T14:53:29Z" | CONTRIBUTOR | null | null | null | Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
    return tokenizer(examples['text'], truncation=True, padding='max_length')
dataset = dataset.map(encode, batched=True)
```
The file is about 4 GB, so I cannot process it on the Colab HD because there is not enough space. So I decided to mount my Google Drive fs and do it there.
The dataset is cached in the right place, but processing it (applying the `encode` function) seems to use a different folder, because the Colab HD usage starts to grow and it crashes, when it should all happen on the Drive fs. (See the sketch below for one possible workaround.)
What drives me crazy is that it prints that it is processing/encoding the dataset in the right folder:
```
Testing the mapped function outputs
Testing finished, running the mapping function on the dataset
Caching processed dataset at /content/drive/My Drive/text/default-ad3e69d6242ee916/0.0.0/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/cache-b16341780a59747d.arrow
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/643/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/643/timeline | null | completed | false |
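One way to control where `map` writes its processed Arrow file in a setup like issue 643 above is to pass `cache_file_name` explicitly instead of relying on the default cache location. A minimal sketch, assuming the same Drive mount as in the report and that `tokenizer` is already defined; whether this removes the local-disk growth observed there is not confirmed by the original issue.
```python
from datasets import load_dataset

dataset = load_dataset(
    "text",
    data_files="/content/corpus.txt",
    cache_dir="/content/drive/My Drive",
    split="train",
)

def encode(examples):
    # `tokenizer` is assumed to be defined as in the original report.
    return tokenizer(examples["text"], truncation=True, padding="max_length")

# Write the processed table to an explicit location on the mounted Drive.
dataset = dataset.map(
    encode,
    batched=True,
    cache_file_name="/content/drive/My Drive/corpus-encoded.arrow",
)
```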
https://api.github.com/repos/huggingface/datasets/issues/642 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/642/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/642/comments | https://api.github.com/repos/huggingface/datasets/issues/642/events | https://github.com/huggingface/datasets/pull/642 | 704,397,499 | MDExOlB1bGxSZXF1ZXN0NDg5MzMwMDAx | 642 | Rename wnut fields | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-18T13:51:31Z" | "2020-09-18T17:18:31Z" | "2020-09-18T17:18:30Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/642.diff",
"html_url": "https://github.com/huggingface/datasets/pull/642",
"merged_at": "2020-09-18T17:18:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/642.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/642"
} | As mentioned in #641 it would be cool to have it follow the naming of the other NER datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/642/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/642/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/641 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/641/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/641/comments | https://api.github.com/repos/huggingface/datasets/issues/641/events | https://github.com/huggingface/datasets/pull/641 | 704,373,940 | MDExOlB1bGxSZXF1ZXN0NDg5MzExOTU3 | 641 | Add Polyglot-NER Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joeddav",
"id": 9353833,
"login": "joeddav",
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"repos_url": "https://api.github.com/users/joeddav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joeddav"
} | [] | closed | false | null | [] | null | 7 | "2020-09-18T13:21:44Z" | "2020-09-20T03:04:43Z" | "2020-09-20T03:04:43Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/641.diff",
"html_url": "https://github.com/huggingface/datasets/pull/641",
"merged_at": "2020-09-20T03:04:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/641.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/641"
} | Adds the [Polyglot-NER dataset](https://sites.google.com/site/rmyeid/projects/polylgot-ner) with named entity tags for 40 languages. I include separate configs for each language as well as a `combined` config which lumps them all together. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/641/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/641/timeline | null | null | true |
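A short usage sketch for the configurations described in PR 641 above. The dataset identifier `polyglot_ner` and the exact config names are assumptions based on the PR description rather than the merged script, so they may need adjusting.
```python
from datasets import load_dataset

# The PR describes one config per language plus a "combined" config covering all 40 languages.
combined = load_dataset("polyglot_ner", "combined", split="train")
print(combined[0])  # expected to contain the tokens and their NER tags
```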
https://api.github.com/repos/huggingface/datasets/issues/640 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/640/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/640/comments | https://api.github.com/repos/huggingface/datasets/issues/640/events | https://github.com/huggingface/datasets/pull/640 | 704,311,758 | MDExOlB1bGxSZXF1ZXN0NDg5MjYwNTc1 | 640 | Make shuffle compatible with temp_seed | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-18T11:38:58Z" | "2020-09-18T11:47:51Z" | "2020-09-18T11:47:50Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/640.diff",
"html_url": "https://github.com/huggingface/datasets/pull/640",
"merged_at": "2020-09-18T11:47:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/640.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/640"
} | This code used to return a different dataset at each run
```python
import datasets as ds
dataset = ...
with ds.temp_seed(42):
    shuffled = dataset.shuffle()
```
Now it returns the same one since the seed is set | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/640/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/640/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/639 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/639/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/639/comments | https://api.github.com/repos/huggingface/datasets/issues/639/events | https://github.com/huggingface/datasets/pull/639 | 704,217,963 | MDExOlB1bGxSZXF1ZXN0NDg5MTgxOTY3 | 639 | Update glue QQP checksum | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-18T09:08:15Z" | "2020-09-18T11:37:08Z" | "2020-09-18T11:37:07Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/639.diff",
"html_url": "https://github.com/huggingface/datasets/pull/639",
"merged_at": "2020-09-18T11:37:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/639.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/639"
} | Fix #638 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/639/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/639/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/638 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/638/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/638/comments | https://api.github.com/repos/huggingface/datasets/issues/638/events | https://github.com/huggingface/datasets/issues/638 | 704,146,956 | MDU6SXNzdWU3MDQxNDY5NTY= | 638 | GLUE/QQP dataset: NonMatchingChecksumError | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | null | [] | null | 1 | "2020-09-18T07:09:10Z" | "2020-09-18T11:37:07Z" | "2020-09-18T11:37:07Z" | CONTRIBUTOR | null | null | null | Hi @lhoestq , I know you are busy and there are also other important issues. But if this is easy to fix, I am shamelessly wondering if you could give me some help, so I can evaluate my models and restart my development cycle asap. 😚 (See the sketch below for a possible stop-gap.)
datasets version: editable install of master at 9/17
`datasets.load_dataset('glue','qqp', cache_dir='./datasets')`
```
Downloading and preparing dataset glue/qqp (download: 57.73 MiB, generated: 107.02 MiB, post-processed: Unknown size, total: 164.75 MiB) to ./datasets/glue/qqp/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4...
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
in
----> 1 datasets.load_dataset('glue','qqp', cache_dir='./datasets')
~/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
609 download_config=download_config,
610 download_mode=download_mode,
--> 611 ignore_verifications=ignore_verifications,
612 )
613
~/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
467 if not downloaded_from_gcs:
468 self._download_and_prepare(
--> 469 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
470 )
471 # Sync info
~/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
527 if verify_infos:
528 verify_checksums(
--> 529 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
530 )
531
~/datasets/src/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://dl.fbaipublicfiles.com/glue/data/QQP-clean.zip']
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/638/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/638/timeline | null | completed | false |
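The traceback in issue 638 above shows that `load_dataset` accepts an `ignore_verifications` argument. As a stop-gap (not a fix for the stale checksum itself, which PR 639 above addresses), verification can be skipped when the remote file is known to be fine and has merely changed:
```python
import datasets

# Skip checksum/size verification of the downloaded source files.
qqp = datasets.load_dataset("glue", "qqp", cache_dir="./datasets", ignore_verifications=True)
```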
https://api.github.com/repos/huggingface/datasets/issues/637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/637/comments | https://api.github.com/repos/huggingface/datasets/issues/637/events | https://github.com/huggingface/datasets/pull/637 | 703,539,909 | MDExOlB1bGxSZXF1ZXN0NDg4NjMwNzk4 | 637 | Add MATINF | {
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JetRunner",
"id": 22514219,
"login": "JetRunner",
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JetRunner"
} | [] | closed | false | null | [] | null | 0 | "2020-09-17T12:24:53Z" | "2020-09-17T13:23:18Z" | "2020-09-17T13:23:17Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/637.diff",
"html_url": "https://github.com/huggingface/datasets/pull/637",
"merged_at": "2020-09-17T13:23:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/637.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/637"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/637/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/637/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/636 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/636/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/636/comments | https://api.github.com/repos/huggingface/datasets/issues/636/events | https://github.com/huggingface/datasets/pull/636 | 702,883,989 | MDExOlB1bGxSZXF1ZXN0NDg4MDg3OTA5 | 636 | Consistent ner features | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-09-16T15:56:25Z" | "2020-09-17T09:52:59Z" | "2020-09-17T09:52:58Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/636.diff",
"html_url": "https://github.com/huggingface/datasets/pull/636",
"merged_at": "2020-09-17T09:52:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/636.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/636"
} | As discussed in #613 , this PR aims at making NER feature names consistent across datasets.
I changed the feature names of LinCE and XTREME/PAN-X | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/636/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/636/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/635 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/635/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/635/comments | https://api.github.com/repos/huggingface/datasets/issues/635/events | https://github.com/huggingface/datasets/pull/635 | 702,822,439 | MDExOlB1bGxSZXF1ZXN0NDg4MDM2OTE5 | 635 | Loglevel | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 2 | "2020-09-16T14:37:53Z" | "2020-09-17T09:52:19Z" | "2020-09-17T09:52:18Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/635.diff",
"html_url": "https://github.com/huggingface/datasets/pull/635",
"merged_at": "2020-09-17T09:52:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/635.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/635"
} | Continuation of #618 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/635/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/635/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/634 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/634/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/634/comments | https://api.github.com/repos/huggingface/datasets/issues/634/events | https://github.com/huggingface/datasets/pull/634 | 702,676,041 | MDExOlB1bGxSZXF1ZXN0NDg3OTEzOTk4 | 634 | Add ConLL-2000 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"events_url": "https://api.github.com/users/vblagoje/events{/privacy}",
"followers_url": "https://api.github.com/users/vblagoje/followers",
"following_url": "https://api.github.com/users/vblagoje/following{/other_user}",
"gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vblagoje",
"id": 458335,
"login": "vblagoje",
"node_id": "MDQ6VXNlcjQ1ODMzNQ==",
"organizations_url": "https://api.github.com/users/vblagoje/orgs",
"received_events_url": "https://api.github.com/users/vblagoje/received_events",
"repos_url": "https://api.github.com/users/vblagoje/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vblagoje"
} | [] | closed | false | null | [] | null | 0 | "2020-09-16T11:14:11Z" | "2020-09-17T10:38:10Z" | "2020-09-17T10:38:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/634.diff",
"html_url": "https://github.com/huggingface/datasets/pull/634",
"merged_at": "2020-09-17T10:38:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/634.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/634"
} | Adds ConLL-2000 dataset used for text chunking. See https://www.clips.uantwerpen.be/conll2000/chunking/ for details and [motivation](https://github.com/huggingface/transformers/pull/7041#issuecomment-692710948) behind this PR | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/634/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/634/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/633 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/633/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/633/comments | https://api.github.com/repos/huggingface/datasets/issues/633/events | https://github.com/huggingface/datasets/issues/633 | 702,440,484 | MDU6SXNzdWU3MDI0NDA0ODQ= | 633 | Load large text file for LM pre-training resulting in OOM | {
"avatar_url": "https://avatars.githubusercontent.com/u/29704017?v=4",
"events_url": "https://api.github.com/users/leethu2012/events{/privacy}",
"followers_url": "https://api.github.com/users/leethu2012/followers",
"following_url": "https://api.github.com/users/leethu2012/following{/other_user}",
"gists_url": "https://api.github.com/users/leethu2012/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leethu2012",
"id": 29704017,
"login": "leethu2012",
"node_id": "MDQ6VXNlcjI5NzA0MDE3",
"organizations_url": "https://api.github.com/users/leethu2012/orgs",
"received_events_url": "https://api.github.com/users/leethu2012/received_events",
"repos_url": "https://api.github.com/users/leethu2012/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leethu2012/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leethu2012/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leethu2012"
} | [] | open | false | null | [] | null | 27 | "2020-09-16T04:33:15Z" | "2021-02-16T12:02:01Z" | null | NONE | null | null | null | I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this:
```python
from datasets import load_dataset


@dataclass
class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling):
    """
    Data collator used for language modeling based on DataCollatorForLazyLanguageModeling
    - collates batches of tensors, honoring their tokenizer's pad_token
    - preprocesses batches for masked language modeling
    """

    block_size: int = 512

    def __call__(self, examples: List[dict]) -> Dict[str, torch.Tensor]:
        examples = [example['text'] for example in examples]
        batch, attention_mask = self._tensorize_batch(examples)
        if self.mlm:
            inputs, labels = self.mask_tokens(batch)
            return {"input_ids": inputs, "labels": labels}
        else:
            labels = batch.clone().detach()
            if self.tokenizer.pad_token_id is not None:
                labels[labels == self.tokenizer.pad_token_id] = -100
            return {"input_ids": batch, "labels": labels}

    def _tensorize_batch(self, examples: List[str]) -> Tuple[torch.Tensor, torch.Tensor]:
        if self.tokenizer._pad_token is None:
            raise ValueError(
                "You are attempting to pad samples but the tokenizer you are using"
                f" ({self.tokenizer.__class__.__name__}) does not have one."
            )
        tensor_examples = self.tokenizer.batch_encode_plus(
            [ex for ex in examples if ex],
            max_length=self.block_size,
            return_tensors="pt",
            pad_to_max_length=True,
            return_attention_mask=True,
            truncation=True,
        )
        input_ids, attention_mask = tensor_examples["input_ids"], tensor_examples["attention_mask"]
        return input_ids, attention_mask


dataset = load_dataset('text', data_files='train.txt', cache_dir="./", split='train')
data_collator = DataCollatorForDatasetsLanguageModeling(tokenizer=tokenizer, mlm=True,
                                                        mlm_probability=0.15, block_size=tokenizer.max_len)
trainer = Trainer(model=model, args=args, data_collator=data_collator,
                  train_dataset=dataset, prediction_loss_only=True)
trainer.train(model_path=model_path)
```
This train.txt is about 1.1GB and has 90k lines where each line is a sequence of 4k words.
During training, the memory usage increased quickly, as shown in the following graph, and resulted in OOM before training finished.

Could you please give me any suggestions on why this happened and how to fix it? (One hedged idea is sketched below.)
Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/633/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/633/timeline | null | null | false |
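One mitigation sometimes suggested for setups like issue 633 above (an assumption about what may help, not a confirmed fix for that report) is to tokenize once up front with a batched `map`, which writes its output to the memory-mapped Arrow cache on disk, and then set the output format to torch tensors so the collator no longer re-tokenizes raw text at every step. A minimal sketch, assuming `tokenizer` is already defined as in the report and a fixed maximum length of 512:
```python
from datasets import load_dataset

dataset = load_dataset("text", data_files="train.txt", split="train")

def tokenize(batch):
    # Fixed-length encoding so batches can be stacked directly by the collator.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=512)

# A batched map writes its result to an on-disk Arrow file rather than keeping it in RAM.
dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])
dataset.set_format(type="torch", columns=["input_ids", "attention_mask"])
```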
https://api.github.com/repos/huggingface/datasets/issues/632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/632/comments | https://api.github.com/repos/huggingface/datasets/issues/632/events | https://github.com/huggingface/datasets/pull/632 | 702,358,124 | MDExOlB1bGxSZXF1ZXN0NDg3NjQ5OTQ2 | 632 | Fix typos in the loading datasets docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 1 | "2020-09-16T00:27:41Z" | "2020-09-21T16:31:11Z" | "2020-09-16T06:52:44Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/632.diff",
"html_url": "https://github.com/huggingface/datasets/pull/632",
"merged_at": "2020-09-16T06:52:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/632.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/632"
} | This PR fixes two typos in the loading datasets docs, one of them being a broken link to the `load_dataset` function. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/632/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/632/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/631 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/631/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/631/comments | https://api.github.com/repos/huggingface/datasets/issues/631/events | https://github.com/huggingface/datasets/pull/631 | 701,711,255 | MDExOlB1bGxSZXF1ZXN0NDg3MTE3OTA0 | 631 | Fix text delimiter | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 5 | "2020-09-15T08:08:42Z" | "2020-09-22T15:03:06Z" | "2020-09-15T08:26:25Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/631.diff",
"html_url": "https://github.com/huggingface/datasets/pull/631",
"merged_at": "2020-09-15T08:26:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/631.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/631"
} | I changed the delimiter in the `text` dataset script.
It should fix the `pyarrow.lib.ArrowInvalid: CSV parse error` from #622.
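Roughly, the relevant read call now looks like the sketch below (hypothetical file path; the actual script wires in its own read/convert options):
```python
import pyarrow.csv as pac

# Sketch: read a plain-text file as a single "text" column, using a
# delimiter that should never occur in the data so lines are not split.
read_options = pac.ReadOptions(column_names=["text"])
parse_options = pac.ParseOptions(delimiter="\b")
table = pac.read_csv("my_corpus.txt", read_options=read_options, parse_options=parse_options)
```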
I changed the delimiter to an unused ASCII character that is not present in text files: `\b` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/631/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/631/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/630 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/630/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/630/comments | https://api.github.com/repos/huggingface/datasets/issues/630/events | https://github.com/huggingface/datasets/issues/630 | 701,636,350 | MDU6SXNzdWU3MDE2MzYzNTA= | 630 | Text dataset not working with large files | {
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ksjae",
"id": 17930170,
"login": "ksjae",
"node_id": "MDQ6VXNlcjE3OTMwMTcw",
"organizations_url": "https://api.github.com/users/ksjae/orgs",
"received_events_url": "https://api.github.com/users/ksjae/received_events",
"repos_url": "https://api.github.com/users/ksjae/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksjae/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ksjae"
} | [] | closed | false | null | [] | null | 11 | "2020-09-15T06:02:36Z" | "2020-09-25T22:21:43Z" | "2020-09-25T22:21:43Z" | NONE | null | null | null | ```
Traceback (most recent call last):
File "examples/language-modeling/run_language_modeling.py", line 333, in <module>
main()
File "examples/language-modeling/run_language_modeling.py", line 262, in main
get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
File "examples/language-modeling/run_language_modeling.py", line 144, in get_dataset
dataset = load_dataset("text", data_files=file_path, split='train+test')
File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 469, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 546, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 888, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
File "/home/ksjae/.local/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/home/ksjae/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.py", line 104, in _generate_tables
convert_options=self.config.convert_options,
File "pyarrow/_csv.pyx", line 714, in pyarrow._csv.read_csv
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
```
**pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)**
It gives the same message for both the 200MB and 10GB .txt files, but not for a 700MB file.
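For what it's worth, the "(try to increase block size?)" hint suggests something like the sketch below (hypothetical path; I have not confirmed that this actually helps):
```python
import pyarrow.csv as pac

# Sketch: raise the reader's block size so that a very long line no longer
# straddles two read blocks.
read_options = pac.ReadOptions(column_names=["text"], block_size=100 << 20)  # ~100MB blocks
table = pac.read_csv("large_corpus.txt", read_options=read_options)
```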
I can't upload the files due to size and copyright problems, sorry. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/630/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/630/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/629 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/629/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/629/comments | https://api.github.com/repos/huggingface/datasets/issues/629/events | https://github.com/huggingface/datasets/issues/629 | 701,517,550 | MDU6SXNzdWU3MDE1MTc1NTA= | 629 | straddling object straddles two block boundaries | {
"avatar_url": "https://avatars.githubusercontent.com/u/17970177?v=4",
"events_url": "https://api.github.com/users/bharaniabhishek123/events{/privacy}",
"followers_url": "https://api.github.com/users/bharaniabhishek123/followers",
"following_url": "https://api.github.com/users/bharaniabhishek123/following{/other_user}",
"gists_url": "https://api.github.com/users/bharaniabhishek123/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bharaniabhishek123",
"id": 17970177,
"login": "bharaniabhishek123",
"node_id": "MDQ6VXNlcjE3OTcwMTc3",
"organizations_url": "https://api.github.com/users/bharaniabhishek123/orgs",
"received_events_url": "https://api.github.com/users/bharaniabhishek123/received_events",
"repos_url": "https://api.github.com/users/bharaniabhishek123/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bharaniabhishek123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bharaniabhishek123/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bharaniabhishek123"
} | [] | closed | false | null | [] | null | 1 | "2020-09-15T00:30:46Z" | "2020-09-15T00:36:17Z" | "2020-09-15T00:32:17Z" | NONE | null | null | null | I am trying to read json data (it's an array with lots of dictionaries) and getting block boundaries issue as below :
I tried calling read_json with ReadOptions but no luck.
```
table = json.read_json(fn)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pyarrow/_json.pyx", line 246, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
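For reference, the ReadOptions attempt looked roughly like this (a sketch; the file name is made up and the block size just follows the error's hint):
```python
from pyarrow import json

# Sketch: pass a larger block_size so a single large JSON object does not
# straddle two read blocks.
opts = json.ReadOptions(block_size=64 << 20)  # ~64MB blocks
table = json.read_json("data.json", read_options=opts)
```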
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/629/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/629/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/628 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/628/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/628/comments | https://api.github.com/repos/huggingface/datasets/issues/628/events | https://github.com/huggingface/datasets/pull/628 | 701,496,053 | MDExOlB1bGxSZXF1ZXN0NDg2OTQyNzgx | 628 | Update docs links in the contribution guideline | {
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/M-Salti",
"id": 9285264,
"login": "M-Salti",
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"type": "User",
"url": "https://api.github.com/users/M-Salti"
} | [] | closed | false | null | [] | null | 1 | "2020-09-14T23:27:19Z" | "2020-11-02T21:03:23Z" | "2020-09-15T06:19:35Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/628.diff",
"html_url": "https://github.com/huggingface/datasets/pull/628",
"merged_at": "2020-09-15T06:19:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/628.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/628"
} | Fixed the `add a dataset` and `share a dataset` links in the contribution guideline to refer to the new docs website. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/628/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/628/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/627 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/627/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/627/comments | https://api.github.com/repos/huggingface/datasets/issues/627/events | https://github.com/huggingface/datasets/pull/627 | 701,411,661 | MDExOlB1bGxSZXF1ZXN0NDg2ODcxMTg2 | 627 | fix (#619) MLQA features names | {
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/M-Salti",
"id": 9285264,
"login": "M-Salti",
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"type": "User",
"url": "https://api.github.com/users/M-Salti"
} | [] | closed | false | null | [] | null | 0 | "2020-09-14T20:41:59Z" | "2020-11-02T21:04:32Z" | "2020-09-16T06:54:11Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/627.diff",
"html_url": "https://github.com/huggingface/datasets/pull/627",
"merged_at": "2020-09-16T06:54:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/627.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/627"
} | Fixed the feature names as suggested in #619 in the `_generate_examples` and `_info` methods of the MLQA loading script, and also changed the names in the `dataset_infos.json` file. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/627/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/627/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/626 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/626/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/626/comments | https://api.github.com/repos/huggingface/datasets/issues/626/events | https://github.com/huggingface/datasets/pull/626 | 701,352,605 | MDExOlB1bGxSZXF1ZXN0NDg2ODIzMTY1 | 626 | Update GLUE URLs (now hosted on FB) | {
"avatar_url": "https://avatars.githubusercontent.com/u/57466294?v=4",
"events_url": "https://api.github.com/users/jeswan/events{/privacy}",
"followers_url": "https://api.github.com/users/jeswan/followers",
"following_url": "https://api.github.com/users/jeswan/following{/other_user}",
"gists_url": "https://api.github.com/users/jeswan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jeswan",
"id": 57466294,
"login": "jeswan",
"node_id": "MDQ6VXNlcjU3NDY2Mjk0",
"organizations_url": "https://api.github.com/users/jeswan/orgs",
"received_events_url": "https://api.github.com/users/jeswan/received_events",
"repos_url": "https://api.github.com/users/jeswan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jeswan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeswan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jeswan"
} | [] | closed | false | null | [] | null | 0 | "2020-09-14T19:05:39Z" | "2020-09-16T06:53:18Z" | "2020-09-16T06:53:18Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/626.diff",
"html_url": "https://github.com/huggingface/datasets/pull/626",
"merged_at": "2020-09-16T06:53:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/626.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/626"
} | NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112.
Note: rebased on huggingface/datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/626/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/626/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/625 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/625/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/625/comments | https://api.github.com/repos/huggingface/datasets/issues/625/events | https://github.com/huggingface/datasets/issues/625 | 701,057,799 | MDU6SXNzdWU3MDEwNTc3OTk= | 625 | dtype of tensors should be preserved | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy"
} | [] | closed | false | null | [] | null | 9 | "2020-09-14T12:38:05Z" | "2021-08-17T08:30:04Z" | "2021-08-17T08:30:04Z" | CONTRIBUTOR | null | null | null | After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-have-the-same-dtype-float32/96221)).
As a user I did not expect this bug. I have a `map` function that I call on the Dataset that looks like this:
```python
def preprocess(sentences: List[str]):
token_ids = [[vocab.to_index(t) for t in s.split()] for s in sentences]
sembeddings = stransformer.encode(sentences)
print(sembeddings.dtype)
return {"input_ids": token_ids, "sembedding": sembeddings}
```
Given a list of `sentences` (`List[str]`), it converts those into token_ids on the one hand (list of lists of ints; `List[List[int]]`) and into sentence embeddings on the other (Tensor of dtype `torch.float32`). That means that I actually set the column "sembedding" to a tensor that I as a user expect to be a float32.
It appears, though, that behind the scenes this tensor is converted into a **list**. I did not find this documented anywhere, but I might have missed it. From a user's perspective this is incredibly important, because it means you cannot do any data_type or tensor casting yourself in a mapping function! Furthermore, this can lead to issues, as in my case.
My model expected float32 precision, which I thought `sembedding` was because that is what `stransformer.encode` outputs. But behind the scenes this tensor is first cast to a list, and when we then set its format, as below, this column is cast not to float32 but to double precision float64.
```python
dataset.set_format(type="torch", columns=["input_ids", "sembedding"])
```
This happens because apparently there is an intermediate step of casting to a **numpy** array (?) **whose dtype creation/deduction is different from torch dtypes** (see the snippet below). As you can see, this means that the dtype is not preserved: if I got it right, the dataset goes from torch.float32 -> list -> float64 (numpy) -> torch.float64.
```python
import torch
import numpy as np
l = [-0.03010837361216545, -0.035979013890028, -0.016949838027358055]
torch_tensor = torch.tensor(l)
np_array = np.array(l)
np_to_torch = torch.from_numpy(np_array)
print(torch_tensor.dtype)
# torch.float32
print(np_array.dtype)
# float64
print(np_to_torch.dtype)
# torch.float64
```
This might lead to unwanted behaviour. I understand that the whole library is probably built around casting from numpy to other frameworks, so this might be difficult to solve. Perhaps `set_format` should include a `dtypes` option where for each input column the user can specify the wanted precision.
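For concreteness, the manual cast mentioned below would look roughly like this (a sketch only, assuming the torch format from above is already set):
```python
dataset.set_format(type="torch", columns=["input_ids", "sembedding"])

batch = dataset[:32]
# Behind the scenes the column comes back as torch.float64 after the
# list -> numpy -> torch round trip, so restore single precision by hand.
sembedding = batch["sembedding"].float()  # torch.float64 -> torch.float32
```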
The alternative is that the user needs to cast manually after loading data from the dataset but that does not seem user-friendly, makes the dataset less portable, and might use more space in memory as well as on disk than is actually needed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/625/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/625/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/624/comments | https://api.github.com/repos/huggingface/datasets/issues/624/events | https://github.com/huggingface/datasets/issues/624 | 700,541,628 | MDU6SXNzdWU3MDA1NDE2Mjg= | 624 | Add learningq dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17561003?v=4",
"events_url": "https://api.github.com/users/krrishdholakia/events{/privacy}",
"followers_url": "https://api.github.com/users/krrishdholakia/followers",
"following_url": "https://api.github.com/users/krrishdholakia/following{/other_user}",
"gists_url": "https://api.github.com/users/krrishdholakia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/krrishdholakia",
"id": 17561003,
"login": "krrishdholakia",
"node_id": "MDQ6VXNlcjE3NTYxMDAz",
"organizations_url": "https://api.github.com/users/krrishdholakia/orgs",
"received_events_url": "https://api.github.com/users/krrishdholakia/received_events",
"repos_url": "https://api.github.com/users/krrishdholakia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/krrishdholakia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krrishdholakia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/krrishdholakia"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | [] | null | 0 | "2020-09-13T10:20:27Z" | "2020-09-14T09:50:02Z" | null | NONE | null | null | null | Hi,
Thank you again for this amazing repo.
Would it be possible for y'all to add the LearningQ dataset (https://github.com/AngusGLChen/LearningQ)?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/624/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/624/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/623 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/623/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/623/comments | https://api.github.com/repos/huggingface/datasets/issues/623/events | https://github.com/huggingface/datasets/issues/623 | 700,235,308 | MDU6SXNzdWU3MDAyMzUzMDg= | 623 | Custom feature types in `load_dataset` from CSV | {
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lvwerra",
"id": 8264887,
"login": "lvwerra",
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lvwerra"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 7 | "2020-09-12T13:21:34Z" | "2020-09-30T19:51:43Z" | "2020-09-30T08:39:54Z" | MEMBER | null | null | null | I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotion dataset. To get the data you can use the following code:
```Python
from pathlib import Path
import wget
EMOTION_PATH = Path("./data/emotion")
DOWNLOAD_URLS = [
"https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1",
"https://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt?dl=1",
"https://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt?dl=1",
]
if not Path.is_dir(EMOTION_PATH):
Path.mkdir(EMOTION_PATH)
for url in DOWNLOAD_URLS:
wget.download(url, str(EMOTION_PATH))
```
The first five lines of the train set are:
```
i didnt feel humiliated;sadness
i can go from feeling so hopeless to so damned hopeful just from being around someone who cares and is awake;sadness
im grabbing a minute to post i feel greedy wrong;anger
i am ever feeling nostalgic about the fireplace i will know that it is still on the property;love
i am feeling grouchy;anger
```
Here is the code to reproduce the issue:
```Python
from datasets import Features, Value, ClassLabel, load_dataset
class_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]
emotion_features = Features({'text': Value('string'), 'label': ClassLabel(names=class_names)})
file_dict = {'train': EMOTION_PATH/'train.txt'}
dataset = load_dataset('csv', data_files=file_dict, delimiter=';', column_names=['text', 'label'], features=emotion_features)
```
**Observed behaviour:**
```Python
dataset['train'].features
```
```Python
{'text': Value(dtype='string', id=None),
'label': Value(dtype='string', id=None)}
```
**Expected behaviour:**
```Python
dataset['train'].features
```
```Python
{'text': Value(dtype='string', id=None),
'label': ClassLabel(num_classes=6, names=['sadness', 'joy', 'love', 'anger', 'fear', 'surprise'], names_file=None, id=None)}
```
**Things I've tried:**
- deleting the cache
- trying other types such as `int64`
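A workaround I am considering (sketch only; I am assuming `map` accepts a `features` argument to attach the ClassLabel type, which may not be the intended approach):
```Python
label_feature = emotion_features["label"]

def to_label_id(example):
    # ClassLabel.str2int maps "sadness" -> 0, "joy" -> 1, ...
    return {"label": label_feature.str2int(example["label"])}

dataset["train"] = dataset["train"].map(to_label_id, features=emotion_features)
```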
Am I missing anything? Thanks for any pointer in the right direction. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/623/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/623/timeline | null | completed | false |