url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2037/comments | https://api.github.com/repos/huggingface/datasets/issues/2037/events | https://github.com/huggingface/datasets/pull/2037 | 829,919,685 | MDExOlB1bGxSZXF1ZXN0NTkxNTA4MTQz | 2,037 | Fix: Wikipedia - save memory by replacing root.clear with elem.clear | {
"avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4",
"events_url": "https://api.github.com/users/miyamonz/events{/privacy}",
"followers_url": "https://api.github.com/users/miyamonz/followers",
"following_url": "https://api.github.com/users/miyamonz/following{/other_user}",
"gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/miyamonz",
"id": 6331508,
"login": "miyamonz",
"node_id": "MDQ6VXNlcjYzMzE1MDg=",
"organizations_url": "https://api.github.com/users/miyamonz/orgs",
"received_events_url": "https://api.github.com/users/miyamonz/received_events",
"repos_url": "https://api.github.com/users/miyamonz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/miyamonz"
} | [] | closed | false | null | [] | null | 1 | "2021-03-12T09:22:00Z" | "2021-03-23T06:08:16Z" | "2021-03-16T11:01:22Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2037.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2037",
"merged_at": "2021-03-16T11:01:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2037.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2037"
} | see: https://github.com/huggingface/datasets/issues/2031
What I did:
- replaced `root.clear` with `elem.clear`
- removed the lines that fetch the root element
- ran `$ make style`
- ran `$ make test`
- some tests required extra pip packages, so I installed them.
The test results on origin/master and on my branch are the same, so I don't think the failure below is related to my modification.
```
==================================================================================== short test summary info ====================================================================================
FAILED tests/test_arrow_writer.py::TypedSequenceTest::test_catch_overflow - AssertionError: OverflowError not raised
============================================================= 1 failed, 2332 passed, 5138 skipped, 70 warnings in 91.75s (0:01:31) ==============================================================
make: *** [Makefile:19: test] Error 1
```
Is there anything else I should do? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2037/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2037/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2036/comments | https://api.github.com/repos/huggingface/datasets/issues/2036/events | https://github.com/huggingface/datasets/issues/2036 | 829,909,258 | MDU6SXNzdWU4Mjk5MDkyNTg= | 2,036 | Cannot load wikitext | {
"avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4",
"events_url": "https://api.github.com/users/Gpwner/events{/privacy}",
"followers_url": "https://api.github.com/users/Gpwner/followers",
"following_url": "https://api.github.com/users/Gpwner/following{/other_user}",
"gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Gpwner",
"id": 19349207,
"login": "Gpwner",
"node_id": "MDQ6VXNlcjE5MzQ5MjA3",
"organizations_url": "https://api.github.com/users/Gpwner/orgs",
"received_events_url": "https://api.github.com/users/Gpwner/received_events",
"repos_url": "https://api.github.com/users/Gpwner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Gpwner"
} | [] | closed | false | null | [] | null | 1 | "2021-03-12T09:09:39Z" | "2021-03-15T08:45:02Z" | "2021-03-15T08:44:44Z" | NONE | null | null | null | when I execute these codes
```
>>> from datasets import load_dataset
>>> test_dataset = load_dataset("wikitext")
```
I got an error, any help?
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 487, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/wikitext/wikitext.py
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2036/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2036/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2035/comments | https://api.github.com/repos/huggingface/datasets/issues/2035/events | https://github.com/huggingface/datasets/issues/2035 | 829,475,544 | MDU6SXNzdWU4Mjk0NzU1NDQ= | 2,035 | wiki40b/wikipedia for almost all languages cannot be downloaded | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | open | false | null | [] | null | 10 | "2021-03-11T19:54:54Z" | "2021-03-16T14:53:37Z" | null | NONE | null | null | null | Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting the error below for almost all languages except English. @lhoestq I would be grateful if you could assist me with this.
I really need the majority of languages in this dataset to be able to train my models for a deadline, and your great, scalable, well-written library is my only hope to train the models at scale while being low on resources.
thank you very much.
```
(fast) dara@vgne046:/user/dara/dev/codes/seq2seq$ python test_data.py
Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to temp/dara/cache_home_2/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...
Traceback (most recent call last):
File "test_data.py", line 3, in <module>
dataset = load_dataset("wiki40b", "cs")
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset
use_auth_token=use_auth_token,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 579, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 1105, in _download_and_prepare
import apache_beam as beam
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/__init__.py", line 96, in <module>
from apache_beam import io
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/__init__.py", line 23, in <module>
from apache_beam.io.avroio import *
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/avroio.py", line 55, in <module>
import avro
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 668, in _load_unlocked
File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 34, in <module>
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 30, in LoadResource
NotADirectoryError: [Errno 20] Not a directory: '/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/VERSION.txt'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2035/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2035/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2034/comments | https://api.github.com/repos/huggingface/datasets/issues/2034/events | https://github.com/huggingface/datasets/pull/2034 | 829,381,388 | MDExOlB1bGxSZXF1ZXN0NTkxMDU2MTEw | 2,034 | Fix typo | {
"avatar_url": "https://avatars.githubusercontent.com/u/3413464?v=4",
"events_url": "https://api.github.com/users/pcyin/events{/privacy}",
"followers_url": "https://api.github.com/users/pcyin/followers",
"following_url": "https://api.github.com/users/pcyin/following{/other_user}",
"gists_url": "https://api.github.com/users/pcyin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pcyin",
"id": 3413464,
"login": "pcyin",
"node_id": "MDQ6VXNlcjM0MTM0NjQ=",
"organizations_url": "https://api.github.com/users/pcyin/orgs",
"received_events_url": "https://api.github.com/users/pcyin/received_events",
"repos_url": "https://api.github.com/users/pcyin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pcyin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcyin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pcyin"
} | [] | closed | false | null | [] | null | 0 | "2021-03-11T17:46:13Z" | "2021-03-11T18:06:25Z" | "2021-03-11T18:06:25Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2034.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2034",
"merged_at": "2021-03-11T18:06:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2034.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2034"
} | Change `ENV_XDG_CACHE_HOME ` to `XDG_CACHE_HOME ` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2034/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2034/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2033/comments | https://api.github.com/repos/huggingface/datasets/issues/2033/events | https://github.com/huggingface/datasets/pull/2033 | 829,295,339 | MDExOlB1bGxSZXF1ZXN0NTkwOTgzMDAy | 2,033 | Raise an error for outdated sacrebleu versions | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-03-11T16:08:00Z" | "2021-03-11T17:58:12Z" | "2021-03-11T17:58:12Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2033.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2033",
"merged_at": "2021-03-11T17:58:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2033.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2033"
} | The `sacrebleu` metric seems to only work with sacrebleu>=1.4.12.
For example, with sacrebleu==1.2.10, an error is raised (from metric/sacrebleu/sacrebleu.py):
```python
def _compute(
self,
predictions,
references,
smooth_method="exp",
smooth_value=None,
force=False,
lowercase=False,
tokenize=scb.DEFAULT_TOKENIZER,
use_effective_order=False,
):
references_per_prediction = len(references[0])
if any(len(refs) != references_per_prediction for refs in references):
raise ValueError("Sacrebleu requires the same number of references for each prediction")
transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)]
> output = scb.corpus_bleu(
sys_stream=predictions,
ref_streams=transformed_references,
smooth_method=smooth_method,
smooth_value=smooth_value,
force=force,
lowercase=lowercase,
tokenize=tokenize,
use_effective_order=use_effective_order,
)
E TypeError: corpus_bleu() got an unexpected keyword argument 'smooth_method'
/mnt/cache/modules/datasets_modules/metrics/sacrebleu/b390045b3d1dd4abf6a95c4a2a11ee3bcc2b7620b076204d0ddc353fa649fd86/sacrebleu.py:114: TypeError
```
I improved the error message when users have an outdated version of sacrebleu.
The new error message tells the user to update sacrebleu.
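For illustration, a version guard along these lines could look like the following sketch (the exact check and message in this PR may differ):
```python
from packaging import version

import sacrebleu as scb

# Hypothetical sketch of the version guard described above, not the actual code.
if version.parse(scb.__version__) < version.parse("1.4.12"):
    raise ImportWarning(
        "To use `sacrebleu`, the module `sacrebleu>=1.4.12` is required. "
        "You can update it with `pip install -U sacrebleu`."
    )
```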
cc @LysandreJik | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2033/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2033/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2032 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2032/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2032/comments | https://api.github.com/repos/huggingface/datasets/issues/2032/events | https://github.com/huggingface/datasets/issues/2032 | 829,250,912 | MDU6SXNzdWU4MjkyNTA5MTI= | 2,032 | Use Arrow filtering instead of writing a new arrow file for Dataset.filter | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
}
] | null | 1 | "2021-03-11T15:18:50Z" | "2024-01-19T13:26:32Z" | "2024-01-19T13:26:32Z" | MEMBER | null | null | null | Currently the filter method reads the dataset batch by batch to write a new, filtered, arrow file on disk. Therefore all the reading + writing can take some time.
Using a mask directly on the arrow table doesn't do any read or write operation, and is therefore significantly quicker.
I think there are two cases:
- if the dataset doesn't have an indices mapping, then one can simply use the arrow filtering on the main arrow table `dataset._data.filter(...)`
- if the dataset has an indices mapping, then the mask should be applied on the indices mapping table `dataset._indices.filter(...)`
The indices mapping is used to map between the idx at `dataset[idx]` in `__getitem__` and the idx in the actual arrow table.
The new filter method should therefore be faster, and allow users to pass either a filtering function (that returns a boolean given an example), or directly a mask.
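As a rough illustration of the speed argument, here is a minimal pyarrow-only sketch (toy data, not the `datasets` implementation):
```python
import pyarrow as pa
import pyarrow.compute as pc

# Toy table standing in for dataset._data: filtering with a boolean mask
# happens entirely in memory; no new arrow file is written to disk.
table = pa.table({"text": ["a", "b", "c"], "label": [0, 1, 1]})
mask = pc.equal(table.column("label"), 1)
filtered = table.filter(mask)
print(filtered.num_rows)  # 2
```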
Feel free to discuss this idea in this thread :)
One additional note: the refactor at #2025 would make all the pickle-related stuff work directly with the arrow filtering, so that we only need to change the Dataset.filter method without having to deal with pickle.
cc @theo-m @gchhablani
related issues: #1796 #1949 | {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2032/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2032/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2031/comments | https://api.github.com/repos/huggingface/datasets/issues/2031/events | https://github.com/huggingface/datasets/issues/2031 | 829,122,778 | MDU6SXNzdWU4MjkxMjI3Nzg= | 2,031 | wikipedia.py generator that extracts XML doesn't release memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4",
"events_url": "https://api.github.com/users/miyamonz/events{/privacy}",
"followers_url": "https://api.github.com/users/miyamonz/followers",
"following_url": "https://api.github.com/users/miyamonz/following{/other_user}",
"gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/miyamonz",
"id": 6331508,
"login": "miyamonz",
"node_id": "MDQ6VXNlcjYzMzE1MDg=",
"organizations_url": "https://api.github.com/users/miyamonz/orgs",
"received_events_url": "https://api.github.com/users/miyamonz/received_events",
"repos_url": "https://api.github.com/users/miyamonz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/miyamonz"
} | [] | closed | false | null | [] | null | 2 | "2021-03-11T12:51:24Z" | "2021-03-22T08:33:52Z" | "2021-03-22T08:33:52Z" | CONTRIBUTOR | null | null | null | I tried downloading Japanese Wikipedia, but it always failed, most likely due to running out of memory.
I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop.
https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L464-L502
`root.clear()` is intended to free memory, but it doesn't.
https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L490
https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L494
I replaced them with `elem.clear()`, then it seems to work correctly.
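For illustration, here is a minimal sketch of the streaming-parse pattern in question (simplified toy code, not the actual wikipedia.py implementation):
```python
import xml.etree.ElementTree as ET

def iter_pages(xml_path):
    """Stream <page> elements from a dump without keeping the whole tree."""
    for _event, elem in ET.iterparse(xml_path, events=("end",)):
        if elem.tag.endswith("page"):
            yield ET.tostring(elem, encoding="unicode")
            # Clearing the just-processed element (rather than the root)
            # releases its children immediately, keeping memory usage flat.
            elem.clear()
```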
Here is a notebook to reproduce it:
https://gist.github.com/miyamonz/dc06117302b6e85fa51cbf46dde6bb51#file-xtract_content-ipynb | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2031/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2031/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2030 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2030/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2030/comments | https://api.github.com/repos/huggingface/datasets/issues/2030/events | https://github.com/huggingface/datasets/pull/2030 | 829,110,803 | MDExOlB1bGxSZXF1ZXN0NTkwODI4NzQ4 | 2,030 | Implement Dataset from text | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 1 | "2021-03-11T12:34:50Z" | "2021-03-18T13:29:29Z" | "2021-03-18T13:29:29Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2030.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2030",
"merged_at": "2021-03-18T13:29:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2030.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2030"
} | Implement `Dataset.from_text`.
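A hedged usage sketch (assuming the new constructor mirrors `Dataset.from_csv`/`Dataset.from_json` and yields one example per line in a `text` column):
```python
from datasets import Dataset

# Assumed usage: each line of the file becomes one example in a "text" column.
ds = Dataset.from_text("my_corpus.txt")
print(ds.column_names)  # expected: ['text']
```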
Analogue to #1943, #1946. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2030/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2030/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2029/comments | https://api.github.com/repos/huggingface/datasets/issues/2029/events | https://github.com/huggingface/datasets/issues/2029 | 829,097,290 | MDU6SXNzdWU4MjkwOTcyOTA= | 2,029 | Loading a faiss index KeyError | {
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nbroad1881",
"id": 24982805,
"login": "nbroad1881",
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nbroad1881"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | 4 | "2021-03-11T12:16:13Z" | "2021-03-12T00:21:09Z" | "2021-03-12T00:21:09Z" | NONE | null | null | null | I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a file
5. Create a new dataset (dataset2) with the same text and label information as dataset1
6. Try to load the faiss index from file to dataset2
7. Get `KeyError: "Column embeddings not in the dataset"`
I've made a colab notebook that should show exactly what I did. Please switch to GPU runtime; I didn't check on CPU.
https://colab.research.google.com/drive/1X0S9ZuZ8k0ybcoei4w7so6dS_WrABmIx?usp=sharing
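A condensed, untested sketch of the failing sequence (toy data and filenames; the notebook above has the full version, where the embeddings are float32 DPR vectors):
```python
from datasets import Dataset

# dataset1: text plus an "embeddings" column (toy values here)
ds1 = Dataset.from_dict({"text": ["a", "b"], "embeddings": [[0.1, 0.2], [0.3, 0.4]]})
ds1.add_faiss_index(column="embeddings")
ds1.save_faiss_index("embeddings", "my_index.faiss")

# dataset2: same text, but no "embeddings" column
ds2 = Dataset.from_dict({"text": ["a", "b"]})
ds2.load_faiss_index("embeddings", "my_index.faiss")
# -> KeyError: "Column embeddings not in the dataset" (as reported above)
```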
Ubuntu Version
VERSION="18.04.5 LTS (Bionic Beaver)"
datasets==1.4.1
faiss==1.5.3
faiss-gpu==1.7.0
torch==1.8.0+cu101
transformers==4.3.3
NVIDIA-SMI 460.56
Driver Version: 460.32.03
CUDA Version: 11.2
Tesla K80
I was basically following the steps here: https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index
I included the exact code from the documentation at the end of the notebook to show that it doesn't work either.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2029/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2029/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2028 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2028/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2028/comments | https://api.github.com/repos/huggingface/datasets/issues/2028/events | https://github.com/huggingface/datasets/pull/2028 | 828,721,393 | MDExOlB1bGxSZXF1ZXN0NTkwNDk1NzEx | 2,028 | Adding PersiNLU reading-comprehension | {
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/danyaljj",
"id": 2441454,
"login": "danyaljj",
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/danyaljj"
} | [] | closed | false | null | [] | null | 3 | "2021-03-11T04:41:13Z" | "2021-03-15T09:39:57Z" | "2021-03-15T09:39:57Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2028.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2028",
"merged_at": "2021-03-15T09:39:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2028.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2028"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2028/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2028/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2027 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2027/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2027/comments | https://api.github.com/repos/huggingface/datasets/issues/2027/events | https://github.com/huggingface/datasets/pull/2027 | 828,490,444 | MDExOlB1bGxSZXF1ZXN0NTkwMjkzNDA1 | 2,027 | Update format columns in Dataset.rename_columns | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 0 | "2021-03-10T23:50:59Z" | "2021-03-11T14:38:40Z" | "2021-03-11T14:38:40Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2027.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2027",
"merged_at": "2021-03-11T14:38:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2027.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2027"
} | Fixes #2026 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2027/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2027/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2026 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2026/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2026/comments | https://api.github.com/repos/huggingface/datasets/issues/2026/events | https://github.com/huggingface/datasets/issues/2026 | 828,194,467 | MDU6SXNzdWU4MjgxOTQ0Njc= | 2,026 | KeyError on using map after renaming a column | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | 3 | "2021-03-10T18:54:17Z" | "2021-03-11T14:39:34Z" | "2021-03-11T14:38:40Z" | CONTRIBUTOR | null | null | null | Hi,
I'm trying to use the `cifar10` dataset. I want to rename the `img` feature to `image` to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying the `prepare_train_features` function.
Here is what I try:
```python
from torchvision.transforms import Compose, Normalize, ToPILImage, ToTensor

import datasets
from datasets import load_dataset

transform = Compose([ToPILImage(), ToTensor(), Normalize([0.0, 0.0, 0.0], [1.0, 1.0, 1.0])])
def prepare_features(examples):
images = []
labels = []
print(examples)
for example_idx, example in enumerate(examples["image"]):
if transform is not None:
images.append(transform(examples["image"][example_idx].permute(2,0,1)))
else:
images.append(examples["image"][example_idx].permute(2,0,1))
labels.append(examples["label"][example_idx])
output = {"label":labels, "image":images}
return output
raw_dataset = load_dataset('cifar10')
raw_dataset.set_format('torch',columns=['img','label'])
raw_dataset = raw_dataset.rename_column('img','image')
features = datasets.Features({
"image": datasets.Array3D(shape=(3,32,32),dtype="float32"),
"label": datasets.features.ClassLabel(names=[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
]),
})
train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)
```
The error:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-54-bf29672c53ee> in <module>()
14 ]),
15 })
---> 16 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)
2 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1287 test_inputs = self[:2] if batched else self[0]
1288 test_indices = [0, 1] if batched else 0
-> 1289 update_data = does_function_return_dict(test_inputs, test_indices)
1290 logger.info("Testing finished, running the mapping function on the dataset")
1291
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices)
1258 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]
1259 processed_inputs = (
-> 1260 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1261 )
1262 does_return_dict = isinstance(processed_inputs, Mapping)
<ipython-input-52-b4dccbafb70d> in prepare_features(examples)
3 labels = []
4 print(examples)
----> 5 for example_idx, example in enumerate(examples["image"]):
6 if transform is not None:
7 images.append(transform(examples["image"][example_idx].permute(2,0,1)))
KeyError: 'image'
```
The print statement inside the function prints this:
```python
{'label': tensor([6, 9])}
```
Apparently, neither `img` nor `image` exists after renaming.
Note that this code works fine with `img` everywhere.
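For what it's worth, a hedged workaround sketch (untested; it simply re-applies the format with the new column names, based on the symptom above where only `label` survives the format filter):
```python
# Hypothetical workaround: refresh the formatted columns so they match the
# new column name; rename_column appears not to update them here.
raw_dataset = raw_dataset.rename_column('img', 'image')
raw_dataset.set_format('torch', columns=['image', 'label'])
```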
Notebook: https://colab.research.google.com/drive/1SzESAlz3BnVYrgQeJ838vbMp1OsukiA2?usp=sharing
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2026/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2026/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2025 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2025/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2025/comments | https://api.github.com/repos/huggingface/datasets/issues/2025/events | https://github.com/huggingface/datasets/pull/2025 | 828,047,476 | MDExOlB1bGxSZXF1ZXN0NTg5ODk2NjMz | 2,025 | [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 16 | "2021-03-10T17:00:47Z" | "2021-03-30T14:46:53Z" | "2021-03-26T16:51:59Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2025.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2025",
"merged_at": "2021-03-26T16:51:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2025.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2025"
} | ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
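As a rough illustration of the blocks idea using plain pyarrow (toy data; the actual ConcatenationTable wrapper adds the per-block pickling behavior on top):
```python
import pyarrow as pa

blocks = [
    pa.table({"text": ["a", "b"]}),  # in practice: in-memory or memory-mapped
    pa.table({"text": ["c"]}),
]
combined = pa.concat_tables(blocks)  # one logical table backed by the blocks
print(combined.num_rows)  # 3
```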
## Implementation details
The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
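For illustration, a minimal sketch of such a replay mechanism (class and method names are invented for this example and are not the actual `datasets.table` API):
```python
import pyarrow as pa

class ReplayingTable:
    """Toy memory-mapped table that re-applies recorded transforms on reload."""

    def __init__(self, path):
        self.path = path
        self.replays = []  # list of (method_name, args) to re-apply
        self.table = self._read()

    def _read(self):
        # Assumes the file is in arrow streaming format.
        source = pa.memory_map(self.path)
        table = pa.ipc.open_stream(source).read_all()
        for name, args in self.replays:
            table = getattr(table, name)(*args)
        return table

    def slice(self, offset, length):
        # Record the transform so it survives a reload from disk.
        self.replays.append(("slice", (offset, length)))
        self.table = self.table.slice(offset, length)

    def reload(self):
        # e.g. after unpickling: re-read the file and replay slice/cast/etc.
        self.table = self._read()
```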
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2025/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2025/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2024/comments | https://api.github.com/repos/huggingface/datasets/issues/2024/events | https://github.com/huggingface/datasets/pull/2024 | 827,842,962 | MDExOlB1bGxSZXF1ZXN0NTg5NzEzNDAy | 2,024 | Remove print statement from mnist.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | 1 | "2021-03-10T14:39:58Z" | "2021-03-11T18:03:52Z" | "2021-03-11T18:03:51Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2024.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2024",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2024.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2024"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2024/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2024/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2023 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2023/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2023/comments | https://api.github.com/repos/huggingface/datasets/issues/2023/events | https://github.com/huggingface/datasets/pull/2023 | 827,819,608 | MDExOlB1bGxSZXF1ZXN0NTg5NjkyNDU2 | 2,023 | Add Romanian to XQuAD | {
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/M-Salti",
"id": 9285264,
"login": "M-Salti",
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"type": "User",
"url": "https://api.github.com/users/M-Salti"
} | [] | closed | false | null | [] | null | 4 | "2021-03-10T14:24:32Z" | "2021-03-15T10:08:17Z" | "2021-03-15T10:08:17Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2023.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2023",
"merged_at": "2021-03-15T10:08:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2023.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2023"
} | On Jan 18, XQuAD was updated with a new Romanian validation file ([xquad commit link](https://github.com/deepmind/xquad/commit/60cac411649156efb6aab9dd4c9cde787a2c0345))
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2023/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2023/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2022/comments | https://api.github.com/repos/huggingface/datasets/issues/2022/events | https://github.com/huggingface/datasets/issues/2022 | 827,435,033 | MDU6SXNzdWU4Mjc0MzUwMzM= | 2,022 | ValueError when rename_column on splitted dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4",
"events_url": "https://api.github.com/users/simonschoe/events{/privacy}",
"followers_url": "https://api.github.com/users/simonschoe/followers",
"following_url": "https://api.github.com/users/simonschoe/following{/other_user}",
"gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/simonschoe",
"id": 53626067,
"login": "simonschoe",
"node_id": "MDQ6VXNlcjUzNjI2MDY3",
"organizations_url": "https://api.github.com/users/simonschoe/orgs",
"received_events_url": "https://api.github.com/users/simonschoe/received_events",
"repos_url": "https://api.github.com/users/simonschoe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/simonschoe"
} | [] | closed | false | null | [] | null | 2 | "2021-03-10T09:40:38Z" | "2021-03-16T14:06:08Z" | "2021-03-16T14:05:05Z" | NONE | null | null | null | Hi there,
I am loading a `.tsv` file via `load_dataset` and subsequently splitting the rows into training and test sets via the `ReadInstruction` API, like so:
```python
from datasets import ReadInstruction, load_dataset

split = {
    'train': ReadInstruction('train', to=90, unit='%'),
    'test': ReadInstruction('train', from_=-10, unit='%')
}
dataset = load_dataset(
    path='csv',             # use the 'csv' loading script to load from local tsv files
    delimiter='\t',         # tab-separated values
    data_files=text_files,  # list of paths to local text files
    split=split,            # read instructions defined above
)
dataset
```
Part of output:
```python
DatasetDict({
train: Dataset({
features: ['sentence', 'sentiment'],
num_rows: 900
})
test: Dataset({
features: ['sentence', 'sentiment'],
num_rows: 100
})
})
```
Afterwards, I'd like to rename the 'sentence' column to 'text' in order to be compatible with my modeling pipeline. However, if I run the following code, I get a `ValueError`:
```python
dataset['train'].rename_column('sentence', 'text')
```
```python
/usr/local/lib/python3.7/dist-packages/datasets/splits.py in __init__(self, name)
353 for split_name in split_names_from_instruction:
354 if not re.match(_split_re, split_name):
--> 355 raise ValueError(f"Split name should match '{_split_re}'' but got '{split_name}'.")
356
357 def __str__(self):
ValueError: Split name should match '^\w+(\.\w+)*$'' but got 'ReadInstruction('.
```
In particular, this behavior does not arise if I use the deprecated `rename_column_` method. Any idea what causes the error? I would assume it's something in the way I defined the split.
Thanks in advance! :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2022/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2022/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2021/comments | https://api.github.com/repos/huggingface/datasets/issues/2021/events | https://github.com/huggingface/datasets/issues/2021 | 826,988,016 | MDU6SXNzdWU4MjY5ODgwMTY= | 2,021 | Interactively doing save_to_disk and load_from_disk corrupts the datasets object? | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
} | [] | closed | false | null | [] | null | 1 | "2021-03-10T02:48:34Z" | "2021-03-13T10:07:41Z" | "2021-03-13T10:07:41Z" | NONE | null | null | null | dataset_info.json file saved after using save_to_disk gets corrupted as follows.

Is there a way to disable the cache that saves to /tmp/huggingface/datasets?
I have a feeling there is a serious issue with caching. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2021/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2021/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2020/comments | https://api.github.com/repos/huggingface/datasets/issues/2020/events | https://github.com/huggingface/datasets/pull/2020 | 826,961,126 | MDExOlB1bGxSZXF1ZXN0NTg4OTE3MjYx | 2,020 | Remove unnecessary docstart check in conll-like datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 0 | "2021-03-10T02:20:16Z" | "2021-03-11T13:33:37Z" | "2021-03-11T13:33:37Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2020.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2020",
"merged_at": "2021-03-11T13:33:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2020.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2020"
} | Related to this PR: #1998
Additionally, this PR adds the docstart note to the conll2002 dataset card ([link](https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/ned.train) to the raw data with `DOCSTART` lines).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2020/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2020/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2019 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2019/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2019/comments | https://api.github.com/repos/huggingface/datasets/issues/2019/events | https://github.com/huggingface/datasets/pull/2019 | 826,625,706 | MDExOlB1bGxSZXF1ZXN0NTg4NjEyODgy | 2,019 | Replace print with logging in dataset scripts | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 2 | "2021-03-09T20:59:34Z" | "2021-03-12T10:09:01Z" | "2021-03-11T16:14:19Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2019.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2019",
"merged_at": "2021-03-11T16:14:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2019.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2019"
} | Replaces `print(...)` in the dataset scripts with the library logger. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2019/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2019/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2018/comments | https://api.github.com/repos/huggingface/datasets/issues/2018/events | https://github.com/huggingface/datasets/pull/2018 | 826,473,764 | MDExOlB1bGxSZXF1ZXN0NTg4NDc0NTQz | 2,018 | Md gender card update | {
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mcmillanmajora",
"id": 26722925,
"login": "mcmillanmajora",
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mcmillanmajora"
} | [] | closed | false | null | [] | null | 3 | "2021-03-09T18:57:20Z" | "2021-03-12T17:31:00Z" | "2021-03-12T17:31:00Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2018.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2018",
"merged_at": "2021-03-12T17:31:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2018.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2018"
} | I updated the descriptions of the datasets as they appear in the HF repo and the descriptions of the source datasets according to what I could find from the paper and the references. I'm still a little unclear about some of the fields of the different configs, and there was little info on the word list and name list. I'll contact the authors to see if they have any additional information or suggested changes. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2018/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2018/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2017/comments | https://api.github.com/repos/huggingface/datasets/issues/2017/events | https://github.com/huggingface/datasets/pull/2017 | 826,428,578 | MDExOlB1bGxSZXF1ZXN0NTg4NDMyNDc2 | 2,017 | Add TF-based Features to handle different modes of data | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | 0 | "2021-03-09T18:29:52Z" | "2021-03-17T12:32:08Z" | "2021-03-17T12:32:07Z" | CONTRIBUTOR | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2017.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2017",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2017.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2017"
} | Hi,
I am creating this draft PR to work on adding features similar to [TF datasets](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/core/features). I'll be starting with the `Tensor` and `FeatureConnector` classes and build upon them to add other features as well. This is a work in progress. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2017/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2017/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2016/comments | https://api.github.com/repos/huggingface/datasets/issues/2016/events | https://github.com/huggingface/datasets/pull/2016 | 825,965,493 | MDExOlB1bGxSZXF1ZXN0NTg4MDA5NjEz | 2,016 | Not all languages have 2 digit codes. | {
"avatar_url": "https://avatars.githubusercontent.com/u/13891775?v=4",
"events_url": "https://api.github.com/users/asiddhant/events{/privacy}",
"followers_url": "https://api.github.com/users/asiddhant/followers",
"following_url": "https://api.github.com/users/asiddhant/following{/other_user}",
"gists_url": "https://api.github.com/users/asiddhant/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/asiddhant",
"id": 13891775,
"login": "asiddhant",
"node_id": "MDQ6VXNlcjEzODkxNzc1",
"organizations_url": "https://api.github.com/users/asiddhant/orgs",
"received_events_url": "https://api.github.com/users/asiddhant/received_events",
"repos_url": "https://api.github.com/users/asiddhant/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/asiddhant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asiddhant/subscriptions",
"type": "User",
"url": "https://api.github.com/users/asiddhant"
} | [] | closed | false | null | [] | null | 0 | "2021-03-09T13:53:39Z" | "2021-03-11T18:01:03Z" | "2021-03-11T18:01:03Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2016.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2016",
"merged_at": "2021-03-11T18:01:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2016.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2016"
} | . | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2016/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2016/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2015/comments | https://api.github.com/repos/huggingface/datasets/issues/2015/events | https://github.com/huggingface/datasets/pull/2015 | 825,942,108 | MDExOlB1bGxSZXF1ZXN0NTg3OTg4NTQ0 | 2,015 | Fix ipython function creation in tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-03-09T13:36:59Z" | "2021-03-09T14:06:04Z" | "2021-03-09T14:06:03Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2015.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2015",
"merged_at": "2021-03-09T14:06:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2015.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2015"
} | The test at `tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function` was failing in Python 3.8 because the IPython function was not properly created.
Fix #2010 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2015/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2015/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2014/comments | https://api.github.com/repos/huggingface/datasets/issues/2014/events | https://github.com/huggingface/datasets/pull/2014 | 825,916,531 | MDExOlB1bGxSZXF1ZXN0NTg3OTY1NDg3 | 2,014 | more explicit method parameters | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [] | closed | false | null | [] | null | 0 | "2021-03-09T13:18:29Z" | "2021-03-10T10:08:37Z" | "2021-03-10T10:08:36Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2014.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2014",
"merged_at": "2021-03-10T10:08:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2014.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2014"
} | re: #2009
Not super convinced this is better, and while I usually fight against kwargs, here it seems to me that it better conveys the relationship to the `_split_generators` method. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2014/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2014/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2013/comments | https://api.github.com/repos/huggingface/datasets/issues/2013/events | https://github.com/huggingface/datasets/pull/2013 | 825,694,305 | MDExOlB1bGxSZXF1ZXN0NTg3NzYzMTgx | 2,013 | Add Cryptonite dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [] | closed | false | null | [] | null | 0 | "2021-03-09T10:32:11Z" | "2021-03-09T19:27:07Z" | "2021-03-09T19:27:06Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2013.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2013",
"merged_at": "2021-03-09T19:27:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2013.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2013"
} | cc @aviaefrat, who's the original author of the dataset & paper; see https://github.com/aviaefrat/cryptonite | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2013/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2013/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2012 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2012/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2012/comments | https://api.github.com/repos/huggingface/datasets/issues/2012/events | https://github.com/huggingface/datasets/issues/2012 | 825,634,064 | MDU6SXNzdWU4MjU2MzQwNjQ= | 2,012 | No upstream branch | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 2 | "2021-03-09T09:48:55Z" | "2021-03-09T11:33:31Z" | "2021-03-09T11:33:31Z" | CONTRIBUTOR | null | null | null | Feels like the documentation on adding a new dataset is outdated?
https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54
There is no upstream branch on remote. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2012/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2012/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2011 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2011/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2011/comments | https://api.github.com/repos/huggingface/datasets/issues/2011/events | https://github.com/huggingface/datasets/pull/2011 | 825,621,952 | MDExOlB1bGxSZXF1ZXN0NTg3Njk4MTAx | 2,011 | Add RoSent Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | 0 | "2021-03-09T09:40:08Z" | "2021-03-11T18:00:52Z" | "2021-03-11T18:00:52Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2011.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2011",
"merged_at": "2021-03-11T18:00:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2011.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2011"
} | This PR adds a Romanian sentiment analysis dataset. This PR also closes pending PR #1529.
I had to add an `original_id` feature because the dataset files have repeated IDs. I can remove it if needed. I have also added `id`, which is unique.
Let me know in case of any issues. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2011/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2011/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2010/comments | https://api.github.com/repos/huggingface/datasets/issues/2010/events | https://github.com/huggingface/datasets/issues/2010 | 825,567,635 | MDU6SXNzdWU4MjU1Njc2MzU= | 2,010 | Local testing fails | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 3 | "2021-03-09T09:01:38Z" | "2021-03-09T14:06:03Z" | "2021-03-09T14:06:03Z" | CONTRIBUTOR | null | null | null | I'm following the CI setup as described in
https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19
in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4
and getting
```
FAILED tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function - TypeError: an integer is required (got type bytes)
1 failed, 2321 passed, 5109 skipped, 10 warnings in 124.32s (0:02:04)
```
Seems like a discrepancy with CI, perhaps a lib version that's not controlled?
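To compare environments, I am printing the versions of the libraries that plausibly matter here (a quick sketch; I am assuming `dill` is what dumps the function in the failing caching test):
```python
import sys

import dill
import pyarrow

print(sys.version)        # 3.8.x locally
print(pyarrow.__version__)
print(dill.__version__)   # suspect, since the failing test pickles a function
```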
Tried with `pyarrow=={1.0.0,0.17.1,2.0.0}` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2010/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2010/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2009 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2009/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2009/comments | https://api.github.com/repos/huggingface/datasets/issues/2009/events | https://github.com/huggingface/datasets/issues/2009 | 825,541,366 | MDU6SXNzdWU4MjU1NDEzNjY= | 2,009 | Ambiguous documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
}
] | null | 2 | "2021-03-09T08:42:11Z" | "2021-03-12T15:01:34Z" | "2021-03-12T15:01:34Z" | CONTRIBUTOR | null | null | null | https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158
Looking at the template, I find this documentation line confusing: the method parameters don't include `gen_kwargs`, so I'm unclear where they're coming from.
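For concreteness, here is my current reading of the flow, sketched with a hypothetical builder (the file name and feature below are made up):
```python
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):
    def _split_generators(self, dl_manager):
        # whatever dict is passed as `gen_kwargs` here...
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": "train.txt"},
            ),
        ]

    def _generate_examples(self, filepath):
        # ...arrives here, unpacked as keyword arguments
        with open(filepath, encoding="utf-8") as f:
            for id_, line in enumerate(f):
                yield id_, {"text": line.strip()}
```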
Happy to push a PR with a clearer statement when I understand the meaning. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2009/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2009/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2008/comments | https://api.github.com/repos/huggingface/datasets/issues/2008/events | https://github.com/huggingface/datasets/pull/2008 | 825,153,804 | MDExOlB1bGxSZXF1ZXN0NTg3Mjc1Njk4 | 2,008 | Fix various typos/grammer in the docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 2 | "2021-03-09T01:39:28Z" | "2021-03-15T18:42:49Z" | "2021-03-09T10:21:32Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2008.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2008",
"merged_at": "2021-03-09T10:21:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2008.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2008"
} | This PR:
* fixes various typos/grammar issues I came across while reading the docs
* adds the "Install with conda" installation instructions
Closes #1959 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2008/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2008/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2007 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2007/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2007/comments | https://api.github.com/repos/huggingface/datasets/issues/2007/events | https://github.com/huggingface/datasets/issues/2007 | 824,518,158 | MDU6SXNzdWU4MjQ1MTgxNTg= | 2,007 | How to not load huggingface datasets into memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | 2 | "2021-03-08T12:35:26Z" | "2021-08-04T18:02:25Z" | "2021-08-04T18:02:25Z" | NONE | null | null | null | Hi
I am running this example from the transformers library, version 4.3.3:
(Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box)
USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --task translation_en_to_ro --output_dir /test/test_large --do_train --do_eval --predict_with_generate --max_train_samples 500 --max_val_samples 500 --max_source_length 128 --max_target_length 128 --sortish_sampler --per_device_train_batch_size 8 --val_max_target_length 128 --deepspeed ds_config.json --num_train_epochs 1 --eval_steps 25000 --warmup_steps 500 --overwrite_output_dir
(Here please find the script: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py)
If you do not pass `max_train_samples` in the above command (i.e. if you load the full dataset), I get a memory issue on a GPU with 24 GB of memory.
I need to train a large-scale mT5 model on large-scale datasets such as Wikipedia (several of them concatenated, or other datasets in multiple languages like OPUS). Could you help me with how I can avoid loading the full data into memory, so that the scripts do not depend on the data size?
In the above example, I was hoping the script could work without relying on the dataset size, so I can still train the model without subsampling the training set.
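For what it's worth, here is a minimal sketch of the loading behavior I am hoping for, assuming `keep_in_memory=False` keeps the dataset memory-mapped from disk:
```python
from datasets import load_dataset

# memory-mapped Arrow files on disk, not materialized in RAM
raw_datasets = load_dataset("wmt16", "ro-en", keep_in_memory=False)
print(raw_datasets["train"].cache_files)  # backing files on disk
```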
Thank you so much in advance, @lhoestq, for your great help.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2007/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2007/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2006/comments | https://api.github.com/repos/huggingface/datasets/issues/2006/events | https://github.com/huggingface/datasets/pull/2006 | 824,457,794 | MDExOlB1bGxSZXF1ZXN0NTg2Njg5Nzk2 | 2,006 | Don't gitignore dvc.lock | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-03-08T11:13:08Z" | "2021-03-08T11:28:35Z" | "2021-03-08T11:28:34Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2006.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2006",
"merged_at": "2021-03-08T11:28:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2006.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2006"
} | The benchmark runs are [failing](https://github.com/huggingface/datasets/runs/2055534629?check_suite_focus=true) because of
```
ERROR: 'dvc.lock' is git-ignored.
```
I removed the dvc.lock file from the gitignore to fix that. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2006/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2006/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2005/comments | https://api.github.com/repos/huggingface/datasets/issues/2005/events | https://github.com/huggingface/datasets/issues/2005 | 824,275,035 | MDU6SXNzdWU4MjQyNzUwMzU= | 2,005 | Setting to torch format not working with torchvision and MNIST | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | 9 | "2021-03-08T07:38:11Z" | "2021-03-09T17:58:13Z" | "2021-03-09T17:58:13Z" | CONTRIBUTOR | null | null | null | Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```python
import numpy as np
import torch
from datasets import load_dataset

# `transform` is assumed to be defined above (a torchvision.transforms object)
def prepare_features(examples):
    images = []
    labels = []
    for example_idx, example in enumerate(examples["image"]):
        if transform is not None:
            images.append(transform(
                np.array(examples["image"][example_idx], dtype=np.uint8)
            ))
        else:
            images.append(torch.tensor(np.array(examples["image"][example_idx], dtype=np.uint8)))
        labels.append(torch.tensor(examples["label"][example_idx]))
    output = {"label": labels, "image": images}
    return output

raw_dataset = load_dataset('mnist')
train_dataset = raw_dataset.map(prepare_features, batched=True, batch_size=10000)
train_dataset.set_format("torch", columns=["image", "label"])
```
After this, I check the type of the following:
```python
print(type(train_dataset["train"]["label"]))
print(type(train_dataset["train"]["image"][0]))
```
This leads to the following output:
```python
<class 'torch.Tensor'>
<class 'list'>
```
I use `torch.utils.data.DataLoader` for batches; the type of `batch["train"]["image"]` is also `<class 'list'>`.
I don't understand why only the `label` is converted to a torch tensor; why does the image not get converted? How can I fix this issue?
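For context, the workaround I am currently considering is re-stacking the lists in a custom collate function (just a sketch, using the `train_dataset` from above):
```python
import torch
from torch.utils.data import DataLoader

def collate_fn(examples):
    # each element is a dict with "image" and "label"; stack the
    # (possibly nested-list) images back into one (batch, 1, 28, 28) tensor
    images = torch.stack([torch.as_tensor(ex["image"]) for ex in examples])
    labels = torch.tensor([ex["label"] for ex in examples])
    return {"image": images, "label": labels}

loader = DataLoader(train_dataset["train"], batch_size=2, collate_fn=collate_fn)
```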
Thanks,
Gunjan
EDIT:
I just checked the shapes and the types: `batch["image"]` is actually a list of lists of tensors. The shape is (1,28,2,28), where `batch_size` is 2. I don't understand why this is happening. Ideally it should be a tensor of shape (2,1,28,28).
EDIT 2:
Inside `prepare_train_features`, the shape of `images[0]` is `torch.Size([1,28,28])`, the conversion is working. However, the output of the `map` is a list of list of list of list. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2005/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2005/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2004/comments | https://api.github.com/repos/huggingface/datasets/issues/2004/events | https://github.com/huggingface/datasets/pull/2004 | 824,080,760 | MDExOlB1bGxSZXF1ZXN0NTg2MzcyODY1 | 2,004 | LaRoSeDa | {
"avatar_url": "https://avatars.githubusercontent.com/u/6823177?v=4",
"events_url": "https://api.github.com/users/MihaelaGaman/events{/privacy}",
"followers_url": "https://api.github.com/users/MihaelaGaman/followers",
"following_url": "https://api.github.com/users/MihaelaGaman/following{/other_user}",
"gists_url": "https://api.github.com/users/MihaelaGaman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MihaelaGaman",
"id": 6823177,
"login": "MihaelaGaman",
"node_id": "MDQ6VXNlcjY4MjMxNzc=",
"organizations_url": "https://api.github.com/users/MihaelaGaman/orgs",
"received_events_url": "https://api.github.com/users/MihaelaGaman/received_events",
"repos_url": "https://api.github.com/users/MihaelaGaman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MihaelaGaman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MihaelaGaman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MihaelaGaman"
} | [] | closed | false | null | [] | null | 1 | "2021-03-08T01:06:32Z" | "2021-03-17T10:43:20Z" | "2021-03-17T10:43:20Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2004.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2004",
"merged_at": "2021-03-17T10:43:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2004.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2004"
} | Add LaRoSeDa to huggingface datasets. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2004/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2004/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2003 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2003/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2003/comments | https://api.github.com/repos/huggingface/datasets/issues/2003/events | https://github.com/huggingface/datasets/issues/2003 | 824,034,678 | MDU6SXNzdWU4MjQwMzQ2Nzg= | 2,003 | Messages are being printed to the `stdout` | {
"avatar_url": "https://avatars.githubusercontent.com/u/1367529?v=4",
"events_url": "https://api.github.com/users/mahnerak/events{/privacy}",
"followers_url": "https://api.github.com/users/mahnerak/followers",
"following_url": "https://api.github.com/users/mahnerak/following{/other_user}",
"gists_url": "https://api.github.com/users/mahnerak/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mahnerak",
"id": 1367529,
"login": "mahnerak",
"node_id": "MDQ6VXNlcjEzNjc1Mjk=",
"organizations_url": "https://api.github.com/users/mahnerak/orgs",
"received_events_url": "https://api.github.com/users/mahnerak/received_events",
"repos_url": "https://api.github.com/users/mahnerak/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mahnerak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mahnerak/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mahnerak"
} | [] | closed | false | null | [] | null | 3 | "2021-03-07T22:09:34Z" | "2023-07-25T16:35:21Z" | "2023-07-25T16:35:21Z" | NONE | null | null | null | In this code segment, we can see some messages are being printed to the `stdout`.
https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554
According to the comment, it is done intentionally, but I don't really understand why we don't log it at a higher level or print it directly to `stderr`.
In my opinion, this kind of message should never be printed to stdout. At the very least, some configuration flag should be provided to explicitly prevent the package from contaminating stdout.
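For illustration only, what I have in mind is routing these messages through the standard logging machinery (or plain stderr) rather than stdout; a minimal sketch, where the logger name and message are illustrative:
```python
import logging
import sys

logger = logging.getLogger("datasets.builder")
logger.addHandler(logging.StreamHandler(sys.stderr))  # stderr, not stdout

# instead of `print(msg)`:
logger.info("Dataset downloaded and prepared; subsequent calls will reuse this data.")
```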
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2003/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2003/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2002/comments | https://api.github.com/repos/huggingface/datasets/issues/2002/events | https://github.com/huggingface/datasets/pull/2002 | 823,955,744 | MDExOlB1bGxSZXF1ZXN0NTg2MjgwNzE3 | 2,002 | MOROCO | {
"avatar_url": "https://avatars.githubusercontent.com/u/6823177?v=4",
"events_url": "https://api.github.com/users/MihaelaGaman/events{/privacy}",
"followers_url": "https://api.github.com/users/MihaelaGaman/followers",
"following_url": "https://api.github.com/users/MihaelaGaman/following{/other_user}",
"gists_url": "https://api.github.com/users/MihaelaGaman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MihaelaGaman",
"id": 6823177,
"login": "MihaelaGaman",
"node_id": "MDQ6VXNlcjY4MjMxNzc=",
"organizations_url": "https://api.github.com/users/MihaelaGaman/orgs",
"received_events_url": "https://api.github.com/users/MihaelaGaman/received_events",
"repos_url": "https://api.github.com/users/MihaelaGaman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MihaelaGaman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MihaelaGaman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MihaelaGaman"
} | [] | closed | false | null | [] | null | 1 | "2021-03-07T16:22:17Z" | "2021-03-19T09:52:06Z" | "2021-03-19T09:52:06Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2002.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2002",
"merged_at": "2021-03-19T09:52:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2002.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2002"
} | Add MOROCO to huggingface datasets. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2002/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2002/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2001/comments | https://api.github.com/repos/huggingface/datasets/issues/2001/events | https://github.com/huggingface/datasets/issues/2001 | 823,946,706 | MDU6SXNzdWU4MjM5NDY3MDY= | 2,001 | Empty evidence document ("provenance") in KILT ELI5 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4",
"events_url": "https://api.github.com/users/donggyukimc/events{/privacy}",
"followers_url": "https://api.github.com/users/donggyukimc/followers",
"following_url": "https://api.github.com/users/donggyukimc/following{/other_user}",
"gists_url": "https://api.github.com/users/donggyukimc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/donggyukimc",
"id": 16605764,
"login": "donggyukimc",
"node_id": "MDQ6VXNlcjE2NjA1NzY0",
"organizations_url": "https://api.github.com/users/donggyukimc/orgs",
"received_events_url": "https://api.github.com/users/donggyukimc/received_events",
"repos_url": "https://api.github.com/users/donggyukimc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/donggyukimc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donggyukimc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/donggyukimc"
} | [] | closed | false | null | [] | null | 1 | "2021-03-07T15:41:35Z" | "2022-12-19T19:25:14Z" | "2021-03-17T05:51:01Z" | NONE | null | null | null | In the original KILT benchmark(https://github.com/facebookresearch/KILT),
every sample has its evidence document (i.e. Wikipedia page ID) for prediction.
For example, a sample in the ELI5 dataset includes provenance (i.e. the evidence document), like this:
`{"id": "1kiwfx", "input": "In Trading Places (1983, Akroyd/Murphy) how does the scheme at the end of the movie work? Why would buying a lot of OJ at a high price ruin the Duke Brothers?", "output": [{"answer": "I feel so old. People have been askinbg what happened at the end of this movie for what must be the last 15 years of my life. It never stops. Every year/month/fortnight, I see someone asking what happened, and someone explaining. Andf it will keep on happening, until I am 90yrs old, in a home, with nothing but the Internet and my bladder to keep me going. And there it will be: \"what happens at the end of Trading Places?\""}, {"provenance": [{"wikipedia_id": "242855", "title": "Futures contract", "section": "Section::::Abstract.", "start_paragraph_id": 1, "start_character": 14, "end_paragraph_id": 1, "end_character": 612, "bleu_score": 0.9232808519770748}]}], "meta": {"partial_evidence": [{"wikipedia_id": "520990", "title": "Trading Places", "section": "Section::::Plot.\n", "start_paragraph_id": 7, "end_paragraph_id": 7, "meta": {"evidence_span": ["On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts.", "On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts. Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice.", "Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice."]}}]}}`
However, the KILT ELI5 dataset from the huggingface datasets library only contains an empty provenance list:
`{'id': '1oy5tc', 'input': 'in football whats the point of wasting the first two plays with a rush - up the middle - not regular rush plays i get those', 'meta': {'left_context': '', 'mention': '', 'obj_surface': [], 'partial_evidence': [], 'right_context': '', 'sub_surface': [], 'subj_aliases': [], 'template_questions': []}, 'output': [{'answer': 'In most cases the O-Line is supposed to make a hole for the running back to go through. If you run too many plays to the outside/throws the defense will catch on.\n\nAlso, 2 5 yard plays gets you a new set of downs.', 'meta': {'score': 2}, 'provenance': []}, {'answer': "I you don't like those type of plays, watch CFL. We only get 3 downs so you can't afford to waste one. Lots more passing.", 'meta': {'score': 2}, 'provenance': []}]}
`
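For reference, this is how I am loading the data (a minimal sketch; I am assuming the `eli5` config of `kilt_tasks` is the right one):
```python
from datasets import load_dataset

kilt_eli5 = load_dataset("kilt_tasks", "eli5")
sample = kilt_eli5["train"][0]
print(sample["output"][0]["provenance"])  # comes back as [] for me
```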
Should I perform some other procedure to obtain the evidence documents? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2001/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2001/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2000/comments | https://api.github.com/repos/huggingface/datasets/issues/2000/events | https://github.com/huggingface/datasets/issues/2000 | 823,899,910 | MDU6SXNzdWU4MjM4OTk5MTA= | 2,000 | Windows Permission Error (most recent version of datasets) | {
"avatar_url": "https://avatars.githubusercontent.com/u/73881148?v=4",
"events_url": "https://api.github.com/users/itsLuisa/events{/privacy}",
"followers_url": "https://api.github.com/users/itsLuisa/followers",
"following_url": "https://api.github.com/users/itsLuisa/following{/other_user}",
"gists_url": "https://api.github.com/users/itsLuisa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/itsLuisa",
"id": 73881148,
"login": "itsLuisa",
"node_id": "MDQ6VXNlcjczODgxMTQ4",
"organizations_url": "https://api.github.com/users/itsLuisa/orgs",
"received_events_url": "https://api.github.com/users/itsLuisa/received_events",
"repos_url": "https://api.github.com/users/itsLuisa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/itsLuisa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itsLuisa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/itsLuisa"
} | [] | closed | false | null | [] | null | 5 | "2021-03-07T11:55:28Z" | "2021-03-09T12:42:57Z" | "2021-03-09T12:42:57Z" | NONE | null | null | null | Hi everyone,
Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , except that I want to load the data from three local three-column tsv-files (id\ttokens\tpos_tags\n). I am using the most recent version of datasets. Thank you in advance!
Luisa
My script:
```
import datasets
import csv
logger = datasets.logging.get_logger(__name__)
class SampleConfig(datasets.BuilderConfig):
def __init__(self, **kwargs):
super(SampleConfig, self).__init__(**kwargs)
class Sample(datasets.GeneratorBasedBuilder):
BUILDER_CONFIGS = [
SampleConfig(name="conll2003", version=datasets.Version("1.0.0"), description="Conll2003 dataset"),
]
def _info(self):
return datasets.DatasetInfo(
description="Dataset with words and their POS-Tags",
features=datasets.Features(
{
"id": datasets.Value("string"),
"tokens": datasets.Sequence(datasets.Value("string")),
"pos_tags": datasets.Sequence(
datasets.features.ClassLabel(
names=[
"''",
",",
"-LRB-",
"-RRB-",
".",
":",
"CC",
"CD",
"DT",
"EX",
"FW",
"HYPH",
"IN",
"JJ",
"JJR",
"JJS",
"MD",
"NN",
"NNP",
"NNPS",
"NNS",
"PDT",
"POS",
"PRP",
"PRP$",
"RB",
"RBR",
"RBS",
"RP",
"TO",
"UH",
"VB",
"VBD",
"VBG",
"VBN",
"VBP",
"VBZ",
"WDT",
"WP",
"WRB",
"``"
]
)
),
}
),
supervised_keys=None,
homepage="https://catalog.ldc.upenn.edu/LDC2011T03",
citation="Weischedel, Ralph, et al. OntoNotes Release 4.0 LDC2011T03. Web Download. Philadelphia: Linguistic Data Consortium, 2011.",
)
def _split_generators(self, dl_manager):
loaded_files = dl_manager.download_and_extract(self.config.data_files)
return [
datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": loaded_files["train"]}),
datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": loaded_files["test"]}),
datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": loaded_files["val"]})
]
def _generate_examples(self, filepath):
logger.info("generating examples from = %s", filepath)
with open(filepath, encoding="cp1252") as f:
data = csv.reader(f, delimiter="\t")
ids = list()
tokens = list()
pos_tags = list()
for id_, line in enumerate(data):
#print(line)
if len(line) == 1:
if tokens:
yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags}
ids = list()
tokens = list()
pos_tags = list()
else:
ids.append(line[0])
tokens.append(line[1])
pos_tags.append(line[2])
# last example
yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags}
def main():
dataset = datasets.load_dataset(
"data_loading.py", data_files={
"train": "train.tsv",
"test": "test.tsv",
"val": "val.tsv"
}
)
#print(dataset)
if __name__=="__main__":
main()
```
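As a side note (possibly unrelated to the PermissionError): the features above declare `id` as a single `Value("string")`, while `_generate_examples` yields the whole list `ids`. Below is a minimal sketch of a variant of that method which keeps `id` a single string per example; this is my guess at the intent, not a confirmed fix, and it reuses the imports and `logger` from the script above.

```python
# Hypothetical variant of _generate_examples: one running string id per sentence
def _generate_examples(self, filepath):
    logger.info("generating examples from = %s", filepath)
    with open(filepath, encoding="cp1252") as f:
        data = csv.reader(f, delimiter="\t")
        guid, tokens, pos_tags = 0, [], []
        for line in data:
            if len(line) == 1:  # a one-field line separates sentences
                if tokens:
                    yield guid, {"id": str(guid), "tokens": tokens, "pos_tags": pos_tags}
                    guid, tokens, pos_tags = guid + 1, [], []
            else:
                tokens.append(line[1])
                pos_tags.append(line[2])
        if tokens:  # last example, only if non-empty
            yield guid, {"id": str(guid), "tokens": tokens, "pos_tags": pos_tags}
```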
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2000/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2000/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1999/comments | https://api.github.com/repos/huggingface/datasets/issues/1999/events | https://github.com/huggingface/datasets/pull/1999 | 823,753,591 | MDExOlB1bGxSZXF1ZXN0NTg2MTM5ODMy | 1,999 | Add FashionMNIST dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | 1 | "2021-03-06T21:36:57Z" | "2021-03-09T09:52:11Z" | "2021-03-09T09:52:11Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1999.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1999",
"merged_at": "2021-03-09T09:52:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1999.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1999"
} | This PR adds the [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset.
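A usage sketch for after the merge (assuming the dataset is registered under the name `fashion_mnist`):

```python
from datasets import load_dataset

fashion_mnist = load_dataset("fashion_mnist")
print(fashion_mnist["train"][0])  # an image with its class label
```
| {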
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1999/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1999/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1998 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1998/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1998/comments | https://api.github.com/repos/huggingface/datasets/issues/1998/events | https://github.com/huggingface/datasets/pull/1998 | 823,723,960 | MDExOlB1bGxSZXF1ZXN0NTg2MTE4NTQ4 | 1,998 | Add -DOCSTART- note to dataset card of conll-like datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 1 | "2021-03-06T19:08:29Z" | "2021-03-11T02:20:07Z" | "2021-03-11T02:20:07Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1998.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1998",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1998.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1998"
} | Closes #1983 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1998/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1998/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1997/comments | https://api.github.com/repos/huggingface/datasets/issues/1997/events | https://github.com/huggingface/datasets/issues/1997 | 823,679,465 | MDU6SXNzdWU4MjM2Nzk0NjU= | 1,997 | from datasets import MoleculeDataset, GEOMDataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/5087210?v=4",
"events_url": "https://api.github.com/users/futianfan/events{/privacy}",
"followers_url": "https://api.github.com/users/futianfan/followers",
"following_url": "https://api.github.com/users/futianfan/following{/other_user}",
"gists_url": "https://api.github.com/users/futianfan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/futianfan",
"id": 5087210,
"login": "futianfan",
"node_id": "MDQ6VXNlcjUwODcyMTA=",
"organizations_url": "https://api.github.com/users/futianfan/orgs",
"received_events_url": "https://api.github.com/users/futianfan/received_events",
"repos_url": "https://api.github.com/users/futianfan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/futianfan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/futianfan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/futianfan"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 0 | "2021-03-06T15:50:19Z" | "2021-03-06T16:13:26Z" | "2021-03-06T16:13:26Z" | NONE | null | null | null | I got the error `ImportError: cannot import name 'MoleculeDataset' from 'datasets'`. Has anyone met similar issues? Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1997/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1997/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1996/comments | https://api.github.com/repos/huggingface/datasets/issues/1996/events | https://github.com/huggingface/datasets/issues/1996 | 823,573,410 | MDU6SXNzdWU4MjM1NzM0MTA= | 1,996 | Error when exploring `arabic_speech_corpus` | {
"avatar_url": "https://avatars.githubusercontent.com/u/6879673?v=4",
"events_url": "https://api.github.com/users/elgeish/events{/privacy}",
"followers_url": "https://api.github.com/users/elgeish/followers",
"following_url": "https://api.github.com/users/elgeish/following{/other_user}",
"gists_url": "https://api.github.com/users/elgeish/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/elgeish",
"id": 6879673,
"login": "elgeish",
"node_id": "MDQ6VXNlcjY4Nzk2NzM=",
"organizations_url": "https://api.github.com/users/elgeish/orgs",
"received_events_url": "https://api.github.com/users/elgeish/received_events",
"repos_url": "https://api.github.com/users/elgeish/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/elgeish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elgeish/subscriptions",
"type": "User",
"url": "https://api.github.com/users/elgeish"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] | closed | false | null | [] | null | 3 | "2021-03-06T05:55:20Z" | "2022-10-05T13:24:26Z" | "2022-10-05T13:24:26Z" | NONE | null | null | null | Navigate to https://huggingface.co/datasets/viewer/?dataset=arabic_speech_corpus
Error:
```
ImportError: To be able to use this dataset, you need to install the following dependencies['soundfile'] using 'pip install soundfile' for instance'
Traceback:
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/script_runner.py", line 332, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 233, in <module>
configs = get_confs(option)
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py", line 604, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py", line 588, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp-viewer/run.py", line 145, in get_confs
module_path = nlp.load.prepare_module(path, dataset=True
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/datasets/load.py", line 342, in prepare_module
f"To be able to use this {module_type}, you need to install the following dependencies"
```
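The message itself points at the fix; a minimal sketch, assuming `soundfile` is the only missing dependency:

```python
# First: pip install soundfile   (as the error message instructs)
from datasets import load_dataset

ds = load_dataset("arabic_speech_corpus")
print(ds)
```
| {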
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1996/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1996/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1995/comments | https://api.github.com/repos/huggingface/datasets/issues/1995/events | https://github.com/huggingface/datasets/pull/1995 | 822,878,431 | MDExOlB1bGxSZXF1ZXN0NTg1NDI5NTg0 | 1,995 | [Timit_asr] Make sure not only the first sample is used | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 4 | "2021-03-05T08:42:51Z" | "2021-06-30T06:25:53Z" | "2021-03-05T08:58:59Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1995.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1995",
"merged_at": "2021-03-05T08:58:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1995.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1995"
} | When playing around with TIMIT I noticed that only the first sample was used for all indices. I corrected this typo so that the dataset is loaded correctly.
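A quick sanity check along these lines (a sketch; before the fix, every index returned the same example):

```python
from datasets import load_dataset

timit = load_dataset("timit_asr")
# Distinct indices should now yield distinct examples
assert timit["train"][0] != timit["train"][1]
```
| {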
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1995/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1995/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1994/comments | https://api.github.com/repos/huggingface/datasets/issues/1994/events | https://github.com/huggingface/datasets/issues/1994 | 822,871,238 | MDU6SXNzdWU4MjI4NzEyMzg= | 1,994 | not being able to get wikipedia es language | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | open | false | null | [] | null | 8 | "2021-03-05T08:31:48Z" | "2021-03-11T20:46:21Z" | null | NONE | null | null | null | Hi
I am trying to run code with the Wikipedia dataset using config 20200501.es, and I get:
```
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/load.py", line 612, in load_dataset
ignore_verifications=ignore_verifications,
File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 527, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 1050, in _download_and_prepare
"\n\t`{}`".format(usage_example)
datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')`
```
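For completeness, here is the runner suggested by the error message as a runnable sketch (the message warns that `DirectRunner` runs locally and may run out of memory on a full Wikipedia dump):

```python
from datasets import load_dataset

# Local Apache Beam runner, as suggested by the error message
wiki_es = load_dataset("wikipedia", "20200501.es", beam_runner="DirectRunner")
```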
Thanks @lhoestq for any suggestions/help. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1994/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1994/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1993/comments | https://api.github.com/repos/huggingface/datasets/issues/1993/events | https://github.com/huggingface/datasets/issues/1993 | 822,758,387 | MDU6SXNzdWU4MjI3NTgzODc= | 1,993 | How to load a dataset with load_from disk and save it again after doing transformations without changing the original? | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
} | [] | closed | false | null | [] | null | 7 | "2021-03-05T05:25:50Z" | "2021-03-22T04:05:50Z" | "2021-03-22T04:05:50Z" | NONE | null | null | null | I am using the latest datasets library. In my work, I first use **load_from_disk** to load a dataset that contains 3.8 GB of information. Then, during my training process, I update that dataset object, add new elements, and save it in a different place.
When I save the dataset with **save_to_disk**, the original dataset that is already on disk also gets updated. I do not want to update it. How can I prevent this?
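A minimal sketch of the workflow I mean (the paths and the transformation are placeholders):

```python
from datasets import load_from_disk

ds = load_from_disk("/path/to/original")      # ~3.8 GB dataset
ds = ds.map(lambda example: {"extra": 0})     # hypothetical update during training
ds.save_to_disk("/path/to/updated")           # the original on disk changes too, which I want to avoid
```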
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1993/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1993/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1992 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1992/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1992/comments | https://api.github.com/repos/huggingface/datasets/issues/1992/events | https://github.com/huggingface/datasets/issues/1992 | 822,672,238 | MDU6SXNzdWU4MjI2NzIyMzg= | 1,992 | `datasets.map` multi processing much slower than single processing | {
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hwijeen",
"id": 29157715,
"login": "hwijeen",
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hwijeen"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | 13 | "2021-03-05T02:10:02Z" | "2023-06-08T12:31:55Z" | null | NONE | null | null | null | Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70 GB.
My data preparation is roughly two steps: `load_dataset`, which splits corpora into a table of sentences, and `map`, which converts each sentence into a list of integers using a tokenizer.
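Roughly, a sketch of that pipeline (the file name is a placeholder, and the tokenizer is a trivial stand-in for the real one):

```python
import multiprocessing as mp
from datasets import load_dataset

corpus = load_dataset("text", data_files={"train": "corpus.txt"})

def tokenize(example):
    # stand-in for the real tokenizer: characters to integer ids
    return {"input_ids": [ord(c) for c in example["text"]]}

encoded = corpus.map(tokenize, num_proc=mp.cpu_count() // 2)
```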
I noticed that the `map` function with `num_proc=mp.cpu_count() // 2` takes more than 20 hours to finish the job, whereas `num_proc=1` gets the job done in about 5 hours. The machine I used has 40 cores, with 126 GB of RAM. There were no other jobs while the `map` function was running.
What could be the reason? I would be happy to provide any information necessary to spot the cause.
P.S. I was experiencing the imbalance issue mentioned [here](https://github.com/huggingface/datasets/issues/610#issuecomment-705177036) when using multiprocessing.
P.S. 2: When I run `map` with `num_proc=1`, I see one tqdm bar but all the cores are working. When `num_proc=20`, only 20 cores work.

| {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1992/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1992/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1991 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1991/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1991/comments | https://api.github.com/repos/huggingface/datasets/issues/1991/events | https://github.com/huggingface/datasets/pull/1991 | 822,554,473 | MDExOlB1bGxSZXF1ZXN0NTg1MTYwNDkx | 1,991 | Adding the conllpp dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/21319243?v=4",
"events_url": "https://api.github.com/users/ZihanWangKi/events{/privacy}",
"followers_url": "https://api.github.com/users/ZihanWangKi/followers",
"following_url": "https://api.github.com/users/ZihanWangKi/following{/other_user}",
"gists_url": "https://api.github.com/users/ZihanWangKi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZihanWangKi",
"id": 21319243,
"login": "ZihanWangKi",
"node_id": "MDQ6VXNlcjIxMzE5MjQz",
"organizations_url": "https://api.github.com/users/ZihanWangKi/orgs",
"received_events_url": "https://api.github.com/users/ZihanWangKi/received_events",
"repos_url": "https://api.github.com/users/ZihanWangKi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZihanWangKi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZihanWangKi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZihanWangKi"
} | [] | closed | false | null | [] | null | 1 | "2021-03-04T22:19:43Z" | "2021-03-17T10:37:39Z" | "2021-03-17T10:37:39Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1991.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1991",
"merged_at": "2021-03-17T10:37:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1991.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1991"
} | Adding the conllpp dataset; this is a revision of https://github.com/huggingface/datasets/pull/1910. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1991/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1991/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1990 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1990/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1990/comments | https://api.github.com/repos/huggingface/datasets/issues/1990/events | https://github.com/huggingface/datasets/issues/1990 | 822,384,502 | MDU6SXNzdWU4MjIzODQ1MDI= | 1,990 | OSError: Memory mapping file failed: Cannot allocate memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | 6 | "2021-03-04T18:21:58Z" | "2021-08-04T18:04:25Z" | "2021-08-04T18:04:25Z" | NONE | null | null | null | Hi,
I am trying to run code with a Wikipedia dataset; here is the command to reproduce the error. You can find the code for run_mlm.py in the Hugging Face repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py
```
python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.en --do_train --do_eval --output_dir /dara/test --max_seq_length 128
```
I am using transformers version 4.3.2.
But I got a memory error using this dataset. Is there a way I could save memory with the datasets library when using the Wikipedia dataset?
Specifically, I need to train a model with multiple Wikipedia datasets concatenated. Thank you very much @lhoestq for your help and suggestions:
```
File "run_mlm.py", line 441, in <module>
main()
File "run_mlm.py", line 233, in main
split=f"train[{data_args.validation_split_percentage}%:]",
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/load.py", line 750, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 740, in as_dataset
map_tuple=True,
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/utils/py_utils.py", line 225, in map_nested
return function(data_struct)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 757, in _build_single_dataset
in_memory=in_memory,
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 829, in _as_dataset
in_memory=in_memory,
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 215, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 236, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 171, in _read_files
pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename
pa_table = ArrowReader.read_table(filename, in_memory=in_memory)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 322, in read_table
stream = stream_from(filename)
File "pyarrow/io.pxi", line 782, in pyarrow.lib.memory_map
File "pyarrow/io.pxi", line 743, in pyarrow.lib.MemoryMappedFile._open
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
OSError: Memory mapping file failed: Cannot allocate memory
```
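For the concatenation part, this is the shape of what I intend (a sketch; the second language config is just an example, and it assumes these configs are pre-processed so no Beam runner is needed):

```python
from datasets import load_dataset, concatenate_datasets

wiki_en = load_dataset("wikipedia", "20200501.en", split="train")
wiki_de = load_dataset("wikipedia", "20200501.de", split="train")  # example second config
wiki_all = concatenate_datasets([wiki_en, wiki_de])
```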
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1990/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1990/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1989/comments | https://api.github.com/repos/huggingface/datasets/issues/1989/events | https://github.com/huggingface/datasets/issues/1989 | 822,328,147 | MDU6SXNzdWU4MjIzMjgxNDc= | 1,989 | Question/problem with dataset labels | {
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ioana-blue",
"id": 17202292,
"login": "ioana-blue",
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ioana-blue"
} | [] | closed | false | null | [] | null | 10 | "2021-03-04T17:06:53Z" | "2023-07-24T14:39:33Z" | "2023-07-24T14:39:33Z" | NONE | null | null | null | Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
```
File "../../../models/tr-4.3.2/run_puppets.py", line 523, in <module>
main()
File "../../../models/tr-4.3.2/run_puppets.py", line 249, in main
datasets = load_dataset("csv", data_files=data_files)
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 572, in download_and_prepare
self._download_and_prepare(
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 650, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 1028, in _prepare_split
writer.write_table(table)
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/arrow_writer.py", line 292, in write_table
pa_table = pa_table.cast(self._schema)
File "pyarrow/table.pxi", line 1311, in pyarrow.lib.Table.cast
File "pyarrow/table.pxi", line 265, in pyarrow.lib.ChunkedArray.cast
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/pyarrow/compute.py", line 87, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 298, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 192, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Failed to parse string: not nurse
```
Any ideas how to fix this? For now, I'll probably make them numeric.
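One thing I may try before that is declaring the column types explicitly so nothing is inferred (a sketch; the column names are assumptions, and whether `features` can be passed through to the CSV builder may depend on the library version):

```python
from datasets import Features, Value, load_dataset

features = Features({"text": Value("string"), "label": Value("string")})  # hypothetical schema
datasets = load_dataset("csv", data_files={"train": "train.csv"}, features=features)
```
| {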
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1989/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1989/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1988/comments | https://api.github.com/repos/huggingface/datasets/issues/1988/events | https://github.com/huggingface/datasets/issues/1988 | 822,324,605 | MDU6SXNzdWU4MjIzMjQ2MDU= | 1,988 | Readme.md is misleading about kinds of datasets? | {
"avatar_url": "https://avatars.githubusercontent.com/u/878399?v=4",
"events_url": "https://api.github.com/users/surak/events{/privacy}",
"followers_url": "https://api.github.com/users/surak/followers",
"following_url": "https://api.github.com/users/surak/following{/other_user}",
"gists_url": "https://api.github.com/users/surak/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/surak",
"id": 878399,
"login": "surak",
"node_id": "MDQ6VXNlcjg3ODM5OQ==",
"organizations_url": "https://api.github.com/users/surak/orgs",
"received_events_url": "https://api.github.com/users/surak/received_events",
"repos_url": "https://api.github.com/users/surak/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/surak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surak/subscriptions",
"type": "User",
"url": "https://api.github.com/users/surak"
} | [] | closed | false | null | [] | null | 1 | "2021-03-04T17:04:20Z" | "2021-08-04T18:05:23Z" | "2021-08-04T18:05:23Z" | NONE | null | null | null | Hi!
In the README.md, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text."
But here:
https://github.com/huggingface/datasets/blob/master/templates/new_dataset_script.py#L82-L117
You mention other kinds of datasets, with images and so on. I'm confused.
Is it possible to use it to store, say, ImageNet locally? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1988/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1988/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1987/comments | https://api.github.com/repos/huggingface/datasets/issues/1987/events | https://github.com/huggingface/datasets/issues/1987 | 822,308,956 | MDU6SXNzdWU4MjIzMDg5NTY= | 1,987 | wmt15 is broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | 1 | "2021-03-04T16:46:25Z" | "2022-10-05T13:12:26Z" | "2022-10-05T13:12:26Z" | CONTRIBUTOR | null | null | null | While testing the hotfix, I tried another WMT release at random and found wmt15 to be broken:
```
python -c 'from datasets import load_dataset; load_dataset("wmt15", "de-en")'
Downloading: 2.91kB [00:00, 818kB/s]
Downloading: 3.02kB [00:00, 897kB/s]
Downloading: 41.1kB [00:00, 19.1MB/s]
Downloading and preparing dataset wmt15/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt15/de-en/1.0.0/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 578, in download_and_prepare
self._download_and_prepare(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 634, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt15/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f/wmt_utils.py", line 757, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 283, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 191, in download
downloaded_path_or_paths = map_nested(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 203, in map_nested
mapped = [
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 204, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested
return function(data_struct)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 214, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 614, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/wmt/wmt15/resolve/main/training-parallel-nc-v10.tgz
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1987/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1987/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1986/comments | https://api.github.com/repos/huggingface/datasets/issues/1986/events | https://github.com/huggingface/datasets/issues/1986 | 822,176,290 | MDU6SXNzdWU4MjIxNzYyOTA= | 1,986 | wmt datasets fail to load | {
"avatar_url": "https://avatars.githubusercontent.com/u/32322564?v=4",
"events_url": "https://api.github.com/users/sabania/events{/privacy}",
"followers_url": "https://api.github.com/users/sabania/followers",
"following_url": "https://api.github.com/users/sabania/following{/other_user}",
"gists_url": "https://api.github.com/users/sabania/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sabania",
"id": 32322564,
"login": "sabania",
"node_id": "MDQ6VXNlcjMyMzIyNTY0",
"organizations_url": "https://api.github.com/users/sabania/orgs",
"received_events_url": "https://api.github.com/users/sabania/received_events",
"repos_url": "https://api.github.com/users/sabania/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sabania/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sabania/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sabania"
} | [] | closed | false | null | [] | null | 1 | "2021-03-04T14:18:55Z" | "2021-03-04T14:31:07Z" | "2021-03-04T14:31:07Z" | NONE | null | null | null |
```
~\.cache\huggingface\modules\datasets_modules\datasets\wmt14\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\wmt_utils.py in _split_generators(self, dl_manager)
    758 # Extract manually downloaded files.
    759 manual_files = dl_manager.extract(manual_paths_dict)
--> 760 extraction_map = dict(downloaded_files, **manual_files)
    761
    762 for language in self.config.language_pair:
TypeError: type object argument after ** must be a mapping, not list
```
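A minimal reproduction of that failure mode, outside the datasets code (the values are made up):

```python
downloaded_files = {"train": "/tmp/train.tgz"}
manual_files = []  # a list where a mapping was expected
extraction_map = dict(downloaded_files, **manual_files)
# raises: TypeError: type object argument after ** must be a mapping, not list
```
| {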
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1986/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1986/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1985/comments | https://api.github.com/repos/huggingface/datasets/issues/1985/events | https://github.com/huggingface/datasets/pull/1985 | 822,170,651 | MDExOlB1bGxSZXF1ZXN0NTg0ODM4NjIw | 1,985 | Optimize int precision | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 8 | "2021-03-04T14:12:23Z" | "2021-03-22T12:04:40Z" | "2021-03-16T09:44:00Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1985.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1985",
"merged_at": "2021-03-16T09:44:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1985.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1985"
} | Optimize int precision to reduce dataset file size.
Close #1973, close #1825, close #861.
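A rough illustration of the effect (not this PR's implementation): narrower integer types shrink the Arrow payload proportionally.

```python
import pyarrow as pa

values = list(range(1000))
wide = pa.array(values, type=pa.int64())
narrow = pa.array(values, type=pa.int16())  # all values fit in int16
print(wide.nbytes, narrow.nbytes)  # roughly 8000 vs 2000 bytes
```
| {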
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1985/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1985/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1984/comments | https://api.github.com/repos/huggingface/datasets/issues/1984/events | https://github.com/huggingface/datasets/issues/1984 | 821,816,588 | MDU6SXNzdWU4MjE4MTY1ODg= | 1,984 | Add tests for WMT datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 1 | "2021-03-04T06:46:42Z" | "2022-11-04T14:19:16Z" | "2022-11-04T14:19:16Z" | MEMBER | null | null | null | As requested in #1981, we need tests for WMT datasets, using dummy data. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1984/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1984/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1983 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1983/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1983/comments | https://api.github.com/repos/huggingface/datasets/issues/1983/events | https://github.com/huggingface/datasets/issues/1983 | 821,746,008 | MDU6SXNzdWU4MjE3NDYwMDg= | 1,983 | The size of CoNLL-2003 is not consistant with the official release. | {
"avatar_url": "https://avatars.githubusercontent.com/u/39556019?v=4",
"events_url": "https://api.github.com/users/h-peng17/events{/privacy}",
"followers_url": "https://api.github.com/users/h-peng17/followers",
"following_url": "https://api.github.com/users/h-peng17/following{/other_user}",
"gists_url": "https://api.github.com/users/h-peng17/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/h-peng17",
"id": 39556019,
"login": "h-peng17",
"node_id": "MDQ6VXNlcjM5NTU2MDE5",
"organizations_url": "https://api.github.com/users/h-peng17/orgs",
"received_events_url": "https://api.github.com/users/h-peng17/received_events",
"repos_url": "https://api.github.com/users/h-peng17/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/h-peng17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h-peng17/subscriptions",
"type": "User",
"url": "https://api.github.com/users/h-peng17"
} | [] | closed | false | null | [] | null | 4 | "2021-03-04T04:41:34Z" | "2022-10-05T13:13:26Z" | "2022-10-05T13:13:26Z" | NONE | null | null | null | Thanks for sharing the dataset! But when I use CoNLL-2003, I have some questions.
The statistics of CoNLL-2003 in this repo differ from the official release:

| Split | This repo | Official |
|-------|-----------|----------|
| train | 14041     | 14987    |
| dev   | 3250      | 3466     |
| test  | 3453      | 3684     |
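For reference, a minimal sketch of how these counts can be reproduced on this repo's version:

```python
from datasets import load_dataset

conll = load_dataset("conll2003")
print({split: conll[split].num_rows for split in conll})
```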
Looking forward to your reply~ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1983/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1983/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1982/comments | https://api.github.com/repos/huggingface/datasets/issues/1982/events | https://github.com/huggingface/datasets/pull/1982 | 821,448,791 | MDExOlB1bGxSZXF1ZXN0NTg0MjM2NzQ0 | 1,982 | Fix NestedDataStructure.data for empty dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 5 | "2021-03-03T20:16:51Z" | "2021-03-04T16:46:04Z" | "2021-03-03T22:48:36Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1982.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1982",
"merged_at": "2021-03-03T22:48:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1982.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1982"
} | Fix #1981 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1982/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1982/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1981/comments | https://api.github.com/repos/huggingface/datasets/issues/1981/events | https://github.com/huggingface/datasets/issues/1981 | 821,411,109 | MDU6SXNzdWU4MjE0MTExMDk= | 1,981 | wmt datasets fail to load | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | 6 | "2021-03-03T19:21:39Z" | "2021-03-04T14:16:47Z" | "2021-03-03T22:48:36Z" | CONTRIBUTOR | null | null | null | on master:
```
python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")'
Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 578, in download_and_prepare
self._download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 634, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt14/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e/wmt_utils.py", line 760, in _split_generators
extraction_map = dict(downloaded_files, **manual_files)
```
It worked fine recently; the same problem occurs if I try wmt16.
git bisect points to this commit from Feb 25 as the culprit https://github.com/huggingface/datasets/commit/792f1d9bb1c5361908f73e2ef7f0181b2be409fa
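For what it's worth, the companion fix in #1982 ("Fix NestedDataStructure.data for empty dict") hints at where that commit bites. Below is a minimal sketch of the suspected behaviour; the import path and values are assumptions, since the traceback above is truncated:
```python
from datasets.utils.py_utils import NestedDataStructure  # import path is an assumption

# wmt_utils.py builds `extraction_map = dict(downloaded_files, **manual_files)`,
# which only works if the wrapped data comes back as a dict.
wrapped = NestedDataStructure({})
print(wrapped.data)  # expected {}, reportedly not a dict after the Feb 25 commit
```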
@albertvillanova | {
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1981/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1981/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1980/comments | https://api.github.com/repos/huggingface/datasets/issues/1980/events | https://github.com/huggingface/datasets/pull/1980 | 821,312,810 | MDExOlB1bGxSZXF1ZXN0NTg0MTI1OTUy | 1,980 | Loading all answers from drop | {
"avatar_url": "https://avatars.githubusercontent.com/u/25499439?v=4",
"events_url": "https://api.github.com/users/KaijuML/events{/privacy}",
"followers_url": "https://api.github.com/users/KaijuML/followers",
"following_url": "https://api.github.com/users/KaijuML/following{/other_user}",
"gists_url": "https://api.github.com/users/KaijuML/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KaijuML",
"id": 25499439,
"login": "KaijuML",
"node_id": "MDQ6VXNlcjI1NDk5NDM5",
"organizations_url": "https://api.github.com/users/KaijuML/orgs",
"received_events_url": "https://api.github.com/users/KaijuML/received_events",
"repos_url": "https://api.github.com/users/KaijuML/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KaijuML/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaijuML/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KaijuML"
} | [] | closed | false | null | [] | null | 2 | "2021-03-03T17:13:07Z" | "2021-03-15T11:27:26Z" | "2021-03-15T11:27:26Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1980.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1980",
"merged_at": "2021-03-15T11:27:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1980.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1980"
} | Hello all,
I propose this change to the DROP loading script so that all answers are loaded, no matter their type. Currently, only "span" answers are loaded, which excludes a significant number of answers from DROP (i.e., "number" and "date").
I updated the script with the version I use for my work. However, I couldn't find a way to verify that all is working when integrated with the datasets repo, since the `load_dataset` method seems to always download the script from github and not local files.
Note that 9 items from the train set have no answers, as well as 1 from the validation set. The script I propose simply does not load them.
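For reference, `load_dataset` can also be pointed at a local loading script, which is one way to verify changes before they are merged. A sketch, with a hypothetical local path:
```python
from datasets import load_dataset

# Assumption: the modified script lives at this relative path in a local
# checkout; passing a script path makes load_dataset skip the GitHub download.
drop = load_dataset("./datasets/drop/drop.py", split="train")
print(drop[0]["answers"])  # should now include "number" and "date" answers too
```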
Let me know if there is anything else I can do,
Clément | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1980/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1980/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1979/comments | https://api.github.com/repos/huggingface/datasets/issues/1979/events | https://github.com/huggingface/datasets/pull/1979 | 820,977,853 | MDExOlB1bGxSZXF1ZXN0NTgzODQ3MTk3 | 1,979 | Add article_id and process test set template for semeval 2020 task 11… | {
"avatar_url": "https://avatars.githubusercontent.com/u/8195444?v=4",
"events_url": "https://api.github.com/users/hemildesai/events{/privacy}",
"followers_url": "https://api.github.com/users/hemildesai/followers",
"following_url": "https://api.github.com/users/hemildesai/following{/other_user}",
"gists_url": "https://api.github.com/users/hemildesai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hemildesai",
"id": 8195444,
"login": "hemildesai",
"node_id": "MDQ6VXNlcjgxOTU0NDQ=",
"organizations_url": "https://api.github.com/users/hemildesai/orgs",
"received_events_url": "https://api.github.com/users/hemildesai/received_events",
"repos_url": "https://api.github.com/users/hemildesai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hemildesai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hemildesai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hemildesai"
} | [] | closed | false | null | [] | null | 3 | "2021-03-03T10:34:32Z" | "2021-03-13T10:59:40Z" | "2021-03-12T13:10:50Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1979.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1979",
"merged_at": "2021-03-12T13:10:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1979.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1979"
} | … dataset
- `article_id` is needed to create the submission file for the task at https://propaganda.qcri.org/semeval2020-task11/
- The `technique classification` task provides the span indices in a template for the test set that is necessary to complete the task. This PR implements processing of that template for the dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1979/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1979/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1978/comments | https://api.github.com/repos/huggingface/datasets/issues/1978/events | https://github.com/huggingface/datasets/pull/1978 | 820,956,806 | MDExOlB1bGxSZXF1ZXN0NTgzODI5Njgz | 1,978 | Adding ro sts dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/36982089?v=4",
"events_url": "https://api.github.com/users/lorinczb/events{/privacy}",
"followers_url": "https://api.github.com/users/lorinczb/followers",
"following_url": "https://api.github.com/users/lorinczb/following{/other_user}",
"gists_url": "https://api.github.com/users/lorinczb/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lorinczb",
"id": 36982089,
"login": "lorinczb",
"node_id": "MDQ6VXNlcjM2OTgyMDg5",
"organizations_url": "https://api.github.com/users/lorinczb/orgs",
"received_events_url": "https://api.github.com/users/lorinczb/received_events",
"repos_url": "https://api.github.com/users/lorinczb/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lorinczb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorinczb/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lorinczb"
} | [] | closed | false | null | [] | null | 3 | "2021-03-03T10:08:53Z" | "2021-03-05T10:00:14Z" | "2021-03-05T09:33:55Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1978.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1978",
"merged_at": "2021-03-05T09:33:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1978.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1978"
} | Adding [RO-STS](https://github.com/dumitrescustefan/RO-STS) dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1978/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1978/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1977 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1977/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1977/comments | https://api.github.com/repos/huggingface/datasets/issues/1977/events | https://github.com/huggingface/datasets/issues/1977 | 820,312,022 | MDU6SXNzdWU4MjAzMTIwMjI= | 1,977 | ModuleNotFoundError: No module named 'apache_beam' for wikipedia datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | open | false | null | [] | null | 2 | "2021-03-02T19:21:28Z" | "2021-03-03T10:17:40Z" | null | NONE | null | null | null | Hi
I am trying to run the run_mlm.py code [1] from huggingface with the following "wikipedia" / "20200501.aa" dataset:
`python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.aa --do_train --do_eval --output_dir /tmp/test-mlm --max_seq_length 256
`
I am getting this error, but as per the documentation, huggingface datasets provides a processed version of this dataset that users can load without setting up extra requirements for apache-beam. Could you please help me load this dataset?
Do you think I can run run_mlm.py with this dataset, or is there any way I could subsample it and train the model? I would greatly appreciate a processed version for all languages of this dataset, which would allow users to work with it without setting up apache-beam. Thanks.
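A possible workaround sketch while waiting for a preprocessed version: install the Beam dependencies and process this small config locally. The package names and runner choice are assumptions, not the documented path:
```python
# pip install apache-beam mwparserfromhell
import datasets

# DirectRunner processes the dump on the local machine; feasible for a tiny
# config like 20200501.aa, not for the large Wikipedia languages.
wiki = datasets.load_dataset("wikipedia", "20200501.aa", beam_runner="DirectRunner")
print(wiki["train"][0]["title"])
```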
I really appreciate your help.
@lhoestq
thanks.
[1] https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
error I get:
```
>>> import datasets
>>> datasets.load_dataset("wikipedia", "20200501.aa")
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /dara/temp/cache_home_2/datasets/wikipedia/20200501.aa/1.0.0/4021357e28509391eab2f8300d9b689e7e8f3a877ebb3d354b01577d497ebc63...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/load.py", line 746, in load_dataset
use_auth_token=use_auth_token,
File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 573, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 1099, in _download_and_prepare
import apache_beam as beam
ModuleNotFoundError: No module named 'apache_beam'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1977/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1977/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1976/comments | https://api.github.com/repos/huggingface/datasets/issues/1976/events | https://github.com/huggingface/datasets/pull/1976 | 820,228,538 | MDExOlB1bGxSZXF1ZXN0NTgzMjA3NDI4 | 1,976 | Add datasets full offline mode with HF_DATASETS_OFFLINE | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-03-02T17:26:59Z" | "2021-03-03T15:45:31Z" | "2021-03-03T15:45:30Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1976.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1976",
"merged_at": "2021-03-03T15:45:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1976.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1976"
} | Add the HF_DATASETS_OFFLINE environment variable for users who want to use `datasets` offline without having to wait for the network timeouts/retries to happen. This was requested in https://github.com/huggingface/datasets/issues/1939
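Usage is just the environment variable. A minimal sketch; the dataset must already be in the local cache for the call to succeed:
```python
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before importing datasets

from datasets import load_dataset

# Served from the local cache; fails fast instead of waiting on network
# timeouts/retries when the machine is offline.
squad = load_dataset("squad")
```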
cc @stas00 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1976/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1976/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1975 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1975/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1975/comments | https://api.github.com/repos/huggingface/datasets/issues/1975/events | https://github.com/huggingface/datasets/pull/1975 | 820,205,485 | MDExOlB1bGxSZXF1ZXN0NTgzMTg4NjM3 | 1,975 | Fix flake8 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 0 | "2021-03-02T16:59:13Z" | "2021-03-04T10:43:22Z" | "2021-03-04T10:43:22Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1975.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1975",
"merged_at": "2021-03-04T10:43:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1975.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1975"
} | Fix flake8 style. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1975/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1975/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1974 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1974/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1974/comments | https://api.github.com/repos/huggingface/datasets/issues/1974/events | https://github.com/huggingface/datasets/pull/1974 | 820,122,223 | MDExOlB1bGxSZXF1ZXN0NTgzMTE5MDI0 | 1,974 | feat(docs): navigate with left/right arrow keys | {
"avatar_url": "https://avatars.githubusercontent.com/u/32727188?v=4",
"events_url": "https://api.github.com/users/ydcjeff/events{/privacy}",
"followers_url": "https://api.github.com/users/ydcjeff/followers",
"following_url": "https://api.github.com/users/ydcjeff/following{/other_user}",
"gists_url": "https://api.github.com/users/ydcjeff/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ydcjeff",
"id": 32727188,
"login": "ydcjeff",
"node_id": "MDQ6VXNlcjMyNzI3MTg4",
"organizations_url": "https://api.github.com/users/ydcjeff/orgs",
"received_events_url": "https://api.github.com/users/ydcjeff/received_events",
"repos_url": "https://api.github.com/users/ydcjeff/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ydcjeff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydcjeff/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ydcjeff"
} | [] | closed | false | null | [] | null | 0 | "2021-03-02T15:24:50Z" | "2021-03-04T10:44:12Z" | "2021-03-04T10:42:48Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1974.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1974",
"merged_at": "2021-03-04T10:42:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1974.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1974"
Enables docs navigation with the left/right arrow keys. It can be useful for those who navigate with the keyboard a lot.
More info : https://github.com/sphinx-doc/sphinx/pull/2064
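For context, this is typically a one-line theme option in the Sphinx `conf.py`. A sketch, assuming that is the mechanism the linked PR added:
```python
# conf.py
html_theme_options = {
    "navigation_with_keys": True,  # left/right arrow keys switch pages
}
```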
You can try here : https://29353-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1974/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1974/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1973/comments | https://api.github.com/repos/huggingface/datasets/issues/1973/events | https://github.com/huggingface/datasets/issues/1973 | 820,077,312 | MDU6SXNzdWU4MjAwNzczMTI= | 1,973 | Question: what gets stored in the datasets cache and why is it so huge? | {
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ioana-blue",
"id": 17202292,
"login": "ioana-blue",
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ioana-blue"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | 8 | "2021-03-02T14:35:53Z" | "2021-03-30T14:03:59Z" | "2021-03-16T09:44:00Z" | NONE | null | null | null | I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G, which seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before, and it seems to be related to the new version of the datasets library. Any insight? Thank you! | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1973/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1973/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1972 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1972/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1972/comments | https://api.github.com/repos/huggingface/datasets/issues/1972/events | https://github.com/huggingface/datasets/issues/1972 | 819,752,761 | MDU6SXNzdWU4MTk3NTI3NjE= | 1,972 | 'Dataset' object has no attribute 'rename_column' | {
"avatar_url": "https://avatars.githubusercontent.com/u/23195502?v=4",
"events_url": "https://api.github.com/users/farooqzaman1/events{/privacy}",
"followers_url": "https://api.github.com/users/farooqzaman1/followers",
"following_url": "https://api.github.com/users/farooqzaman1/following{/other_user}",
"gists_url": "https://api.github.com/users/farooqzaman1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/farooqzaman1",
"id": 23195502,
"login": "farooqzaman1",
"node_id": "MDQ6VXNlcjIzMTk1NTAy",
"organizations_url": "https://api.github.com/users/farooqzaman1/orgs",
"received_events_url": "https://api.github.com/users/farooqzaman1/received_events",
"repos_url": "https://api.github.com/users/farooqzaman1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/farooqzaman1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/farooqzaman1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/farooqzaman1"
} | [] | closed | false | null | [] | null | 1 | "2021-03-02T08:01:49Z" | "2022-06-01T16:08:47Z" | "2022-06-01T16:08:47Z" | NONE | null | null | null | 'Dataset' object has no attribute 'rename_column' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1972/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1972/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1971 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1971/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1971/comments | https://api.github.com/repos/huggingface/datasets/issues/1971/events | https://github.com/huggingface/datasets/pull/1971 | 819,714,231 | MDExOlB1bGxSZXF1ZXN0NTgyNzgyNTU0 | 1,971 | Fix ArrowWriter closes stream at exit | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 7 | "2021-03-02T07:12:34Z" | "2021-03-10T16:36:57Z" | "2021-03-10T16:36:57Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1971.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1971",
"merged_at": "2021-03-10T16:36:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1971.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1971"
The current implementation of ArrowWriter does not properly release its `stream` resource (by closing it) if its `finalize()` method is never called, or if an exception is raised before or during the call to `finalize()`.
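A minimal sketch of the safer, context-managed pattern proposed below; the constructor arguments are assumptions:
```python
from datasets.arrow_writer import ArrowWriter

# With __enter__/__exit__ implemented, the underlying stream is closed even if
# writing raises before finalize() is reached.
with ArrowWriter(path="/tmp/out.arrow") as writer:
    writer.write({"text": "hello"})
    num_examples, num_bytes = writer.finalize()
```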
Therefore, ArrowWriter should be used as a context manager that properly closes its `stream` resource at exit. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1971/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1971/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1970 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1970/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1970/comments | https://api.github.com/repos/huggingface/datasets/issues/1970/events | https://github.com/huggingface/datasets/pull/1970 | 819,500,620 | MDExOlB1bGxSZXF1ZXN0NTgyNjAzMzEw | 1,970 | Fixing the URL filtering for bad MLSUM examples in GEM | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | 0 | "2021-03-02T01:22:58Z" | "2021-03-02T03:19:06Z" | "2021-03-02T02:01:33Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1970.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1970",
"merged_at": "2021-03-02T02:01:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1970.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1970"
} | This updates the code and metadata to use the updated `gem_mlsum_bad_ids_fixed.json` file provided by @juand-r
cc @sebastianGehrmann | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1970/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1970/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1967/comments | https://api.github.com/repos/huggingface/datasets/issues/1967/events | https://github.com/huggingface/datasets/pull/1967 | 819,129,568 | MDExOlB1bGxSZXF1ZXN0NTgyMjc5OTEx | 1,967 | Add Turkish News Category Dataset - 270K - Lite Version | {
"avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4",
"events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}",
"followers_url": "https://api.github.com/users/yavuzKomecoglu/followers",
"following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yavuzKomecoglu",
"id": 5150963,
"login": "yavuzKomecoglu",
"node_id": "MDQ6VXNlcjUxNTA5NjM=",
"organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs",
"received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events",
"repos_url": "https://api.github.com/users/yavuzKomecoglu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yavuzKomecoglu"
} | [] | closed | false | null | [] | null | 1 | "2021-03-01T18:21:59Z" | "2021-03-02T17:25:00Z" | "2021-03-02T17:25:00Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1967.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1967",
"merged_at": "2021-03-02T17:25:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1967.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1967"
This PR adds the Turkish News Categories Dataset (270K - Lite Version), a text classification dataset by me, @basakbuluz and @serdarakyol.
This dataset contains the same news as the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr) but carries less information; OCR errors are reduced, the data can be easily separated, and the articles were rearranged into 10 classes ("kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya", "spor", "teknoloji", "magazin", "sağlık", "gündem").
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1967/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1967/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1966 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1966/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1966/comments | https://api.github.com/repos/huggingface/datasets/issues/1966/events | https://github.com/huggingface/datasets/pull/1966 | 819,101,253 | MDExOlB1bGxSZXF1ZXN0NTgyMjU2MzE0 | 1,966 | Fix metrics collision in separate multiprocessed experiments | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 1 | "2021-03-01T17:45:18Z" | "2021-03-02T13:05:45Z" | "2021-03-02T13:05:44Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1966.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1966",
"merged_at": "2021-03-02T13:05:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1966.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1966"
} | As noticed in #1942, there's an issue with locks if you run multiple separate evaluation experiments in a multiprocessed setup.
Indeed there is a time span in Metric._finalize() where process 0 loses its lock before re-acquiring it. This is bad since the lock of process 0 tells the other processes that the corresponding cache file is available for writing/reading/deleting: we end up having one metric cache that collides with another one. This can raise FileNotFoundError when a metric tries to read a cache file that the second, conflicting metric has already deleted.
To fix that, I made sure that the lock file of process 0 stays acquired from the cache file creation to the end of the metric computation. This way the other metrics can simply sample a new hashing name in order to avoid the collision.
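For context, separate experiments are told apart by `experiment_id`. A sketch of a collision-free setup; the metric name and id are illustrative:
```python
from datasets import load_metric

# Giving each independent run its own experiment_id keeps its cache files
# (and file locks) apart from other concurrent evaluations on the machine.
metric = load_metric("glue", "mrpc", experiment_id="run_A")
metric.add_batch(predictions=[0, 1], references=[0, 1])
score = metric.compute()
```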
Finally I added missing tests for separate experiments in distributed setup. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1966/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1966/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1965/comments | https://api.github.com/repos/huggingface/datasets/issues/1965/events | https://github.com/huggingface/datasets/issues/1965 | 818,833,460 | MDU6SXNzdWU4MTg4MzM0NjA= | 1,965 | Can we parallelized the add_faiss_index process over dataset shards ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
} | [] | closed | false | null | [] | null | 3 | "2021-03-01T12:47:34Z" | "2021-03-04T19:40:56Z" | "2021-03-04T19:40:42Z" | NONE | null | null | null | I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them (via dataset concatenation) before saving the faiss.index file?
I feel that, theoretically, this will reduce the accuracy of retrieval, since it affects the indexing process.
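For reference, the single-process baseline looks like this (random embeddings, purely illustrative); the sharded proposal would run `add_faiss_index` once per shard instead:
```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"embeddings": np.random.rand(100, 64).tolist()})
ds.add_faiss_index(column="embeddings")  # builds one flat index over all rows
scores, examples = ds.get_nearest_examples(
    "embeddings", np.random.rand(64).astype("float32"), k=5
)
```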
@lhoestq
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1965/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1965/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1964/comments | https://api.github.com/repos/huggingface/datasets/issues/1964/events | https://github.com/huggingface/datasets/issues/1964 | 818,624,864 | MDU6SXNzdWU4MTg2MjQ4NjQ= | 1,964 | Datasets.py function load_dataset does not match squad dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/44536699?v=4",
"events_url": "https://api.github.com/users/LeopoldACC/events{/privacy}",
"followers_url": "https://api.github.com/users/LeopoldACC/followers",
"following_url": "https://api.github.com/users/LeopoldACC/following{/other_user}",
"gists_url": "https://api.github.com/users/LeopoldACC/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LeopoldACC",
"id": 44536699,
"login": "LeopoldACC",
"node_id": "MDQ6VXNlcjQ0NTM2Njk5",
"organizations_url": "https://api.github.com/users/LeopoldACC/orgs",
"received_events_url": "https://api.github.com/users/LeopoldACC/received_events",
"repos_url": "https://api.github.com/users/LeopoldACC/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LeopoldACC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeopoldACC/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LeopoldACC"
} | [] | closed | false | null | [] | null | 6 | "2021-03-01T08:41:31Z" | "2022-10-05T13:09:47Z" | "2022-10-05T13:09:47Z" | NONE | null | null | null | ### 1 When I try to train lxmert,and follow the code in README that --dataset name:
```shell
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /home2/zhenggo1/checkpoint/lxmert_squad
```
the bug is that:
```
Downloading and preparing dataset squad/plain_text (download: 33.51 MiB, generated: 85.75 MiB, post-processed: Unknown size, total: 119.27 MiB) to /home2/zhenggo1/.cache/huggingface/datasets/squad/plain_text/1.0.0/4c81550d83a2ac7c7ce23783bd8ff36642800e6633c1f18417fb58c3ff50cdd7...
Traceback (most recent call last):
File "examples/question-answering/run_qa.py", line 501, in <module>
main()
File "examples/question-answering/run_qa.py", line 217, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset
use_auth_token=use_auth_token,
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 633, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json']
```
I tried to find the [checksum link](https://github.com/huggingface/datasets/blob/master/datasets/squad/dataset_infos.json); is the problem that plain_text does not have a checksum?
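A common stop-gap while the checksum metadata is being fixed is to skip verification. A sketch; it trades away the integrity check, so use it for debugging only:
```python
from datasets import load_dataset

# Bypasses the NonMatchingChecksumError above; kwarg name as in datasets 1.x.
squad = load_dataset("squad", ignore_verifications=True)
```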
### 2 When I try to train lxmert and use a local dataset:
```
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --train_file $SQUAD_DIR/train-v1.1.json --validation_file $SQUAD_DIR/dev-v1.1.json --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /home2/zhenggo1/checkpoint/lxmert_squad
```
The bug is that
```
['title', 'paragraphs']
Traceback (most recent call last):
File "examples/question-answering/run_qa.py", line 501, in <module>
main()
File "examples/question-answering/run_qa.py", line 273, in main
answer_column_name = "answers" if "answers" in column_names else column_names[2]
IndexError: list index out of range
```
I printed the answer_column_name and found that the local squad dataset needs preprocessing by the datasets package so that the code below can work:
```
if training_args.do_train:
column_names = datasets["train"].column_names
else:
column_names = datasets["validation"].column_names
print(datasets["train"].column_names)
question_column_name = "question" if "question" in column_names else column_names[0]
context_column_name = "context" if "context" in column_names else column_names[1]
answer_column_name = "answers" if "answers" in column_names else column_names[2]
```
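One likely cause of the second error: the raw SQuAD json nests everything under a top-level "data" key, so the plain json loader yields ['title', 'paragraphs'] instead of question/context/answers columns. A sketch, using the file names above:
```python
from datasets import load_dataset

# field="data" unwraps the top-level key; the nested paragraphs still need to
# be flattened into (question, context, answers) rows afterwards.
raw = load_dataset(
    "json",
    data_files={"train": "train-v1.1.json", "validation": "dev-v1.1.json"},
    field="data",
)
print(raw["train"].column_names)  # ['title', 'paragraphs']
```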
## Please tell me how to fix the bug, thanks a lot!
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1964/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1964/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1963/comments | https://api.github.com/repos/huggingface/datasets/issues/1963/events | https://github.com/huggingface/datasets/issues/1963 | 818,289,967 | MDU6SXNzdWU4MTgyODk5Njc= | 1,963 | bug in SNLI dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | 1 | "2021-02-28T19:36:20Z" | "2022-10-05T13:13:46Z" | "2022-10-05T13:13:46Z" | NONE | null | null | null | Hi
There is label of -1 in train set of SNLI dataset, please find the code below:
```
import numpy as np
import datasets
data = datasets.load_dataset("snli")["train"]
labels = []
for d in data:
labels.append(d["label"])
print(np.unique(labels))
```
and results:
`[-1 0 1 2]`
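For anyone hitting this: in SNLI, -1 marks examples with no gold-label agreement, and they are usually filtered out rather than treated as a fourth class. A sketch:
```python
import datasets

snli = datasets.load_dataset("snli", split="train")
snli = snli.filter(lambda ex: ex["label"] != -1)  # drop unlabeled examples
```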
version of datasets used:
`datasets 1.2.1 <pip>
`
thanks for your help. @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1963/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1963/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1962 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1962/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1962/comments | https://api.github.com/repos/huggingface/datasets/issues/1962/events | https://github.com/huggingface/datasets/pull/1962 | 818,089,156 | MDExOlB1bGxSZXF1ZXN0NTgxNDQwNzM4 | 1,962 | Fix unused arguments | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 3 | "2021-02-28T02:47:07Z" | "2021-03-11T02:18:17Z" | "2021-03-03T16:37:50Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1962.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1962",
"merged_at": "2021-03-03T16:37:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1962.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1962"
I noticed that some args in the codebase are not used, so I tracked down all such occurrences with Pylance and fixed them.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1962/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1962/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1961/comments | https://api.github.com/repos/huggingface/datasets/issues/1961/events | https://github.com/huggingface/datasets/pull/1961 | 818,077,947 | MDExOlB1bGxSZXF1ZXN0NTgxNDM3NDI0 | 1,961 | Add sst dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patpizio",
"id": 15801338,
"login": "patpizio",
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"repos_url": "https://api.github.com/users/patpizio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patpizio"
} | [] | closed | false | null | [] | null | 0 | "2021-02-28T02:08:29Z" | "2021-03-04T10:38:53Z" | "2021-03-04T10:38:53Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1961.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1961",
"merged_at": "2021-03-04T10:38:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1961.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1961"
} | Related to #1934—Add the Stanford Sentiment Treebank dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1961/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1961/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1960/comments | https://api.github.com/repos/huggingface/datasets/issues/1960/events | https://github.com/huggingface/datasets/pull/1960 | 818,073,154 | MDExOlB1bGxSZXF1ZXN0NTgxNDMzOTY4 | 1,960 | Allow stateful function in dataset.map | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 3 | "2021-02-28T01:29:05Z" | "2021-03-23T15:26:49Z" | "2021-03-23T15:26:49Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1960.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1960",
"merged_at": "2021-03-23T15:26:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1960.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1960"
} | Removes the "test type" section in Dataset.map which would modify the state of the stateful function. Now, the return type of the map function is inferred after processing the first example.
Fixes #1940
@lhoestq Not very happy with the usage of `nonlocal`. Would like to hear your opinion on this. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1960/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1960/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1959 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1959/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1959/comments | https://api.github.com/repos/huggingface/datasets/issues/1959/events | https://github.com/huggingface/datasets/issues/1959 | 818,055,644 | MDU6SXNzdWU4MTgwNTU2NDQ= | 1,959 | Bug in skip_rows argument of load_dataset function ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/73159756?v=4",
"events_url": "https://api.github.com/users/LedaguenelArthur/events{/privacy}",
"followers_url": "https://api.github.com/users/LedaguenelArthur/followers",
"following_url": "https://api.github.com/users/LedaguenelArthur/following{/other_user}",
"gists_url": "https://api.github.com/users/LedaguenelArthur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LedaguenelArthur",
"id": 73159756,
"login": "LedaguenelArthur",
"node_id": "MDQ6VXNlcjczMTU5NzU2",
"organizations_url": "https://api.github.com/users/LedaguenelArthur/orgs",
"received_events_url": "https://api.github.com/users/LedaguenelArthur/received_events",
"repos_url": "https://api.github.com/users/LedaguenelArthur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LedaguenelArthur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LedaguenelArthur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LedaguenelArthur"
} | [] | closed | false | null | [] | null | 1 | "2021-02-27T23:32:54Z" | "2021-03-09T10:21:32Z" | "2021-03-09T10:21:32Z" | NONE | null | null | null | Hello everyone,
I'm quite new to Git, so sorry in advance if I'm breaking some ground rules of issue posting... :/
I tried to use the load_dataset function from the Hugging Face datasets library on a csv file, using the skip_rows argument described on the Hugging Face page to skip the first row containing column names:
`test_dataset = load_dataset('csv', data_files=['test_wLabel.tsv'], delimiter='\t', column_names=["id", "sentence", "label"], skip_rows=1)`
But I got the following error message
`__init__() got an unexpected keyword argument 'skip_rows'`
Have I used the wrong argument? Am I missing something, or is this a bug?
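Edit: for anyone else landing here, a hedged guess (not verified): if the csv script forwards its extra keyword arguments to `pandas.read_csv`, the pandas-style name would be `skiprows` (no underscore) rather than `skip_rows`:
```python
from datasets import load_dataset

# Hedged sketch, assuming extra kwargs are forwarded to pandas.read_csv
test_dataset = load_dataset(
    "csv",
    data_files=["test_wLabel.tsv"],
    delimiter="\t",
    column_names=["id", "sentence", "label"],
    skiprows=1,
)
```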
Thank you very much for your time,
Best regards,
Arthur | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1959/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1959/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1958/comments | https://api.github.com/repos/huggingface/datasets/issues/1958/events | https://github.com/huggingface/datasets/issues/1958 | 818,037,548 | MDU6SXNzdWU4MTgwMzc1NDg= | 1,958 | XSum dataset download link broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/1156974?v=4",
"events_url": "https://api.github.com/users/himat/events{/privacy}",
"followers_url": "https://api.github.com/users/himat/followers",
"following_url": "https://api.github.com/users/himat/following{/other_user}",
"gists_url": "https://api.github.com/users/himat/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/himat",
"id": 1156974,
"login": "himat",
"node_id": "MDQ6VXNlcjExNTY5NzQ=",
"organizations_url": "https://api.github.com/users/himat/orgs",
"received_events_url": "https://api.github.com/users/himat/received_events",
"repos_url": "https://api.github.com/users/himat/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/himat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/himat/subscriptions",
"type": "User",
"url": "https://api.github.com/users/himat"
} | [] | closed | false | null | [] | null | 1 | "2021-02-27T21:47:56Z" | "2021-02-27T21:50:16Z" | "2021-02-27T21:50:16Z" | NONE | null | null | null | I did
```
from datasets import load_dataset
dataset = load_dataset("xsum")
```
This returns
`ConnectionError: Couldn't reach http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1958/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1958/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1956 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1956/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1956/comments | https://api.github.com/repos/huggingface/datasets/issues/1956/events | https://github.com/huggingface/datasets/issues/1956 | 818,013,741 | MDU6SXNzdWU4MTgwMTM3NDE= | 1,956 | [distributed env] potentially unsafe parallel execution | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | 2 | "2021-02-27T20:38:45Z" | "2021-03-01T17:24:42Z" | "2021-03-01T17:24:42Z" | CONTRIBUTOR | null | null | null | ```
metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank)
```
presumes that there is only one set of parallel processes running, and it will intermittently fail if you have multiple sets running, as they will surely overwrite each other. Similar to https://github.com/huggingface/datasets/issues/1942 (but for a different reason).
That's why distributed environments use an identifier unique to each group, so that each group is dealt with separately.
e.g. the env-way of pytorch dist syncing is done with a unique per-set `MASTER_ADDRESS+MASTER_PORT`
So ideally this interface should ask for a shared secret to do the right thing.
I'm not reporting an immediate need, but am only flagging that this will hit someone down the road.
This problem can be remedied by adding a new optional `shared_secret` option, which can then be used to differentiate different groups of processes; this secret should be part of the file lock name and the experiment.
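In the meantime, a hedged sketch using the existing `experiment_id` argument as that per-group secret (it is part of the metric's cache and lock file names; the values below are illustrative):
```python
from datasets import load_metric

num_process, rank = 2, 0  # illustrative values, as in the snippet above
metric = load_metric(
    "glue", "mrpc",
    num_process=num_process,
    process_id=rank,
    experiment_id="my_unique_run_2021_02_27",  # hypothetical per-job id
)
```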
Thank you | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1956/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1956/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1955 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1955/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1955/comments | https://api.github.com/repos/huggingface/datasets/issues/1955/events | https://github.com/huggingface/datasets/pull/1955 | 818,010,664 | MDExOlB1bGxSZXF1ZXN0NTgxMzk2OTA5 | 1,955 | typos + grammar | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | 0 | "2021-02-27T20:21:43Z" | "2021-03-01T17:20:38Z" | "2021-03-01T14:43:19Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1955.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1955",
"merged_at": "2021-03-01T14:43:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1955.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1955"
} | This PR proposes a few typo + grammar fixes, and rewrites some sentences in an attempt to improve readability.
N.B. When referring to the library `datasets` in the docs, it is typically used as a singular, and it definitely is singular when written as "`datasets` library", that is, "`datasets` library is ..." and not "are ...".
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1955/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1955/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1954/comments | https://api.github.com/repos/huggingface/datasets/issues/1954/events | https://github.com/huggingface/datasets/issues/1954 | 817,565,563 | MDU6SXNzdWU4MTc1NjU1NjM= | 1,954 | add a new column | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | 2 | "2021-02-26T18:17:27Z" | "2021-04-29T14:50:43Z" | "2021-04-29T14:50:43Z" | NONE | null | null | null | Hi
I'd need to add a new column to the dataset; I was wondering how this can be done? thanks
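For anyone searching later, a hedged sketch of one common approach at the time of writing: materialize the column with `map()` (toy data; `new_values` is a hypothetical list with one entry per row):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})  # toy dataset
new_values = [0, 1, 2]                              # hypothetical column values
ds = ds.map(lambda example, idx: {"new_column": new_values[idx]}, with_indices=True)
```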
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1954/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1954/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1953/comments | https://api.github.com/repos/huggingface/datasets/issues/1953/events | https://github.com/huggingface/datasets/pull/1953 | 817,498,869 | MDExOlB1bGxSZXF1ZXN0NTgwOTgyMDMz | 1,953 | Documentation for to_csv, to_pandas and to_dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-02-26T16:35:49Z" | "2021-03-01T14:03:48Z" | "2021-03-01T14:03:47Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1953.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1953",
"merged_at": "2021-03-01T14:03:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1953.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1953"
} | I added these methods to the documentation with a small paragraph.
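For reference, a minimal sketch of the three conversions being documented (toy data and output path are hypothetical):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})
ds.to_csv("dataset.csv")  # write the rows to a CSV file
df = ds.to_pandas()       # pandas.DataFrame view of the table
d = ds.to_dict()          # plain dict of column name to list of values
```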
I also fixed some formatting issues in the docstrings. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1953/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1953/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1952 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1952/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1952/comments | https://api.github.com/repos/huggingface/datasets/issues/1952/events | https://github.com/huggingface/datasets/pull/1952 | 817,428,160 | MDExOlB1bGxSZXF1ZXN0NTgwOTIyNjQw | 1,952 | Handle timeouts | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 4 | "2021-02-26T15:02:07Z" | "2021-03-01T14:29:24Z" | "2021-03-01T14:29:24Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1952.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1952",
"merged_at": "2021-03-01T14:29:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1952.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1952"
} | As noticed in https://github.com/huggingface/datasets/issues/1939, timeouts were not properly handled when loading a dataset.
This caused the connection to hang indefinitely when working in a firewalled environment (cc @stas00).
I added a default timeout, and included an option in our offline environment for tests to be able to simulate both connection errors and timeout errors (previously it only simulated connection errors).
Now network calls don't hang indefinitely.
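For illustration (not the PR's code), the kind of bounded call this enables, using the `requests` library directly; the URL and value are illustrative:
```python
import requests

# A bounded request raises requests.exceptions.Timeout instead of hanging
# forever when a firewall silently drops packets.
response = requests.head("https://huggingface.co", timeout=10.0)
```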
The default timeout is set to 10sec (we might reduce it). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1952/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1952/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1951/comments | https://api.github.com/repos/huggingface/datasets/issues/1951/events | https://github.com/huggingface/datasets/pull/1951 | 817,423,573 | MDExOlB1bGxSZXF1ZXN0NTgwOTE4ODE2 | 1,951 | Add cross-platform support for datasets-cli | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 1 | "2021-02-26T14:56:25Z" | "2021-03-11T02:18:26Z" | "2021-02-26T15:30:26Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1951.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1951",
"merged_at": "2021-02-26T15:30:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1951.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1951"
} | One thing I've noticed while going through the codebase is the usage of `scripts` in `setup.py`. This [answer](https://stackoverflow.com/a/28119736/14095927) on SO explains nicely why it's better to use `entry_points` instead of `scripts`. To add cross-platform support to the CLI, this PR replaces `scripts` with `entry_points` in `setup.py` and moves datasets-cli to src/datasets/commands/datasets_cli.py. All *.md and *.rst files are updated accordingly. The same changes were made in the transformers repo to add cross-platform support ([link to PR](https://github.com/huggingface/transformers/pull/4131)). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1951/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1951/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1950/comments | https://api.github.com/repos/huggingface/datasets/issues/1950/events | https://github.com/huggingface/datasets/pull/1950 | 817,295,235 | MDExOlB1bGxSZXF1ZXN0NTgwODExMjMz | 1,950 | updated multi_nli dataset with missing fields | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | 0 | "2021-02-26T11:54:36Z" | "2021-03-01T11:08:30Z" | "2021-03-01T11:08:29Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1950.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1950",
"merged_at": "2021-03-01T11:08:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1950.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1950"
} | 1) updated fields which were missing earlier
2) added tags to README
3) updated a few fields of README
4) new dataset_infos.json and dummy files | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1950/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1950/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1949/comments | https://api.github.com/repos/huggingface/datasets/issues/1949/events | https://github.com/huggingface/datasets/issues/1949 | 816,986,936 | MDU6SXNzdWU4MTY5ODY5MzY= | 1,949 | Enable Fast Filtering using Arrow Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | open | false | null | [] | null | 2 | "2021-02-26T02:53:37Z" | "2021-02-26T19:18:29Z" | null | CONTRIBUTOR | null | null | null | Hi @lhoestq,
As mentioned in Issue #1796, I would love to work on enabling fast filtering/mapping. Can you please share the expectations? It would be great if you could point me to the relevant methods/files involved, or to the docs, or maybe an overview of `arrow_dataset.py`. I only ask because I am having trouble getting started ;-;
Any help would be appreciated.
Thanks,
Gunjan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1949/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1949/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1948/comments | https://api.github.com/repos/huggingface/datasets/issues/1948/events | https://github.com/huggingface/datasets/issues/1948 | 816,689,329 | MDU6SXNzdWU4MTY2ODkzMjk= | 1,948 | dataset loading logger level | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | 3 | "2021-02-25T18:33:37Z" | "2023-07-12T17:19:30Z" | "2023-07-12T17:19:30Z" | CONTRIBUTOR | null | null | null | on master I get this with `--dataset_name wmt16 --dataset_config ro-en`:
```
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42e26.arrow
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-ac3bebaf4f91f776.arrow
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-810c3e61259d73a9.arrow
```
why are those WARNINGs? Should be INFO, no?
Warnings should only be used when a user needs to pay attention to something; this is just informative. I'd even say it should be DEBUG, but definitely not WARNING.
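In the meantime, a workaround sketch for users (assuming the standard `datasets.utils.logging` helpers):
```python
from datasets.utils import logging

# Raise the library's verbosity threshold so these cache messages
# (currently emitted as WARNING) are hidden.
logging.set_verbosity_error()
```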
Thank you.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1948/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1948/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1947 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1947/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1947/comments | https://api.github.com/repos/huggingface/datasets/issues/1947/events | https://github.com/huggingface/datasets/pull/1947 | 816,590,299 | MDExOlB1bGxSZXF1ZXN0NTgwMjI2MDk5 | 1,947 | Update documentation with not in place transforms and update DatasetDict | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-02-25T16:23:18Z" | "2021-03-01T14:36:54Z" | "2021-03-01T14:36:53Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1947.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1947",
"merged_at": "2021-03-01T14:36:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1947.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1947"
} | In #1883, the not-in-place transforms `flatten`, `remove_columns`, `rename_column` and `cast` were added.
I added them to the documentation, along with a paragraph on how to use them.
You can preview the documentation [here](https://28862-250213286-gh.circle-artifacts.com/0/docs/_build/html/processing.html#renaming-removing-casting-and-flattening-columns)
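For readers of this thread, a hedged usage sketch of the four transforms (dataset and column names are illustrative; each call returns a new `Dataset`):
```python
from datasets import Value, load_dataset

ds = load_dataset("glue", "mrpc", split="train")
ds = ds.flatten()                              # flatten nested columns (no-op here)
ds = ds.remove_columns(["idx"])                # drop a column
ds = ds.rename_column("sentence1", "premise")  # rename a column

new_features = ds.features.copy()              # cast needs a full Features schema
new_features["label"] = Value("int64")
ds = ds.cast(new_features)
```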
I also added these methods to the DatasetDict class. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1947/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1947/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1946 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1946/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1946/comments | https://api.github.com/repos/huggingface/datasets/issues/1946/events | https://github.com/huggingface/datasets/pull/1946 | 816,526,294 | MDExOlB1bGxSZXF1ZXN0NTgwMTcyNzI2 | 1,946 | Implement Dataset from CSV | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 3 | "2021-02-25T15:10:13Z" | "2021-03-12T09:42:48Z" | "2021-03-12T09:42:48Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1946.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1946",
"merged_at": "2021-03-12T09:42:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1946.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1946"
} | Implement `Dataset.from_csv`.
Analogous to #1943.
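A hedged usage sketch of the new classmethod (the file path is hypothetical; it mirrors `load_dataset("csv", data_files=...)`):
```python
from datasets import Dataset

ds = Dataset.from_csv("data/train.csv")
```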
If, in the end, the loading scripts should be used instead, at least we can reuse the tests here. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1946/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1946/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1945/comments | https://api.github.com/repos/huggingface/datasets/issues/1945/events | https://github.com/huggingface/datasets/issues/1945 | 816,421,966 | MDU6SXNzdWU4MTY0MjE5NjY= | 1,945 | AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets' | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | 1 | "2021-02-25T13:09:45Z" | "2021-02-25T13:20:35Z" | "2021-02-25T13:20:26Z" | NONE | null | null | null | Hi
I am trying to concatenate a list of huggingface datasets as:
`train_dataset = datasets.concatenate_datasets(train_datasets)`
Here is the `train_datasets` when I print:
```
[Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 120361
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 2670
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 6944
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 38140
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 173711
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 1655
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 4274
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 2019
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 2109
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 11963
})]
```
I am getting the following error:
`AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets'`
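For what it's worth, the traceback pattern suggests the name `datasets` was rebound to a `DatasetDict` object (shadowing the imported module), so the attribute lookup fails; this is an assumption from the error alone. A minimal sketch of the intended call, with `train_datasets` being the list printed above:
```python
from datasets import concatenate_datasets  # module-level function, not a DatasetDict method

train_dataset = concatenate_datasets(train_datasets)
```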
I was wondering if you could help me with this issue, thanks a lot | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1945/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1945/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1944/comments | https://api.github.com/repos/huggingface/datasets/issues/1944/events | https://github.com/huggingface/datasets/pull/1944 | 816,267,216 | MDExOlB1bGxSZXF1ZXN0NTc5OTU2Nzc3 | 1,944 | Add Turkish News Category Dataset (270K - Lite Version) | {
"avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4",
"events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}",
"followers_url": "https://api.github.com/users/yavuzKomecoglu/followers",
"following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yavuzKomecoglu",
"id": 5150963,
"login": "yavuzKomecoglu",
"node_id": "MDQ6VXNlcjUxNTA5NjM=",
"organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs",
"received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events",
"repos_url": "https://api.github.com/users/yavuzKomecoglu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yavuzKomecoglu"
} | [] | closed | false | null | [] | null | 2 | "2021-02-25T09:45:22Z" | "2021-03-02T17:46:41Z" | "2021-03-01T18:23:21Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1944.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1944",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1944.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1944"
} | This PR adds the Turkish News Category Dataset (270K - Lite Version), a text classification dataset by me, @basakbuluz, and @serdarakyol.
This dataset contains the same news as the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr) but carries less information; OCR errors are reduced, the text can be separated easily, and the news items were rearranged into 10 classes ("kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya", "spor", "teknoloji", "magazin", "sağlık", "gündem").
@SBrandeis @lhoestq, can you please review this PR?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1944/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1944/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1943/comments | https://api.github.com/repos/huggingface/datasets/issues/1943/events | https://github.com/huggingface/datasets/pull/1943 | 816,160,453 | MDExOlB1bGxSZXF1ZXN0NTc5ODY5NTk0 | 1,943 | Implement Dataset from JSON and JSON Lines | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 11 | "2021-02-25T07:17:33Z" | "2021-03-18T09:42:08Z" | "2021-03-18T09:42:08Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1943.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1943",
"merged_at": "2021-03-18T09:42:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1943.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1943"
} | Implement `Dataset.from_jsonl`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1943/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1943/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1942/comments | https://api.github.com/repos/huggingface/datasets/issues/1942/events | https://github.com/huggingface/datasets/issues/1942 | 816,037,520 | MDU6SXNzdWU4MTYwMzc1MjA= | 1,942 | [experiment] missing default_experiment-1-0.arrow | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 18 | "2021-02-25T03:02:15Z" | "2022-10-05T13:08:45Z" | "2022-10-05T13:08:45Z" | CONTRIBUTOR | null | null | null | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939, metrics don't get cached; looking at my local `~/.cache/huggingface/metrics`, there are many `*.arrow.lock` files but zero metric files.
w/o the network I get:
```
FileNotFoundError: [Errno 2] No such file or directory: '~/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow
```
there is just `~/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock`
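For readers following along, the missing filename appears to encode the metric's distributed setup; a hedged reconstruction of the pattern (an inference from the paths above, not checked against the source):
```python
# Inferred pattern: {experiment_id}-{num_process}-{process_id}.arrow
experiment_id, num_process, process_id = "default_experiment", 1, 0
print(f"{experiment_id}-{num_process}-{process_id}.arrow")  # default_experiment-1-0.arrow
```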
I did run the same `run_seq2seq.py` script on the instance with network and it worked just fine, but only the lock file was left behind.
this is with master.
Thank you. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1942/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1942/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1941/comments | https://api.github.com/repos/huggingface/datasets/issues/1941/events | https://github.com/huggingface/datasets/issues/1941 | 815,985,167 | MDU6SXNzdWU4MTU5ODUxNjc= | 1,941 | Loading of FAISS index fails for index_name = 'exact' | {
"avatar_url": "https://avatars.githubusercontent.com/u/2992022?v=4",
"events_url": "https://api.github.com/users/mkserge/events{/privacy}",
"followers_url": "https://api.github.com/users/mkserge/followers",
"following_url": "https://api.github.com/users/mkserge/following{/other_user}",
"gists_url": "https://api.github.com/users/mkserge/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mkserge",
"id": 2992022,
"login": "mkserge",
"node_id": "MDQ6VXNlcjI5OTIwMjI=",
"organizations_url": "https://api.github.com/users/mkserge/orgs",
"received_events_url": "https://api.github.com/users/mkserge/received_events",
"repos_url": "https://api.github.com/users/mkserge/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mkserge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mkserge/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mkserge"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 3 | "2021-02-25T01:30:54Z" | "2021-02-25T14:28:46Z" | "2021-02-25T14:28:46Z" | CONTRIBUTOR | null | null | null | Hi,
It looks like loading of the FAISS index now fails when using index_name = 'exact'.
For example, from the RAG [model card](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage).
Running `transformers==4.3.2` and datasets installed from source on latest `master` branch.
```bash
(venv) sergey_mkrtchyan datasets (master) $ python
Python 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39)
[Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
>>> tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
>>> retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
Using custom data configuration dummy.psgs_w100.nq.no_index-dummy=True,with_index=False
Reusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.no_index-dummy=True,with_index=False/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)
Using custom data configuration dummy.psgs_w100.nq.exact-50b6cda57ff32ab4
Reusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.exact-50b6cda57ff32ab4/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)
0%| | 0/10 [00:00<?, ?it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 425, in from_pretrained
return cls(
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 387, in __init__
self.init_retrieval()
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 458, in init_retrieval
self.index.init_index()
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 284, in init_index
self.dataset = load_dataset(
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/load.py", line 750, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py", line 734, in as_dataset
datasets = utils.map_nested(
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/utils/py_utils.py", line 195, in map_nested
return function(data_struct)
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py", line 769, in _build_single_dataset
post_processed = self._post_process(ds, resources_paths)
File "/Users/sergey_mkrtchyan/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb/wiki_dpr.py", line 205, in _post_process
dataset.add_faiss_index("embeddings", custom_index=index)
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/arrow_dataset.py", line 2516, in add_faiss_index
super().add_faiss_index(
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py", line 416, in add_faiss_index
faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=faiss_verbose)
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py", line 281, in add_vectors
self.faiss_index.add(vecs)
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/__init__.py", line 104, in replacement_add
self.add_c(n, swig_ptr(x))
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/swigfaiss.py", line 3263, in add
return _swigfaiss.IndexHNSW_add(self, n, x)
RuntimeError: Error in virtual void faiss::IndexHNSW::add(faiss::Index::idx_t, const float *) at /Users/runner/work/faiss-wheels/faiss-wheels/faiss/faiss/IndexHNSW.cpp:356: Error: 'is_trained' failed
>>>
```
The issue seems to be related to the scalar quantization in faiss added in this commit: 8c5220307c33f00e01c3bf7b8. Reverting it fixes the issue.
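For context, a hypothetical reproduction of this failure mode outside the wiki_dpr code path (quantized FAISS indexes must be trained before `add()`):
```python
import numpy as np
import faiss

d = 128
xb = np.random.rand(1000, d).astype("float32")
index = faiss.index_factory(d, "IVF16,SQ8")  # a quantized index that needs training
index.train(xb)  # skipping this line reproduces the "'is_trained' failed" error
index.add(xb)
```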
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1941/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1941/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1940/comments | https://api.github.com/repos/huggingface/datasets/issues/1940/events | https://github.com/huggingface/datasets/issues/1940 | 815,770,012 | MDU6SXNzdWU4MTU3NzAwMTI= | 1,940 | Side effect when filtering data due to `does_function_return_dict` call in `Dataset.map()` | {
"avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4",
"events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}",
"followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers",
"following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}",
"gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/francisco-perez-sorrosal",
"id": 918006,
"login": "francisco-perez-sorrosal",
"node_id": "MDQ6VXNlcjkxODAwNg==",
"organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs",
"received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events",
"repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/francisco-perez-sorrosal"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 2 | "2021-02-24T19:18:56Z" | "2021-03-23T15:26:49Z" | "2021-03-23T15:26:49Z" | CONTRIBUTOR | null | null | null | Hi there!
In my codebase I have a function to filter rows in a dataset, selecting only a certain number of examples per class. The function takes an extra argument used to maintain a counter of the number of dataset rows/examples already selected for each class, which are the ones I want to keep in the end:
```python
def fill_train_examples_per_class(example, per_class_limit: int, counter: collections.Counter):
label = int(example['label'])
current_counter = counter.get(label, 0)
if current_counter < per_class_limit:
counter[label] = current_counter + 1
return True
return False
```
At some point I invoke it through the `Dataset.filter()` method in the `arrow_dataset.py` module like this:
```python
...
kwargs = {"per_class_limit": train_examples_per_class_limit, "counter": Counter()}
datasets['train'] = datasets['train'].filter(fill_train_examples_per_class, num_proc=1, fn_kwargs=kwargs)
...
```
The problem is that passing a stateful container (the counter) causes a side effect in the new filtered dataset. This is due to the fact that at some point in `filter()`, the `map()` helper `does_function_return_dict` is invoked in line [1290](https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L1290).
When this occurs, the state of the counter is modified by the test call of the function on the 1 or 2 rows selected in lines 1288 and 1289 of the same file (marked as `test_inputs` and `test_indices`, respectively). This happens outside the user's control (for example, the user cannot reset the counter's state before execution continues), ultimately producing an undesired side effect in the results.
In my case, although the counter values come out correct, the resulting dataset lacks one instance each of classes 0 and 1 (which happen to be the classes of the first two examples of my dataset). The rest of the classes in my dataset contain the right number of examples, as they were not affected by the `does_function_return_dict` call.
I've debugged my code extensively and made a workaround myself by hardcoding the necessary parts (basically setting `update_data=True` in line 1290), and then I obtain the results I expected without the side effect.
Is there a way to avoid that call to `does_function_return_dict` in `map()`'s line 1290? (e.g., extracting the required information that `does_function_return_dict` returns without making the test calls to the user function on dataset rows 0 and 1)
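In the meantime, here is a rough sketch of a side-effect-free alternative (just an illustration; `select_per_class` is a name I made up): compute the kept indices eagerly in plain Python and then call `Dataset.select()`, so the stateful counter never passes through `map()`'s probing calls:
```python
import collections

def select_per_class(dataset, per_class_limit: int):
    counter = collections.Counter()
    keep = []
    for i, example in enumerate(dataset):
        label = int(example["label"])
        if counter[label] < per_class_limit:
            counter[label] += 1
            keep.append(i)
    # select() only gathers rows by index, so no user function is probed
    return dataset.select(keep)

datasets["train"] = select_per_class(datasets["train"], train_examples_per_class_limit)
```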
Thanks in advance,
Francisco Perez-Sorrosal
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1940/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1940/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1939/comments | https://api.github.com/repos/huggingface/datasets/issues/1939/events | https://github.com/huggingface/datasets/issues/1939 | 815,680,510 | MDU6SXNzdWU4MTU2ODA1MTA= | 1,939 | [firewalled env] OFFLINE mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 7 | "2021-02-24T17:13:42Z" | "2021-03-05T05:09:54Z" | "2021-03-05T05:09:54Z" | CONTRIBUTOR | null | null | null | This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
I propose the following approach to solving this problem, using `run_seq2seq.py` as a sample program. There are 2 possible ways of going about it.
## 1. Manual
Manually prepare the data and metrics files, that is, transfer the dataset and the metrics to the firewalled instance, and run:
```
DATASETS_OFFLINE=1 run_seq2seq.py --train_file xyz.csv --validation_file xyz.csv ...
```
`datasets` must not make any network calls, and if some logic requires one and data is missing, it should raise an error stating that the action in question requires network access and therefore cannot proceed.
## 2. Automatic
In some clouds one can prepare data storage ahead of time in a normal networked environment without GPUs, and then switch to the firewalled GPU instance, which can still access all the cached data. This is the ideal situation, since in this scenario we don't have to do anything manually, but simply run the same application twice:
1. on the non-firewalled instance:
```
run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ...
```
which should download and cache everything.
2. and then immediately after on the firewalled instance, which shares the same filesystem
```
DATASETS_OFFLINE=1 run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ...
```
The metrics and datasets should already be cached by invocation number 1, so any network calls should be skipped; if the logic finds data missing, it should raise an error rather than try to fetch anything online.
## Common Issues
1. For example, `datasets` currently tries to look up datasets online if the files contain JSON or CSV, despite the paths already being provided:
```
if dataset and path in _PACKAGED_DATASETS_MODULES:
```
2. There is an issue with metrics. For example, I had to manually copy `rouge/rouge.py` from the `datasets` repo to the current directory, otherwise it would hang.
I had to comment out the `head_hf_s3(...)` calls to make things work, so none of those `try: head_hf_s3(...)` calls should be attempted when `DATASETS_OFFLINE=1`.
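To illustrate the behavior I'm proposing, here is a minimal sketch of what such a guard could look like (the helper names here are hypothetical, not part of the current `datasets` API):
```python
import os

def is_offline_mode() -> bool:
    # hypothetical helper: treat DATASETS_OFFLINE=1/true/yes as "no network allowed"
    return os.environ.get("DATASETS_OFFLINE", "").upper() in ("1", "TRUE", "YES")

def guarded_head_hf_s3(*args, **kwargs):
    # hypothetical wrapper around the existing head_hf_s3(...) network call
    if is_offline_mode():
        raise ConnectionError(
            "DATASETS_OFFLINE=1 is set: this action requires network access and cannot proceed."
        )
    from datasets.utils.file_utils import head_hf_s3
    return head_hf_s3(*args, **kwargs)
```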
Here is the corresponding issue for `transformers`: https://github.com/huggingface/transformers/issues/10379
Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1939/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1939/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1938/comments | https://api.github.com/repos/huggingface/datasets/issues/1938/events | https://github.com/huggingface/datasets/pull/1938 | 815,647,774 | MDExOlB1bGxSZXF1ZXN0NTc5NDQyNDkw | 1,938 | Disallow ClassLabel with no names | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-02-24T16:37:57Z" | "2021-02-25T11:27:29Z" | "2021-02-25T11:27:29Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1938.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1938",
"merged_at": "2021-02-25T11:27:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1938.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1938"
} | It was possible to create a `ClassLabel` without specifying the names or the number of classes.
This was causing silent issues, as in #1936, and breaking the conversion methods `str2int` and `int2str`.
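A simplified sketch of the kind of validation this introduces (illustrative only; the actual implementation in the library may differ):
```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ClassLabel:
    num_classes: Optional[int] = None
    names: Optional[List[str]] = None

    def __post_init__(self):
        # disallow a ClassLabel with neither names nor num_classes,
        # since str2int/int2str cannot work without them
        if self.num_classes is None and self.names is None:
            raise ValueError("Please provide either num_classes or names when creating a ClassLabel.")
        if self.names is not None and self.num_classes is None:
            self.num_classes = len(self.names)
```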
cc @justin-yan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1938/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1938/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1937/comments | https://api.github.com/repos/huggingface/datasets/issues/1937/events | https://github.com/huggingface/datasets/issues/1937 | 815,163,943 | MDU6SXNzdWU4MTUxNjM5NDM= | 1,937 | CommonGen dataset page shows an error OSError: [Errno 28] No space left on device | {
"avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4",
"events_url": "https://api.github.com/users/yuchenlin/events{/privacy}",
"followers_url": "https://api.github.com/users/yuchenlin/followers",
"following_url": "https://api.github.com/users/yuchenlin/following{/other_user}",
"gists_url": "https://api.github.com/users/yuchenlin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuchenlin",
"id": 10104354,
"login": "yuchenlin",
"node_id": "MDQ6VXNlcjEwMTA0MzU0",
"organizations_url": "https://api.github.com/users/yuchenlin/orgs",
"received_events_url": "https://api.github.com/users/yuchenlin/received_events",
"repos_url": "https://api.github.com/users/yuchenlin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuchenlin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuchenlin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuchenlin"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | 2 | "2021-02-24T06:47:33Z" | "2021-02-26T11:10:06Z" | "2021-02-26T11:10:06Z" | CONTRIBUTOR | null | null | null | The viewer page for the CommonGen dataset, https://huggingface.co/datasets/viewer/?dataset=common_gen, shows the following error:

| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1937/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1937/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1936/comments | https://api.github.com/repos/huggingface/datasets/issues/1936/events | https://github.com/huggingface/datasets/pull/1936 | 814,726,512 | MDExOlB1bGxSZXF1ZXN0NTc4NjY3NTQ4 | 1,936 | [WIP] Adding Support for Reading Pandas Category | {
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/justin-yan",
"id": 7731709,
"login": "justin-yan",
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/justin-yan"
} | [] | closed | false | null | [] | null | 6 | "2021-02-23T18:32:54Z" | "2022-03-09T18:46:22Z" | "2022-03-09T18:46:22Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1936.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1936",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1936.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1936"
} | @lhoestq - continuing our conversation from https://github.com/huggingface/datasets/issues/1906#issuecomment-784247014
The goal of this PR is to support `Dataset.from_pandas(df)` where the dataframe contains a Category.
Just the 4 line change below actually does seem to work:
```
>>> from datasets import Dataset
>>> import pandas as pd
>>> df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
>>> ds = Dataset.from_pandas(df)
>>> ds.to_pandas()
   0
0  a
1  b
2  c
3  a
>>> ds.to_pandas().dtypes
0    category
dtype: object
```
`save_to_disk`, etc., all seem to work as well. The main things that are theoretically "incorrect" if we leave this as-is are:
```
>>> ds.features.type
StructType(struct<0: int64>)
```
There are a decent number of references to this property in the library, but I can't find anything that actually breaks as a result of this being int64 vs. dictionary. I think the gist of my question is: a) do we *need* to change the dtype of `ClassLabel` and have `get_nested_type` return a `pyarrow.DictionaryType` instead of int64, and b) do you *want* it to change? The biggest challenge I see to implementing this correctly is that the data will need to be passed in along with the pyarrow schema when instantiating the `ClassLabel` (I *think* this is unavoidable, since the type itself doesn't contain the actual label values), which could be a fairly intrusive change - e.g., `from_arrow_schema`'s interface would need to change to include optional arrow data. Once we start going down this path of modifying the public interfaces, I am admittedly feeling a little bit outside of my comfort zone.
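For reference, here is a user-side sketch of encoding a category column explicitly today, which sidesteps the schema question entirely (just an illustration, not part of this PR):
```python
import pandas as pd
from datasets import ClassLabel, Dataset, Features

df = pd.DataFrame({"col": pd.Series(["a", "b", "c", "a"], dtype="category")})

# encode the category codes ourselves and attach a ClassLabel feature
names = list(df["col"].cat.categories)                        # ["a", "b", "c"]
encoded = pd.DataFrame({"col": df["col"].cat.codes.astype("int64")})
features = Features({"col": ClassLabel(names=names)})
ds = Dataset.from_pandas(encoded, features=features)

ds.features["col"].int2str(0)   # "a"
```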
Additionally I think `int2str`, `str2int`, and `encode_example` probably won't work, but I can't find any usages of them in the library itself. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1936/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1936/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1935/comments | https://api.github.com/repos/huggingface/datasets/issues/1935/events | https://github.com/huggingface/datasets/pull/1935 | 814,623,827 | MDExOlB1bGxSZXF1ZXN0NTc4NTgyMzk1 | 1,935 | add CoVoST2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patil-suraj",
"id": 27137566,
"login": "patil-suraj",
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patil-suraj"
} | [] | closed | false | null | [] | null | 1 | "2021-02-23T16:28:16Z" | "2021-02-24T18:09:32Z" | "2021-02-24T18:05:09Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1935.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1935",
"merged_at": "2021-02-24T18:05:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1935.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1935"
} | This PR adds the CoVoST2 dataset for speech translation and ASR.
https://github.com/facebookresearch/covost#covost-2
The dataset requires manual download as the download page requests an email address and the URLs are temporary.
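For reference, loading should look roughly like this once the data has been obtained manually (the config name and local path below are illustrative; check the dataset card for the exact values):
```python
from datasets import load_dataset

# data_dir points at the manually downloaded CoVoST2 / Common Voice archives
ds = load_dataset("covost2", "en_de", data_dir="path/to/manually/downloaded/data")
```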
The dummy data is a bit bigger because of the mp3 files and 36 configs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1935/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1935/timeline | null | null | true |