url stringlengths 58-61 | repository_url stringclasses 1 value | labels_url stringlengths 72-75 | comments_url stringlengths 67-70 | events_url stringlengths 65-68 | html_url stringlengths 46-51 | id int64 599M-2.12B | node_id stringlengths 18-32 | number int64 1-6.65k | title stringlengths 1-290 | user dict | labels listlengths 0-4 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0-4 | milestone dict | comments int64 0-70 | created_at unknown | updated_at unknown | closed_at unknown | author_association stringclasses 3 values | active_lock_reason float64 | draft float64 0-1 ⌀ | pull_request dict | body stringlengths 0-228k ⌀ | reactions dict | timeline_url stringlengths 67-70 | performed_via_github_app float64 | state_reason stringclasses 3 values | is_pull_request bool 2 classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/824/comments | https://api.github.com/repos/huggingface/datasets/issues/824/events | https://github.com/huggingface/datasets/issues/824 | 739,896,526 | MDU6SXNzdWU3Mzk4OTY1MjY= | 824 | Discussion using datasets in offline mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/77193?v=4",
"events_url": "https://api.github.com/users/mandubian/events{/privacy}",
"followers_url": "https://api.github.com/users/mandubian/followers",
"following_url": "https://api.github.com/users/mandubian/following{/other_user}",
"gists_url": "https://api.github.com/users/mandubian/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mandubian",
"id": 77193,
"login": "mandubian",
"node_id": "MDQ6VXNlcjc3MTkz",
"organizations_url": "https://api.github.com/users/mandubian/orgs",
"received_events_url": "https://api.github.com/users/mandubian/received_events",
"repos_url": "https://api.github.com/users/mandubian/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mandubian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mandubian/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mandubian"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | closed | false | null | [] | null | 11 | "2020-11-10T13:10:51Z" | "2023-10-26T09:26:26Z" | "2022-02-15T10:32:36Z" | NONE | null | null | null | `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I'm creating this ticket to discuss a bit and gather what you have in mind, and other proposals.
Here are some points to open the discussion:
- if you want to prepare your code/datasets on your machine (with an internet connection) but run it on another, offline machine (without an internet connection), it won't work as is, even if you have all the files locally on that machine.
- AFAIK, you can make it work if you manually put the python files (csv.py for example) on the offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)` (a sketch follows this list). But it would be much better if you could run the same code without modification when the files are available locally.
- I've also been considering the requirement of downloading Python code and executing it on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable, but downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once, so that you can review it if you want and then be sure you use that version everywhere, and not a version downloaded from the internet.
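A minimal sketch of the workaround from the second point (the path is a placeholder; it assumes `csv.py` was copied from the `datasets` repository onto the offline machine):
```python
from datasets import load_dataset

# Online machine: "csv" resolves to a script fetched from the internet.
# dataset = load_dataset("csv", data_files="data.csv")

# Offline machine: point load_dataset at the local copy of the script instead.
dataset = load_dataset("/path/on/offline/machine/csv.py", data_files="data.csv")
```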
WDYT? (thanks)
| {
"+1": 10,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 10,
"url": "https://api.github.com/repos/huggingface/datasets/issues/824/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/824/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/823/comments | https://api.github.com/repos/huggingface/datasets/issues/823/events | https://github.com/huggingface/datasets/issues/823 | 739,815,763 | MDU6SXNzdWU3Mzk4MTU3NjM= | 823 | how processing in batch works in datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehkarimimahabadi",
"id": 73364383,
"login": "rabeehkarimimahabadi",
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehkarimimahabadi"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 3 | "2020-11-10T11:11:17Z" | "2020-11-10T13:11:10Z" | "2020-11-10T13:11:09Z" | NONE | null | null | null | Hi,
I need to process my dataset before it is passed to the dataloader in batches;
here is my code:
```python
class AbstractTask(ABC):
    task_name: str = NotImplemented
    preprocessor: Callable = NotImplemented
    split_to_data_split: Mapping[str, str] = NotImplemented
    tokenizer: Callable = NotImplemented
    max_source_length: str = NotImplemented
    max_target_length: str = NotImplemented
    # TODO: should not be a task item, but cannot see other ways.
    tpu_num_cores: int = None

    # The arguments set are for all tasks and needs to be kept common.
    def __init__(self, config):
        self.max_source_length = config['max_source_length']
        self.max_target_length = config['max_target_length']
        self.tokenizer = config['tokenizer']
        self.tpu_num_cores = config['tpu_num_cores']

    def _encode(self, batch) -> Dict[str, torch.Tensor]:
        batch_encoding = self.tokenizer.prepare_seq2seq_batch(
            [x["src_texts"] for x in batch],
            tgt_texts=[x["tgt_texts"] for x in batch],
            max_length=self.max_source_length,
            max_target_length=self.max_target_length,
            padding="max_length" if self.tpu_num_cores is not None else "longest",  # TPU hack
            return_tensors="pt"
        )
        return batch_encoding.data

    def data_split(self, split):
        return self.split_to_data_split[split]

    def get_dataset(self, split, n_obs=None):
        split = self.data_split(split)
        if n_obs is not None:
            split = split + "[:{}]".format(n_obs)
        dataset = load_dataset(self.task_name, split=split)
        dataset = dataset.map(self.preprocessor, remove_columns=dataset.column_names)
        dataset = dataset.map(lambda batch: self._encode(batch), batched=True)
        dataset.set_format(type="torch", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'])
        return dataset
```
I call it like `AutoTask.get(task, train_dataset_config).get_dataset(split="train", n_obs=data_args.n_train)`
This gives the following error. I think it's because the data inside `dataset = dataset.map(lambda batch: self._encode(batch), batched=True)` is not processed in batches. Could you tell me how I can process the dataset in batches inside my function? Thanks
File "finetune_multitask_trainer.py", line 192, in main
if training_args.do_train else None
File "finetune_multitask_trainer.py", line 191, in <dictcomp>
split="train", n_obs=data_args.n_train) for task in data_args.task}
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in get_dataset
dataset = dataset.map(lambda batch: self._encode(batch), batched=True)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1236, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1207, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in <lambda>
dataset = dataset.map(lambda batch: self._encode(batch), batched=True)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in _encode
[x["src_texts"] for x in batch],
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in <listcomp>
[x["src_texts"] for x in batch],
TypeError: string indices must be integers
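For reference: with `batched=True`, `map` passes the function a dict of columns (each value a list), not a list of row dicts, so `for x in batch` iterates over the column names (strings) and `x["src_texts"]` raises the `TypeError` above. A sketch of how `_encode` could be adapted under that assumption:
```python
def _encode(self, batch):
    # batch is a dict like {"src_texts": [...], "tgt_texts": [...]}
    batch_encoding = self.tokenizer.prepare_seq2seq_batch(
        batch["src_texts"],  # already a list of strings
        tgt_texts=batch["tgt_texts"],
        max_length=self.max_source_length,
        max_target_length=self.max_target_length,
        padding="max_length" if self.tpu_num_cores is not None else "longest",
        return_tensors="pt"
    )
    return batch_encoding.data
```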
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/823/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/823/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/822 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/822/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/822/comments | https://api.github.com/repos/huggingface/datasets/issues/822/events | https://github.com/huggingface/datasets/issues/822 | 739,579,314 | MDU6SXNzdWU3Mzk1NzkzMTQ= | 822 | datasets freezes | {
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehkarimimahabadi",
"id": 73364383,
"login": "rabeehkarimimahabadi",
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehkarimimahabadi"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 2 | "2020-11-10T05:10:19Z" | "2023-07-20T16:08:14Z" | "2023-07-20T16:08:13Z" | NONE | null | null | null | Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks
dataset1 = load_dataset("squad", split="train[:10]")
dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question'])
dataset2 = load_dataset("imdb", split="train[:10]")
dataset2 = dataset2.set_format(type="torch", columns=["text", "label"])
print(len(dataset1))
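One likely culprit (an assumption, not confirmed in this report): in datasets 1.x, `set_format` works in place and returns `None`, so the reassignments above leave `dataset1` and `dataset2` bound to `None`. A sketch without the reassignment:
```python
from datasets import load_dataset

dataset1 = load_dataset("squad", split="train[:10]")
dataset1.set_format(type="torch", columns=["context", "answers", "question"])
print(len(dataset1))
```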
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/822/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/822/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/821 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/821/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/821/comments | https://api.github.com/repos/huggingface/datasets/issues/821/events | https://github.com/huggingface/datasets/issues/821 | 739,506,859 | MDU6SXNzdWU3Mzk1MDY4NTk= | 821 | `kor_nli` dataset doesn't being loaded properly | {
"avatar_url": "https://avatars.githubusercontent.com/u/30492059?v=4",
"events_url": "https://api.github.com/users/sackoh/events{/privacy}",
"followers_url": "https://api.github.com/users/sackoh/followers",
"following_url": "https://api.github.com/users/sackoh/following{/other_user}",
"gists_url": "https://api.github.com/users/sackoh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sackoh",
"id": 30492059,
"login": "sackoh",
"node_id": "MDQ6VXNlcjMwNDkyMDU5",
"organizations_url": "https://api.github.com/users/sackoh/orgs",
"received_events_url": "https://api.github.com/users/sackoh/received_events",
"repos_url": "https://api.github.com/users/sackoh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sackoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sackoh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sackoh"
} | [] | closed | false | null | [] | null | 0 | "2020-11-10T02:04:12Z" | "2020-11-16T13:59:12Z" | "2020-11-16T13:59:12Z" | NONE | null | null | null | There are two issues with the `kor_nli` dataset:
1. `csv.DictReader` fails to split features by tab
- There should be no `None` values in the label feature, but there are:
```python
kor_nli_train['train'].unique('gold_label')
# ['neutral', 'entailment', 'contradiction', None]
```
- I found the reason why there are `None` values in the label feature with the following code:
```python
from datasets import load_dataset
kor_nli_train = load_dataset('kor_nli', 'multi_nli')
for idx, example in enumerate(kor_nli_train['train']):
    if example['gold_label'] is None:
        print(idx, example)
        break
# 16835 {'gold_label': None, 'sentence1': '<long block of Korean text, mojibake in this dump, containing many tab-separated premise/hypothesis/label triples that were never split into separate rows>', 'sentence2': 'contradiction'}
```
2. (Optional) It would be preferable to rename the features for compatibility with `run_glue.py` in 🤗 Transformers
- The `kor_nli` dataset has the same data structure as `multi_nli` and `xnli`
- Renaming the features and changing the feature type of `gold_label` to `ClassLabel` might be helpful:
```python
def _info(self):
    return datasets.DatasetInfo(
        description=_DESCRIPTION,
        features=datasets.Features(
            {
                "premise": datasets.Value("string"),
                "hypothesis": datasets.Value("string"),
                "label": datasets.features.ClassLabel(names=["entailment", "neutral", "contradiction"]),
            }
        ),
    )
```
If you don't mind, I would like to fix this.
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/821/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/821/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/820/comments | https://api.github.com/repos/huggingface/datasets/issues/820/events | https://github.com/huggingface/datasets/pull/820 | 739,387,617 | MDExOlB1bGxSZXF1ZXN0NTE4MDYwMjQ0 | 820 | Update quail dataset to v1.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/4889636?v=4",
"events_url": "https://api.github.com/users/ngdodd/events{/privacy}",
"followers_url": "https://api.github.com/users/ngdodd/followers",
"following_url": "https://api.github.com/users/ngdodd/following{/other_user}",
"gists_url": "https://api.github.com/users/ngdodd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ngdodd",
"id": 4889636,
"login": "ngdodd",
"node_id": "MDQ6VXNlcjQ4ODk2MzY=",
"organizations_url": "https://api.github.com/users/ngdodd/orgs",
"received_events_url": "https://api.github.com/users/ngdodd/received_events",
"repos_url": "https://api.github.com/users/ngdodd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ngdodd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ngdodd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ngdodd"
} | [] | closed | false | null | [] | null | 0 | "2020-11-09T21:49:26Z" | "2020-11-10T09:06:35Z" | "2020-11-10T09:06:35Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/820.diff",
"html_url": "https://github.com/huggingface/datasets/pull/820",
"merged_at": "2020-11-10T09:06:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/820.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/820"
} | Updated quail to the most recent version, to address the problem originally discussed [here](https://github.com/huggingface/datasets/issues/806). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/820/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/820/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/819/comments | https://api.github.com/repos/huggingface/datasets/issues/819/events | https://github.com/huggingface/datasets/pull/819 | 739,250,624 | MDExOlB1bGxSZXF1ZXN0NTE3OTQ2MjYy | 819 | Make save function use deterministic global vars order | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 2 | "2020-11-09T18:12:03Z" | "2021-11-30T13:34:09Z" | "2020-11-11T15:20:51Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/819.diff",
"html_url": "https://github.com/huggingface/datasets/pull/819",
"merged_at": "2020-11-11T15:20:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/819.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/819"
} | The `dumps` function needs to be deterministic for the caching mechanism.
However, in #816 I noticed that one of dill's methods for recursively collecting the globals of a function may return them in a different order each time it's used. To fix that, I sort the globals by key in the `globs` dictionary.
I had to add a rectified `save_function` to the saving functions registry of the Pickler to make it work.
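A minimal sketch of the sorting idea (`stable_globals` is a hypothetical helper for illustration, not the actual patch):
```python
import dill

def stable_globals(func):
    """Collect the globals a function depends on, in deterministic key order."""
    globs = dill.detect.globalvars(func, recurse=True)  # key order can vary
    return {name: globs[name] for name in sorted(globs)}
```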
This should fix #816 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/819/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/819/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/818/comments | https://api.github.com/repos/huggingface/datasets/issues/818/events | https://github.com/huggingface/datasets/pull/818 | 739,173,861 | MDExOlB1bGxSZXF1ZXN0NTE3ODgzMzk0 | 818 | Fix type hints pickling in python 3.6 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-11-09T16:27:47Z" | "2020-11-10T09:07:03Z" | "2020-11-10T09:07:02Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/818.diff",
"html_url": "https://github.com/huggingface/datasets/pull/818",
"merged_at": "2020-11-10T09:07:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/818.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/818"
} | Type hints can't be properly pickled in python 3.6. This was causing errors in the `run_mlm.py` script from `transformers` with python 3.6.
However, Cloudpickle proposed a [fix](https://github.com/cloudpipe/cloudpickle/pull/318/files) to make it work anyway.
The idea is just to implement the pickling/unpickling of parameterized type hints. There is one detail though: since in python 3.6 we can't use `isinstance` on type hints, we can't use the pickle saving functions registry directly. Therefore we just wrap the `save_global` method of the Pickler.
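A rough sketch of the wrapping idea (the two helper functions are assumptions for illustration, not the actual implementation):
```python
import dill

class Pickler(dill.Pickler):
    def save_global(self, obj, name=None):
        # isinstance() cannot be used on type hints in python 3.6,
        # so the check happens here rather than in the dispatch registry.
        if is_parameterized_type_hint(obj):  # assumed helper
            self.save_reduce(*reduce_type_hint(obj), obj=obj)  # assumed helper
        else:
            dill.Pickler.save_global(self, obj, name=name)
```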
This should fix https://github.com/huggingface/transformers/issues/8212 for python 3.6 and make `run_mlm.py` support python 3.6
cc @sgugger | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/818/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/818/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/817/comments | https://api.github.com/repos/huggingface/datasets/issues/817/events | https://github.com/huggingface/datasets/issues/817 | 739,145,369 | MDU6SXNzdWU3MzkxNDUzNjk= | 817 | Add MRQA dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VictorSanh",
"id": 16107619,
"login": "VictorSanh",
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VictorSanh"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 1 | "2020-11-09T15:52:19Z" | "2020-12-04T15:44:42Z" | "2020-12-04T15:44:41Z" | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** MRQA
- **Description:** Collection of different (subsets of) QA datasets all converted to the same format to evaluate out-of-domain generalization (the datasets come from different domains, distributions, etc.). Some datasets are used for training and others are used for evaluation. This dataset was collected as part of MRQA 2019's shared task
- **Paper:** https://arxiv.org/abs/1910.09753
- **Data:** https://github.com/mrqa/MRQA-Shared-Task-2019
- **Motivation:** Out-of-domain generalization is becoming (has become) a de facto evaluation for NLU systems
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/817/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/817/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/816/comments | https://api.github.com/repos/huggingface/datasets/issues/816/events | https://github.com/huggingface/datasets/issues/816 | 739,102,686 | MDU6SXNzdWU3MzkxMDI2ODY= | 816 | [Caching] Dill globalvars() output order is not deterministic and can cause cache issues. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 1 | "2020-11-09T15:01:20Z" | "2020-11-11T15:20:50Z" | "2020-11-11T15:20:50Z" | MEMBER | null | null | null | Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However the order of the keys in this dict is not deterministic and can cause caching issues.
To fix that, one could register an implementation of dill's `save_function` in the `datasets` pickler that sorts the global keys before dumping a function.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/816/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/816/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/815/comments | https://api.github.com/repos/huggingface/datasets/issues/815/events | https://github.com/huggingface/datasets/issues/815 | 738,842,092 | MDU6SXNzdWU3Mzg4NDIwOTI= | 815 | Is dataset iterative or not? | {
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehkarimimahabadi",
"id": 73364383,
"login": "rabeehkarimimahabadi",
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehkarimimahabadi"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 8 | "2020-11-09T09:11:48Z" | "2020-11-10T10:50:03Z" | "2020-11-10T10:50:03Z" | NONE | null | null | null | Hi
I want to use your library for large-scale training, and I am not sure whether it is implemented as iterative datasets or not.
Could you provide me with an example of how I can use datasets as iterative datasets?
Thanks
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/815/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/815/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/814/comments | https://api.github.com/repos/huggingface/datasets/issues/814/events | https://github.com/huggingface/datasets/issues/814 | 738,500,443 | MDU6SXNzdWU3Mzg1MDA0NDM= | 814 | Joining multiple datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehkarimimahabadi",
"id": 73364383,
"login": "rabeehkarimimahabadi",
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehkarimimahabadi"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 1 | "2020-11-08T16:19:30Z" | "2020-11-08T19:38:48Z" | "2020-11-08T19:38:48Z" | NONE | null | null | null | Hi
I have multiple iterative datasets from your library with different sizes, and I want to join them in a way that each dataset is sampled equally: smaller datasets more often, larger ones less often. Could you tell me how to implement this in PyTorch? Thanks
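One possible approach today is `interleave_datasets` (available in recent versions of the library), which accepts per-dataset sampling probabilities; a sketch:
```python
from datasets import interleave_datasets, load_dataset

d1 = load_dataset("squad", split="train")
d2 = load_dataset("imdb", split="train")

# Each dataset is picked with equal probability at every step regardless
# of its size, so examples from the smaller one are seen more often.
mixed = interleave_datasets([d1, d2], probabilities=[0.5, 0.5], seed=42)
```
| {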
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/814/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/814/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/813/comments | https://api.github.com/repos/huggingface/datasets/issues/813/events | https://github.com/huggingface/datasets/issues/813 | 738,489,852 | MDU6SXNzdWU3Mzg0ODk4NTI= | 813 | How to implement DistributedSampler with datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehkarimimahabadi",
"id": 73364383,
"login": "rabeehkarimimahabadi",
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehkarimimahabadi"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 4 | "2020-11-08T15:27:11Z" | "2022-10-05T12:54:23Z" | "2022-10-05T12:54:23Z" | NONE | null | null | null | Hi,
I am using your datasets to define my dataloaders, and I am training `finetune_trainer.py` from the huggingface repo on them.
I need a DistributedSampler to be able to train the models on TPUs, distributing the load across the TPU cores. Could you tell me how I can implement the distributed sampler when using datasets, given that the datasets are iterative? To give you more context, I have multiple datasets and I need to write a sampler for this case. Thanks.
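A sketch of one way to do this (a `datasets.Dataset` is map-style, supporting `len()` and integer indexing, so PyTorch's standard sampler applies; the core count and rank are assumptions to fill in per process):
```python
from torch.utils.data import DataLoader, DistributedSampler
from datasets import load_dataset

dataset = load_dataset("imdb", split="train")
dataset.set_format(type="torch", columns=["label"])

# One sampler per process, parameterized with that process's rank.
sampler = DistributedSampler(dataset, num_replicas=8, rank=0)  # 8 TPU cores assumed
loader = DataLoader(dataset, batch_size=32, sampler=sampler)
```
| {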
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/813/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/813/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/812/comments | https://api.github.com/repos/huggingface/datasets/issues/812/events | https://github.com/huggingface/datasets/issues/812 | 738,340,217 | MDU6SXNzdWU3MzgzNDAyMTc= | 812 | Too much logging | {
"avatar_url": "https://avatars.githubusercontent.com/u/6183050?v=4",
"events_url": "https://api.github.com/users/dspoka/events{/privacy}",
"followers_url": "https://api.github.com/users/dspoka/followers",
"following_url": "https://api.github.com/users/dspoka/following{/other_user}",
"gists_url": "https://api.github.com/users/dspoka/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dspoka",
"id": 6183050,
"login": "dspoka",
"node_id": "MDQ6VXNlcjYxODMwNTA=",
"organizations_url": "https://api.github.com/users/dspoka/orgs",
"received_events_url": "https://api.github.com/users/dspoka/received_events",
"repos_url": "https://api.github.com/users/dspoka/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dspoka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dspoka/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dspoka"
} | [] | closed | false | null | [] | null | 7 | "2020-11-07T23:56:30Z" | "2021-01-26T14:31:34Z" | "2020-11-16T17:06:42Z" | NONE | null | null | null | I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock
[2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock
using datasets version = 1.1.2
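The messages above come from the third-party `filelock` logger, which the `datasets` verbosity setting does not control; silencing that logger directly is one possible workaround (a sketch):
```python
import logging

logging.getLogger("filelock").setLevel(logging.WARNING)
```
| {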
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/812/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/812/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/811/comments | https://api.github.com/repos/huggingface/datasets/issues/811/events | https://github.com/huggingface/datasets/issues/811 | 738,280,132 | MDU6SXNzdWU3MzgyODAxMzI= | 811 | nlp viewer error | {
"avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4",
"events_url": "https://api.github.com/users/jc-hou/events{/privacy}",
"followers_url": "https://api.github.com/users/jc-hou/followers",
"following_url": "https://api.github.com/users/jc-hou/following{/other_user}",
"gists_url": "https://api.github.com/users/jc-hou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jc-hou",
"id": 30210529,
"login": "jc-hou",
"node_id": "MDQ6VXNlcjMwMjEwNTI5",
"organizations_url": "https://api.github.com/users/jc-hou/orgs",
"received_events_url": "https://api.github.com/users/jc-hou/received_events",
"repos_url": "https://api.github.com/users/jc-hou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jc-hou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jc-hou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jc-hou"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | 3 | "2020-11-07T17:08:58Z" | "2022-02-15T10:51:44Z" | "2022-02-14T15:24:20Z" | NONE | null | null | null | Hello,
when I select amazon_us_reviews in the nlp viewer, it shows an error.
https://huggingface.co/nlp/viewer/?dataset=amazon_us_reviews

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/811/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/811/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/810/comments | https://api.github.com/repos/huggingface/datasets/issues/810/events | https://github.com/huggingface/datasets/pull/810 | 737,878,370 | MDExOlB1bGxSZXF1ZXN0NTE2ODQzMzQ3 | 810 | Fix seqeval metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
} | [] | closed | false | null | [] | null | 0 | "2020-11-06T16:11:43Z" | "2020-11-09T14:04:29Z" | "2020-11-09T14:04:28Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/810.diff",
"html_url": "https://github.com/huggingface/datasets/pull/810",
"merged_at": "2020-11-09T14:04:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/810.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/810"
} | The current seqeval metric returns the following error when computed:
```
~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/78a944d83252b5a16c9a2e49f057f4c6e02f18cc03349257025a8c9aea6524d8/seqeval.py in _compute(self, predictions, references, suffix)
102 scores = {}
103 for type_name, score in report.items():
--> 104 scores[type_name]["precision"] = score["precision"]
105 scores[type_name]["recall"] = score["recall"]
106 scores[type_name]["f1"] = score["f1-score"]
KeyError: 'LOC'
```
This is because the current code basically tries to do:
```
scores = {}
scores["LOC"]["precision"] = some_value
```
which does not work in Python. This PR fixes that while keeping the previous nested structure of results, with the same keys.
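A sketch of the kind of fix this implies (initialize each inner dict before assigning into it; not necessarily the exact patch):
```python
scores = {}
for type_name, score in report.items():
    scores[type_name] = {
        "precision": score["precision"],
        "recall": score["recall"],
        "f1": score["f1-score"],
    }
```
| {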
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/810/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/810/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/809/comments | https://api.github.com/repos/huggingface/datasets/issues/809/events | https://github.com/huggingface/datasets/issues/809 | 737,832,701 | MDU6SXNzdWU3Mzc4MzI3MDE= | 809 | Add Google Taskmaster dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 2 | "2020-11-06T15:10:41Z" | "2021-04-20T13:09:26Z" | "2021-04-20T13:09:26Z" | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** Taskmaster
- **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations)
- **Paper:** https://arxiv.org/abs/1909.05358
- **Data:** https://github.com/google-research-datasets/Taskmaster
- **Motivation:** One of the few annotated datasets of this size for goal-oriented dialogue
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/809/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/809/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/808 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/808/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/808/comments | https://api.github.com/repos/huggingface/datasets/issues/808/events | https://github.com/huggingface/datasets/pull/808 | 737,638,942 | MDExOlB1bGxSZXF1ZXN0NTE2NjQ0NDc0 | 808 | dataset(dgs): initial dataset loading script | {
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY"
} | [] | closed | false | null | [] | null | 2 | "2020-11-06T10:14:43Z" | "2021-03-23T06:18:55Z" | "2021-03-23T06:18:55Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/808.diff",
"html_url": "https://github.com/huggingface/datasets/pull/808",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/808.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/808"
} | When trying to create dummy data I get:
> Dataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data.
I am not sure how to manually create the dummy_data file (what exactly it should contain).
Also note that the library says:
> ImportError: To be able to use this dataset, you need to install the following dependencies['pympi'] using 'pip install pympi' for instance'
when you actually need `pip install pympi-ling`.
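For reference, here is my rough understanding of the expected dummy data layout (an assumption pieced together from other datasets in this repo, not something I've verified):
```
datasets/<dataset_name>/dummy/<config_name>/<version>/dummy_data.zip
└── dummy_data/
    └── one tiny sample file per URL downloaded in _split_generators
```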
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/808/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/808/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/807 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/807/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/807/comments | https://api.github.com/repos/huggingface/datasets/issues/807/events | https://github.com/huggingface/datasets/issues/807 | 737,509,954 | MDU6SXNzdWU3Mzc1MDk5NTQ= | 807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | {
"avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4",
"events_url": "https://api.github.com/users/shexuan/events{/privacy}",
"followers_url": "https://api.github.com/users/shexuan/followers",
"following_url": "https://api.github.com/users/shexuan/following{/other_user}",
"gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shexuan",
"id": 25664170,
"login": "shexuan",
"node_id": "MDQ6VXNlcjI1NjY0MTcw",
"organizations_url": "https://api.github.com/users/shexuan/orgs",
"received_events_url": "https://api.github.com/users/shexuan/received_events",
"repos_url": "https://api.github.com/users/shexuan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shexuan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shexuan"
} | [] | closed | false | null | [] | null | 11 | "2020-11-06T06:33:04Z" | "2021-01-11T01:30:27Z" | "2020-11-14T05:30:34Z" | NONE | null | null | null | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import datasets  # needed for the datasets.__version__ check below
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=False)
print('datasets version: ', datasets.__version__)
print('pytorch version: ', torch.__version__)
print('transformers version: ', transformers.__version__)
# output:
datasets version: 1.1.2
pytorch version: 1.5.0
transformers version: 3.2.0
```
when I load data through `dataset`:
```
dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)
```
Error infos:
```
ConnectionError Traceback (most recent call last)
<ipython-input-17-bbdadb9a0c78> in <module>
----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
588 # Download/copy dataset processing script
589 module_path, hash = prepare_module(
--> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
591 )
592
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)
266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version)
267 try:
--> 268 local_path = cached_path(file_path, download_config=download_config)
269 except FileNotFoundError:
270 if script_version is not None:
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
306 user_agent=download_config.user_agent,
307 local_files_only=download_config.local_files_only,
--> 308 use_etag=download_config.use_etag,
309 )
310 elif os.path.exists(url_or_filename):
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)
473 elif response is not None and response.status_code == 404:
474 raise FileNotFoundError("Couldn't find file at {}".format(url))
--> 475 raise ConnectionError("Couldn't reach {}".format(url))
476
477 # Try a second time
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py
```
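As a side note, a workaround that seems to work for setups like this is to download the loading script once and point `load_dataset` at the local copy (the path below is just an example, not part of the original report):
```
# first: wget https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py
dataset = load_dataset('./csv.py', data_files='./test.csv', delimiter=',')
```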
And I try to connect to the site with requests:
```
import requests
requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py")
```
Similarly Error occurs:
```
---------------------------------------------------------------------------
ConnectionRefusedError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self)
159 conn = connection.create_connection(
--> 160 (self._dns_host, self.port), self.timeout, **extra_kw
161 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
83 if err is not None:
---> 84 raise err
85
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
73 sock.bind(source_address)
---> 74 sock.connect(sa)
75 return sock
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
NewConnectionError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
676 headers=headers,
--> 677 chunked=chunked,
678 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
380 try:
--> 381 self._validate_conn(conn)
382 except (SocketTimeout, BaseSSLError) as e:
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn)
975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
--> 976 conn.connect()
977
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self)
307 # Add certificate verification
--> 308 conn = self._new_conn()
309 hostname = self.host
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self)
171 raise NewConnectionError(
--> 172 self, "Failed to establish a new connection: %s" % e
173 )
NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
448 retries=self.max_retries,
--> 449 timeout=timeout
450 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
724 retries = retries.increment(
--> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
726 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace)
438 if new_retry.is_exhausted():
--> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause))
440
MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
<ipython-input-20-18cc3eb4a049> in <module>
1 import requests
2
----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py")
~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs)
102
103 kwargs.setdefault('allow_redirects', False)
--> 104 return request('head', url, **kwargs)
105
106
~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs)
59 # cases, and look like a memory leak in others.
60 with sessions.Session() as session:
---> 61 return session.request(method=method, url=url, **kwargs)
62
63
~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
528 }
529 send_kwargs.update(settings)
--> 530 resp = self.send(prep, **send_kwargs)
531
532 return resp
~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs)
641
642 # Send the request
--> 643 r = adapter.send(request, **kwargs)
644
645 # Total elapsed time of the request (approximately)
~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
514 raise SSLError(e, request=request)
515
--> 516 raise ConnectionError(e, request=request)
517
518 except ClosedPoolError as e:
ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',))
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/807/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/807/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/806/comments | https://api.github.com/repos/huggingface/datasets/issues/806/events | https://github.com/huggingface/datasets/issues/806 | 737,215,430 | MDU6SXNzdWU3MzcyMTU0MzA= | 806 | Quail dataset urls are out of date | {
"avatar_url": "https://avatars.githubusercontent.com/u/4889636?v=4",
"events_url": "https://api.github.com/users/ngdodd/events{/privacy}",
"followers_url": "https://api.github.com/users/ngdodd/followers",
"following_url": "https://api.github.com/users/ngdodd/following{/other_user}",
"gists_url": "https://api.github.com/users/ngdodd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ngdodd",
"id": 4889636,
"login": "ngdodd",
"node_id": "MDQ6VXNlcjQ4ODk2MzY=",
"organizations_url": "https://api.github.com/users/ngdodd/orgs",
"received_events_url": "https://api.github.com/users/ngdodd/received_events",
"repos_url": "https://api.github.com/users/ngdodd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ngdodd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ngdodd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ngdodd"
} | [] | closed | false | null | [] | null | 3 | "2020-11-05T19:40:19Z" | "2020-11-10T14:02:51Z" | "2020-11-10T14:02:51Z" | CONTRIBUTOR | null | null | null | <h3>Code</h3>
```
from datasets import load_dataset
quail = load_dataset('quail')
```
<h3>Error</h3>
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml
```
As per the [quail v1.3 commit](https://github.com/text-machine-lab/quail/commit/506501cfa34d9ec6c042d31026ba6fea6bcec8ff), it looks like the location and suggested ordering have changed. In [https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58](https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58) the quail v1.2 datasets are still being pointed to, and they don't exist anymore. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/806/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/806/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/805 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/805/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/805/comments | https://api.github.com/repos/huggingface/datasets/issues/805/events | https://github.com/huggingface/datasets/issues/805 | 737,019,360 | MDU6SXNzdWU3MzcwMTkzNjA= | 805 | On loading a metric from datasets, I get the following error | {
"avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4",
"events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}",
"followers_url": "https://api.github.com/users/laibamehnaz/followers",
"following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}",
"gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/laibamehnaz",
"id": 36405283,
"login": "laibamehnaz",
"node_id": "MDQ6VXNlcjM2NDA1Mjgz",
"organizations_url": "https://api.github.com/users/laibamehnaz/orgs",
"received_events_url": "https://api.github.com/users/laibamehnaz/received_events",
"repos_url": "https://api.github.com/users/laibamehnaz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/laibamehnaz"
} | [] | closed | false | null | [] | null | 1 | "2020-11-05T15:14:38Z" | "2022-02-14T15:32:59Z" | "2022-02-14T15:32:59Z" | NONE | null | null | null | `from datasets import load_metric`
`metric = load_metric('bleurt')`
Traceback:
```
210 class _ArrayXDExtensionType(pa.PyExtensionType):
211
212 ndims: int = None
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'
```
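For context — an assumption on my side, not confirmed here — `pa.PyExtensionType` only exists in pyarrow 0.17+, so this error usually points to an outdated pyarrow install:
```
import pyarrow as pa
print(pa.__version__)  # if older than ~0.17, upgrading is a plausible fix:
# pip install --upgrade pyarrow
```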
Any help will be appreciated. Thank you. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/805/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/805/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/804/comments | https://api.github.com/repos/huggingface/datasets/issues/804/events | https://github.com/huggingface/datasets/issues/804 | 736,858,507 | MDU6SXNzdWU3MzY4NTg1MDc= | 804 | Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa') | {
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PaulLerner",
"id": 25532159,
"login": "PaulLerner",
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PaulLerner"
} | [] | closed | false | null | [] | null | 3 | "2020-11-05T11:38:01Z" | "2020-11-09T14:14:59Z" | "2020-11-09T14:14:58Z" | CONTRIBUTOR | null | null | null | # The issue
It's all in the title: the answers appear to be fine on the train and validation sets, but are empty on the test set.
Is there some kind of mapping to apply, like there is for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md)?
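For illustration, the question mapping in that README looks roughly like this (field names and id alignment are assumptions, not verified here):
```py
# Sketch: fill the kilt_tasks inputs from trivia_qa questions by matching ids
triviaqa_map = {q_id: i for i, q_id in enumerate(trivia_qa['train']['question_id'])}
kilt_tasks['train_triviaqa'] = kilt_tasks['train_triviaqa'].map(
    lambda x: {'input': trivia_qa['train'][triviaqa_map[x['id']]]['question']}
)
```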
# How to reproduce
```py
from datasets import load_dataset
kilt_tasks = load_dataset("kilt_tasks")
trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext')
# both in "kilt_tasks"
In [18]: any([output['answer'] for output in kilt_tasks['test_triviaqa']['output']])
Out[18]: False
# and "trivia_qa"
In [13]: all([answer['value'] == '<unk>' for answer in trivia_qa['test']['answer']])
Out[13]: True
# appears to be fine on the train and validation sets.
In [14]: all([answer['value'] == '<unk>' for answer in trivia_qa['train']['answer']])
Out[14]: False
In [15]: all([answer['value'] == '<unk>' for answer in trivia_qa['validation']['answer']])
Out[15]: False
In [16]: any([output['answer'] for output in kilt_tasks['train_triviaqa']['output']])
Out[16]: True
In [17]: any([output['answer'] for output in kilt_tasks['validation_triviaqa']['output']])
Out[17]: True
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/804/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/804/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/803/comments | https://api.github.com/repos/huggingface/datasets/issues/803/events | https://github.com/huggingface/datasets/pull/803 | 736,818,917 | MDExOlB1bGxSZXF1ZXN0NTE1OTY1ODE2 | 803 | fix: typos in tutorial to map KILT and TriviaQA | {
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PaulLerner",
"id": 25532159,
"login": "PaulLerner",
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PaulLerner"
} | [] | closed | false | null | [] | null | 0 | "2020-11-05T10:42:00Z" | "2020-11-10T09:08:07Z" | "2020-11-10T09:08:07Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/803.diff",
"html_url": "https://github.com/huggingface/datasets/pull/803",
"merged_at": "2020-11-10T09:08:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/803.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/803"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/803/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/803/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/802/comments | https://api.github.com/repos/huggingface/datasets/issues/802/events | https://github.com/huggingface/datasets/pull/802 | 736,296,343 | MDExOlB1bGxSZXF1ZXN0NTE1NTM1MDI0 | 802 | Add XGlue | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 1 | "2020-11-04T17:29:54Z" | "2022-04-28T08:15:36Z" | "2020-12-01T15:58:27Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/802.diff",
"html_url": "https://github.com/huggingface/datasets/pull/802",
"merged_at": "2020-12-01T15:58:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/802.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/802"
} | The dataset is ready to merge. An important feature of this dataset is that for each config the train data is in English, while the dev and test data are in multiple languages. Therefore, @lhoestq and I decided offline to give the dataset the following API, *e.g.*:
```python
load_dataset("xglue", "ner") # would give the splits 'train', 'validation.en', 'test.en', 'validation.es', 'test.es', ...
```
=> therefore one can load a single language test via
```python
load_dataset("xglue", "ner", split="test.es")
```
Close #749. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/802/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/802/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/801 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/801/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/801/comments | https://api.github.com/repos/huggingface/datasets/issues/801/events | https://github.com/huggingface/datasets/issues/801 | 735,790,876 | MDU6SXNzdWU3MzU3OTA4NzY= | 801 | How to join two datasets? | {
"avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4",
"events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}",
"followers_url": "https://api.github.com/users/shangw-nvidia/followers",
"following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}",
"gists_url": "https://api.github.com/users/shangw-nvidia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shangw-nvidia",
"id": 66387198,
"login": "shangw-nvidia",
"node_id": "MDQ6VXNlcjY2Mzg3MTk4",
"organizations_url": "https://api.github.com/users/shangw-nvidia/orgs",
"received_events_url": "https://api.github.com/users/shangw-nvidia/received_events",
"repos_url": "https://api.github.com/users/shangw-nvidia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shangw-nvidia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shangw-nvidia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shangw-nvidia"
} | [] | closed | false | null | [] | null | 3 | "2020-11-04T03:53:11Z" | "2020-12-23T14:02:58Z" | "2020-12-23T14:02:58Z" | NONE | null | null | null | Hi,
I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels?
I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en'`, and I couldn't figure out a way, using `.map()`, to create a sentence pair where the second sentence is **not** the sentence that follows the first (i.e., it comes from a different article).
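One direction I'm considering — a minimal sketch that assumes both datasets have the same length and a `text` column — is to build the pairs column-wise:
```python
from datasets import Dataset

# Pair row i of dataset_a with row i of dataset_b (lengths must match)
paired = Dataset.from_dict({
    'sentence_a': dataset_a['text'],
    'sentence_b': dataset_b['text'],
})
```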
Thanks! | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/801/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/801/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/800/comments | https://api.github.com/repos/huggingface/datasets/issues/800/events | https://github.com/huggingface/datasets/pull/800 | 735,772,775 | MDExOlB1bGxSZXF1ZXN0NTE1MTAyMjc3 | 800 | Update loading_metrics.rst | {
"avatar_url": "https://avatars.githubusercontent.com/u/5400513?v=4",
"events_url": "https://api.github.com/users/ayushidalmia/events{/privacy}",
"followers_url": "https://api.github.com/users/ayushidalmia/followers",
"following_url": "https://api.github.com/users/ayushidalmia/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushidalmia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ayushidalmia",
"id": 5400513,
"login": "ayushidalmia",
"node_id": "MDQ6VXNlcjU0MDA1MTM=",
"organizations_url": "https://api.github.com/users/ayushidalmia/orgs",
"received_events_url": "https://api.github.com/users/ayushidalmia/received_events",
"repos_url": "https://api.github.com/users/ayushidalmia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ayushidalmia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushidalmia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ayushidalmia"
} | [] | closed | false | null | [] | null | 0 | "2020-11-04T02:57:11Z" | "2020-11-11T15:28:32Z" | "2020-11-11T15:28:32Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/800.diff",
"html_url": "https://github.com/huggingface/datasets/pull/800",
"merged_at": "2020-11-11T15:28:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/800.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/800"
} | Minor bug | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/800/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/800/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/799/comments | https://api.github.com/repos/huggingface/datasets/issues/799/events | https://github.com/huggingface/datasets/pull/799 | 735,551,165 | MDExOlB1bGxSZXF1ZXN0NTE0OTIzNDMx | 799 | switch amazon reviews class label order | {
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joeddav",
"id": 9353833,
"login": "joeddav",
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"repos_url": "https://api.github.com/users/joeddav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joeddav"
} | [] | closed | false | null | [] | null | 0 | "2020-11-03T18:38:58Z" | "2020-11-03T18:44:14Z" | "2020-11-03T18:44:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/799.diff",
"html_url": "https://github.com/huggingface/datasets/pull/799",
"merged_at": "2020-11-03T18:44:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/799.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/799"
} | Switches the label order to be more intuitive for amazon reviews, #791. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/799/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/799/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/798/comments | https://api.github.com/repos/huggingface/datasets/issues/798/events | https://github.com/huggingface/datasets/issues/798 | 735,518,805 | MDU6SXNzdWU3MzU1MTg4MDU= | 798 | Cannot load TREC dataset: ConnectionError | {
"avatar_url": "https://avatars.githubusercontent.com/u/25740957?v=4",
"events_url": "https://api.github.com/users/kaletap/events{/privacy}",
"followers_url": "https://api.github.com/users/kaletap/followers",
"following_url": "https://api.github.com/users/kaletap/following{/other_user}",
"gists_url": "https://api.github.com/users/kaletap/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kaletap",
"id": 25740957,
"login": "kaletap",
"node_id": "MDQ6VXNlcjI1NzQwOTU3",
"organizations_url": "https://api.github.com/users/kaletap/orgs",
"received_events_url": "https://api.github.com/users/kaletap/received_events",
"repos_url": "https://api.github.com/users/kaletap/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kaletap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaletap/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kaletap"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 9 | "2020-11-03T17:45:22Z" | "2022-02-14T15:34:22Z" | "2022-02-14T15:34:22Z" | NONE | null | null | null | ## Problem
I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.`
* Opening `http://cogcomp.org/Data/QA/QC/train_5500.label` in a browser works, but it opens a different address
* Increasing max_redirects to 100 doesn't help
Also, while debugging I saw that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' first returns <Response [404]>, but that doesn't raise any errors. Not sure if that's relevant.
* datasets.__version__ == '1.1.2'
* requests.__version__ == '2.24.0'
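For what it's worth, the files may simply have moved; here is a quick check against the university mirror (the URL is an assumption on my part, not verified):
```
import requests
r = requests.head("https://cogcomp.seas.upenn.edu/Data/QA/QC/train_5500.label", allow_redirects=True)
print(r.status_code)  # 200 here would suggest pointing trec.py at this mirror
```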
## Error trace
```
>>> import datasets
>>> datasets.__version__
'1.1.2'
>>> dataset = load_dataset("trec", split="train")
Using custom data configuration default
Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators
dl_files = dl_manager.download_and_extract(_URLs)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
return function(data_struct)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
```
I would appreciate some suggestions here. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/798/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/798/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/797/comments | https://api.github.com/repos/huggingface/datasets/issues/797/events | https://github.com/huggingface/datasets/issues/797 | 735,420,332 | MDU6SXNzdWU3MzU0MjAzMzI= | 797 | Token classification labels are strings and we don't have the list of labels | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] | closed | false | null | [] | null | 4 | "2020-11-03T15:33:30Z" | "2022-02-14T15:41:54Z" | "2022-02-14T15:41:53Z" | CONTRIBUTOR | null | null | null | Not sure if this is an issue we want to fix or not; putting it here so it's not forgotten. Right now, in token classification datasets, the labels for NER, POS and the like are typed as `Sequence` of `strings`, which is wrong in my opinion. These should be `Sequence` of `ClassLabel` or some type that gives easy access to the underlying labels.
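The full pass mentioned in the next paragraph looks roughly like this (the `ner_tags` column name is just for illustration):
```python
# Current workaround sketch: scan the whole dataset to collect the label vocabulary
labels = sorted({tag for example in dataset for tag in example['ner_tags']})
label2id = {label: i for i, label in enumerate(labels)}
```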
The main problem for preprocessing those datasets is that the list of possible labels is not stored inside the `Dataset` object, which makes converting the labels to IDs quite difficult: you either have to know the list of labels in advance or run a full pass through the dataset to collect them (the `unique` method being useless with the type `Sequence[str]`). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/797/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/797/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/795 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/795/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/795/comments | https://api.github.com/repos/huggingface/datasets/issues/795/events | https://github.com/huggingface/datasets/issues/795 | 735,198,265 | MDU6SXNzdWU3MzUxOTgyNjU= | 795 | Descriptions of raw and processed versions of wikitext are inverted | {
"avatar_url": "https://avatars.githubusercontent.com/u/16835358?v=4",
"events_url": "https://api.github.com/users/fraboniface/events{/privacy}",
"followers_url": "https://api.github.com/users/fraboniface/followers",
"following_url": "https://api.github.com/users/fraboniface/following{/other_user}",
"gists_url": "https://api.github.com/users/fraboniface/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fraboniface",
"id": 16835358,
"login": "fraboniface",
"node_id": "MDQ6VXNlcjE2ODM1MzU4",
"organizations_url": "https://api.github.com/users/fraboniface/orgs",
"received_events_url": "https://api.github.com/users/fraboniface/received_events",
"repos_url": "https://api.github.com/users/fraboniface/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fraboniface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fraboniface/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fraboniface"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 2 | "2020-11-03T10:24:51Z" | "2022-02-14T15:46:21Z" | "2022-02-14T15:46:21Z" | NONE | null | null | null | Nothing of importance, but it looks like the descriptions of wikitext-n-v1 and wikitext-n-raw-v1 are inverted for both n=2 and n=103. I just verified by loading them and the `<unk>` tokens are present in the non-raw versions, which confirms that it's a mere inversion of the descriptions and not of the datasets themselves.
Also it would be nice if those descriptions appeared in the dataset explorer.
https://github.com/huggingface/datasets/blob/87bd0864845ea0a1dd7167918dc5f341bf807bd3/datasets/wikitext/wikitext.py#L52 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/795/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/795/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/794 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/794/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/794/comments | https://api.github.com/repos/huggingface/datasets/issues/794/events | https://github.com/huggingface/datasets/issues/794 | 735,158,725 | MDU6SXNzdWU3MzUxNTg3MjU= | 794 | self.options cannot be converted to a Python object for pickling | {
"avatar_url": "https://avatars.githubusercontent.com/u/9635713?v=4",
"events_url": "https://api.github.com/users/hzqjyyx/events{/privacy}",
"followers_url": "https://api.github.com/users/hzqjyyx/followers",
"following_url": "https://api.github.com/users/hzqjyyx/following{/other_user}",
"gists_url": "https://api.github.com/users/hzqjyyx/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hzqjyyx",
"id": 9635713,
"login": "hzqjyyx",
"node_id": "MDQ6VXNlcjk2MzU3MTM=",
"organizations_url": "https://api.github.com/users/hzqjyyx/orgs",
"received_events_url": "https://api.github.com/users/hzqjyyx/received_events",
"repos_url": "https://api.github.com/users/hzqjyyx/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hzqjyyx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hzqjyyx/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hzqjyyx"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | 1 | "2020-11-03T09:27:34Z" | "2020-11-19T17:35:38Z" | "2020-11-19T17:35:38Z" | NONE | null | null | null | Hi,
Currently I am trying to load a CSV file with customized read options, and the latest master seems broken if we pass a `ReadOptions` object.
Here is a code snippet:
```python
from datasets import load_dataset
from pyarrow.csv import ReadOptions
load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024))
```
The error is `self.options cannot be converted to a Python object for pickling`.
Would you mind taking a look? Thanks!
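The failure can be reproduced in isolation: pyarrow's `ReadOptions` is not picklable, and `datasets` pickles the config kwargs to fingerprint the build.
```python
import pickle
from pyarrow.csv import ReadOptions

# Raises the same TypeError, since ReadOptions defines no pickling support
pickle.dumps(ReadOptions(block_size=16 * 1024 * 1024))
```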
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-ab83fec2ded4> in <module>
----> 1 load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024))
/tmp/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
602 hash=hash,
603 features=features,
--> 604 **config_kwargs,
605 )
606
/tmp/datasets/src/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)
162 name,
163 custom_features=features,
--> 164 **config_kwargs,
165 )
166
/tmp/datasets/src/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
281 )
282 else:
--> 283 suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
284
285 if builder_config.data_files is not None:
/tmp/datasets/src/datasets/fingerprint.py in hash(cls, value)
51 return cls.dispatch[type(value)](cls, value)
52 else:
---> 53 return cls.hash_default(value)
54
55 def update(self, value):
/tmp/datasets/src/datasets/fingerprint.py in hash_default(cls, value)
44 @classmethod
45 def hash_default(cls, value):
---> 46 return cls.hash_bytes(dumps(value))
47
48 @classmethod
/tmp/datasets/src/datasets/utils/py_utils.py in dumps(obj)
365 file = StringIO()
366 with _no_cache_fields(obj):
--> 367 dump(obj, file)
368 return file.getvalue()
369
/tmp/datasets/src/datasets/utils/py_utils.py in dump(obj, file)
337 def dump(obj, file):
338 """pickle an object to a file"""
--> 339 Pickler(file, recurse=True).dump(obj)
340 return
341
~/.local/lib/python3.6/site-packages/dill/_dill.py in dump(self, obj)
444 raise PicklingError(msg)
445 else:
--> 446 StockPickler.dump(self, obj)
447 stack.clear() # clear record of 'recursion-sensitive' pickled objects
448 return
/usr/lib/python3.6/pickle.py in dump(self, obj)
407 if self.proto >= 4:
408 self.framer.start_framing()
--> 409 self.save(obj)
410 self.write(STOP)
411 self.framer.end_framing()
/usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)
474 f = self.dispatch.get(t)
475 if f is not None:
--> 476 f(self, obj) # Call unbound method with explicit self
477 return
478
~/.local/lib/python3.6/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/usr/lib/python3.6/pickle.py in save_dict(self, obj)
819
820 self.memoize(obj)
--> 821 self._batch_setitems(obj.items())
822
823 dispatch[dict] = save_dict
/usr/lib/python3.6/pickle.py in _batch_setitems(self, items)
850 k, v = tmp[0]
851 save(k)
--> 852 save(v)
853 write(SETITEM)
854 # else tmp is empty, and we're done
/usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)
494 reduce = getattr(obj, "__reduce_ex__", None)
495 if reduce is not None:
--> 496 rv = reduce(self.proto)
497 else:
498 reduce = getattr(obj, "__reduce__", None)
~/.local/lib/python3.6/site-packages/pyarrow/_csv.cpython-36m-x86_64-linux-gnu.so in pyarrow._csv.ReadOptions.__reduce_cython__()
TypeError: self.options cannot be converted to a Python object for pickling
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/794/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/794/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/793 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/793/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/793/comments | https://api.github.com/repos/huggingface/datasets/issues/793/events | https://github.com/huggingface/datasets/pull/793 | 735,105,907 | MDExOlB1bGxSZXF1ZXN0NTE0NTU2NzY5 | 793 | [Datasets] fix discofuse links | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 0 | "2020-11-03T08:03:45Z" | "2020-11-03T08:16:41Z" | "2020-11-03T08:16:40Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/793.diff",
"html_url": "https://github.com/huggingface/datasets/pull/793",
"merged_at": "2020-11-03T08:16:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/793.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/793"
} | The discofuse links were changed: https://github.com/google-research-datasets/discofuse/commit/d27641016eb5b3eb2af03c7415cfbb2cbebe8558.
The old links are broken.
I changed the links and created a new dataset_infos.json.
Pinging @thomwolf @lhoestq for notification. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/793/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/793/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/792 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/792/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/792/comments | https://api.github.com/repos/huggingface/datasets/issues/792/events | https://github.com/huggingface/datasets/issues/792 | 734,693,652 | MDU6SXNzdWU3MzQ2OTM2NTI= | 792 | KILT dataset: empty string in triviaqa input field | {
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PaulLerner",
"id": 25532159,
"login": "PaulLerner",
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PaulLerner"
} | [] | closed | false | null | [] | null | 1 | "2020-11-02T17:33:54Z" | "2020-11-05T10:34:59Z" | "2020-11-05T10:34:59Z" | CONTRIBUTOR | null | null | null | # What happened
Both the train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty strings in their input field (unlike the natural questions dataset, part of the same benchmark).
# Versions
KILT version is `1.0.0`
`datasets` version is `1.1.2`
[more here](https://gist.github.com/PaulLerner/3768c8d25f723edbac20d99b6a4056c1)
# How to reproduce
```py
In [1]: from datasets import load_dataset
In [4]: dataset = load_dataset("kilt_tasks")
# everything works fine, removed output for a better readibility
Dataset kilt_tasks downloaded and prepared to /people/lerner/.cache/huggingface/datasets/kilt_tasks/all_tasks/1.0.0/821c4295a2c35db2847585918d9c47d7f028f1a26b78825d8e77cd3aeb2621a1. Subsequent calls will reuse this data.
# empty string in triviaqa input field
In [36]: dataset['train_triviaqa'][0]
Out[36]:
{'id': 'dpql_5197',
'input': '',
'meta': {'left_context': '',
'mention': '',
'obj_surface': {'text': []},
'partial_evidence': {'end_paragraph_id': [],
'meta': [],
'section': [],
'start_paragraph_id': [],
'title': [],
'wikipedia_id': []},
'right_context': '',
'sub_surface': {'text': []},
'subj_aliases': {'text': []},
'template_questions': {'text': []}},
'output': {'answer': ['five £', '5 £', '£5', 'five £'],
'meta': [],
'provenance': [{'bleu_score': [1.0],
'end_character': [248],
'end_paragraph_id': [30],
'meta': [],
'section': ['Section::::Question of legal tender.\n'],
'start_character': [246],
'start_paragraph_id': [30],
'title': ['Banknotes of the pound sterling'],
'wikipedia_id': ['270680']}]}}
In [35]: dataset['train_triviaqa']['input'][:10]
Out[35]: ['', '', '', '', '', '', '', '', '', '']
# same with test set
In [37]: dataset['test_triviaqa']['input'][:10]
Out[37]: ['', '', '', '', '', '', '', '', '', '']
# works fine with natural questions
In [34]: dataset['train_nq']['input'][:10]
Out[34]:
['how i.met your mother who is the mother',
'who had the most wins in the nfl',
'who played mantis guardians of the galaxy 2',
'what channel is the premier league on in france',
"god's not dead a light in the darkness release date",
'who is the current president of un general assembly',
'when do the eclipse supposed to take place',
'what is the name of the sea surrounding dubai',
'who holds the nba record for most points in a career',
'when did the new maze runner movie come out']
```
Stay safe :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/792/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/792/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/791 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/791/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/791/comments | https://api.github.com/repos/huggingface/datasets/issues/791/events | https://github.com/huggingface/datasets/pull/791 | 734,656,518 | MDExOlB1bGxSZXF1ZXN0NTE0MTg0MzU5 | 791 | add amazon reviews | {
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/joeddav",
"id": 9353833,
"login": "joeddav",
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"repos_url": "https://api.github.com/users/joeddav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/joeddav"
} | [] | closed | false | null | [] | null | 3 | "2020-11-02T16:42:57Z" | "2020-11-03T20:15:06Z" | "2020-11-03T16:43:57Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/791.diff",
"html_url": "https://github.com/huggingface/datasets/pull/791",
"merged_at": "2020-11-03T16:43:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/791.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/791"
} | Adds the Amazon US Reviews dataset as requested in #353. Converted from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/amazon_us_reviews). cc @clmnt @sshleifer | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/791/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/791/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/790/comments | https://api.github.com/repos/huggingface/datasets/issues/790/events | https://github.com/huggingface/datasets/issues/790 | 734,470,197 | MDU6SXNzdWU3MzQ0NzAxOTc= | 790 | Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist | {
"avatar_url": "https://avatars.githubusercontent.com/u/59632?v=4",
"events_url": "https://api.github.com/users/shawwn/events{/privacy}",
"followers_url": "https://api.github.com/users/shawwn/followers",
"following_url": "https://api.github.com/users/shawwn/following{/other_user}",
"gists_url": "https://api.github.com/users/shawwn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shawwn",
"id": 59632,
"login": "shawwn",
"node_id": "MDQ6VXNlcjU5NjMy",
"organizations_url": "https://api.github.com/users/shawwn/orgs",
"received_events_url": "https://api.github.com/users/shawwn/received_events",
"repos_url": "https://api.github.com/users/shawwn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shawwn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shawwn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shawwn"
} | [] | closed | false | null | [] | null | 2 | "2020-11-02T12:36:35Z" | "2020-11-10T14:05:02Z" | "2020-11-10T14:05:02Z" | NONE | null | null | null | I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error.
```sh
git clone https://github.com/huggingface/datasets
cd datasets
virtualenv venv -p python3 --system-site-packages
source venv/bin/activate
pip install -e ".[dev]"
```


Python 3.7.7
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/790/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/790/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/789 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/789/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/789/comments | https://api.github.com/repos/huggingface/datasets/issues/789/events | https://github.com/huggingface/datasets/pull/789 | 734,237,839 | MDExOlB1bGxSZXF1ZXN0NTEzODM1MzE0 | 789 | dataset(ncslgr): add initial loading script | {
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY"
} | [] | closed | false | null | [] | null | 4 | "2020-11-02T06:50:10Z" | "2020-12-01T13:41:37Z" | "2020-12-01T13:41:36Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/789.diff",
"html_url": "https://github.com/huggingface/datasets/pull/789",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/789.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/789"
} | It's a small dataset, but it's heavily annotated.
https://www.bu.edu/asllrp/ncslgr.html

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/789/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/789/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/788/comments | https://api.github.com/repos/huggingface/datasets/issues/788/events | https://github.com/huggingface/datasets/issues/788 | 734,136,124 | MDU6SXNzdWU3MzQxMzYxMjQ= | 788 | failed to reuse cache | {
"avatar_url": "https://avatars.githubusercontent.com/u/31768052?v=4",
"events_url": "https://api.github.com/users/WangHexie/events{/privacy}",
"followers_url": "https://api.github.com/users/WangHexie/followers",
"following_url": "https://api.github.com/users/WangHexie/following{/other_user}",
"gists_url": "https://api.github.com/users/WangHexie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WangHexie",
"id": 31768052,
"login": "WangHexie",
"node_id": "MDQ6VXNlcjMxNzY4MDUy",
"organizations_url": "https://api.github.com/users/WangHexie/orgs",
"received_events_url": "https://api.github.com/users/WangHexie/received_events",
"repos_url": "https://api.github.com/users/WangHexie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WangHexie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WangHexie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WangHexie"
} | [] | closed | false | null | [] | null | 0 | "2020-11-02T02:42:36Z" | "2020-11-02T12:26:15Z" | "2020-11-02T12:26:15Z" | NONE | null | null | null | I packed `load_dataset` in a method of a class and cached the data in a directory. But when I import the class and use the function, the data still has to be downloaded again. The log line (Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to ******) printed to the terminal shows that the path correctly points to the cache directory, but the files are downloaded again anyway. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/788/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/788/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/787/comments | https://api.github.com/repos/huggingface/datasets/issues/787/events | https://github.com/huggingface/datasets/pull/787 | 734,070,162 | MDExOlB1bGxSZXF1ZXN0NTEzNjk5MTQz | 787 | Adding nli_tr dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/2246791?v=4",
"events_url": "https://api.github.com/users/e-budur/events{/privacy}",
"followers_url": "https://api.github.com/users/e-budur/followers",
"following_url": "https://api.github.com/users/e-budur/following{/other_user}",
"gists_url": "https://api.github.com/users/e-budur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/e-budur",
"id": 2246791,
"login": "e-budur",
"node_id": "MDQ6VXNlcjIyNDY3OTE=",
"organizations_url": "https://api.github.com/users/e-budur/orgs",
"received_events_url": "https://api.github.com/users/e-budur/received_events",
"repos_url": "https://api.github.com/users/e-budur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/e-budur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-budur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/e-budur"
} | [] | closed | false | null | [] | null | 1 | "2020-11-01T21:49:44Z" | "2020-11-12T19:06:02Z" | "2020-11-12T19:06:02Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/787.diff",
"html_url": "https://github.com/huggingface/datasets/pull/787",
"merged_at": "2020-11-12T19:06:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/787.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/787"
} | Hello,
In this pull request, we have implemented the necessary interface to add our recent dataset [NLI-TR](https://github.com/boun-tabi/NLI-TR). The datasets will be presented in a full paper at EMNLP 2020 this month ([arXiv link](https://arxiv.org/pdf/2004.14963.pdf)).
The dataset is a neural machine translation of the SNLI and MultiNLI datasets into Turkish, so we followed a format similar to that of the original datasets hosted in the HuggingFace datasets hub.
Our dataset is designed to be accessed as follows, following the interface of the GLUE dataset, which provides multiple datasets through a single interface over the HuggingFace datasets hub.
```python
from datasets import load_dataset
multinli_tr = load_dataset("nli_tr", "multinli_tr")
snli_tr = load_dataset("nli_tr", "snli_tr")
```
Thanks for your help in reviewing our pull request. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/787/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/787/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/786 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/786/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/786/comments | https://api.github.com/repos/huggingface/datasets/issues/786/events | https://github.com/huggingface/datasets/issues/786 | 733,761,717 | MDU6SXNzdWU3MzM3NjE3MTc= | 786 | feat(dataset): multiprocessing _generate_examples | {
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY"
} | [] | closed | false | null | [] | null | 2 | "2020-10-31T16:52:16Z" | "2023-01-16T10:59:13Z" | "2023-01-16T10:59:13Z" | CONTRIBUTOR | null | null | null | Forking this out of #741; this issue is only about multiprocessing.
I'd love it if there were a dataset configuration parameter `workers`, where when it is `1` everything behaves as it does right now, and when it is `>1`, `_generate_examples` could also receive a `pool` and return an iterable built with it.
In my use case, instead of:
```python
for datum in data:
yield self.load_datum(datum)
```
do:
```python
return pool.map(self.load_datum, data)
```
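For concreteness, here is a minimal sketch of how a pool-backed generator could look inside a loading script. This is only an illustration of the proposal: the `num_workers` parameter and the pool-aware code path are assumptions, not an existing `datasets` API.
```python
from multiprocessing import Pool

# Hypothetical sketch of the proposal; `num_workers` is an assumed
# configuration parameter, not something the library supports today.
# (Also assumes self.load_datum and its inputs are picklable.)
def _generate_examples(self, data, num_workers=1):
    if num_workers <= 1:
        # Current behavior: load rows one by one in a single process.
        for idx, datum in enumerate(data):
            yield idx, self.load_datum(datum)
    else:
        with Pool(num_workers) as pool:
            # imap preserves order and keeps memory bounded while the
            # workers load rows in parallel.
            for idx, example in enumerate(pool.imap(self.load_datum, data)):
                yield idx, example
```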
The dataset in question, as an example, has **only** 7000 rows, yet each row takes 10 seconds to load on average, so loading the entire dataset takes almost 20 hours.
If this were a larger dataset (and many such datasets exist), it would take multiple days to complete.
Using multiprocessing with, for example, 40 cores could speed this up dramatically; for this dataset, it would hopefully load fully in under an hour. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/786/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/786/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/785 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/785/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/785/comments | https://api.github.com/repos/huggingface/datasets/issues/785/events | https://github.com/huggingface/datasets/pull/785 | 733,719,419 | MDExOlB1bGxSZXF1ZXN0NTEzNDMyNTM1 | 785 | feat(aslg_pc12): add dev and test data splits | {
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY"
} | [] | closed | false | null | [] | null | 2 | "2020-10-31T13:25:38Z" | "2020-11-10T15:29:30Z" | "2020-11-10T15:29:30Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/785.diff",
"html_url": "https://github.com/huggingface/datasets/pull/785",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/785.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/785"
} | For reproducibility's sake, it's best if there are defined dev and test splits.
The original paper's author did not define splits, neither for the entire dataset nor for the sample loaded via this library, so I decided to define (see the sketch below):
- 5/7th for train
- 1/7th for dev
- 1/7th for test
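As an illustration only (not the code in this PR), one deterministic way to realize such a 5/7-1/7-1/7 partition is to bucket examples by their index modulo 7:
```python
# Sketch of a deterministic 5/7 train, 1/7 dev, 1/7 test partition by
# example index; the PR's actual split logic may differ.
def assign_split(index):
    bucket = index % 7
    if bucket < 5:
        return "train"       # buckets 0-4: 5/7 of the data
    if bucket == 5:
        return "validation"  # bucket 5: 1/7
    return "test"            # bucket 6: 1/7

print([assign_split(i) for i in range(7)])
# ['train', 'train', 'train', 'train', 'train', 'validation', 'test']
```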
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/785/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/785/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/784 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/784/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/784/comments | https://api.github.com/repos/huggingface/datasets/issues/784/events | https://github.com/huggingface/datasets/issues/784 | 733,700,463 | MDU6SXNzdWU3MzM3MDA0NjM= | 784 | Issue with downloading Wikipedia data for low resource language | {
"avatar_url": "https://avatars.githubusercontent.com/u/2826602?v=4",
"events_url": "https://api.github.com/users/SamuelCahyawijaya/events{/privacy}",
"followers_url": "https://api.github.com/users/SamuelCahyawijaya/followers",
"following_url": "https://api.github.com/users/SamuelCahyawijaya/following{/other_user}",
"gists_url": "https://api.github.com/users/SamuelCahyawijaya/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SamuelCahyawijaya",
"id": 2826602,
"login": "SamuelCahyawijaya",
"node_id": "MDQ6VXNlcjI4MjY2MDI=",
"organizations_url": "https://api.github.com/users/SamuelCahyawijaya/orgs",
"received_events_url": "https://api.github.com/users/SamuelCahyawijaya/received_events",
"repos_url": "https://api.github.com/users/SamuelCahyawijaya/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SamuelCahyawijaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamuelCahyawijaya/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SamuelCahyawijaya"
} | [] | closed | false | null | [] | null | 5 | "2020-10-31T11:40:00Z" | "2022-02-09T17:50:16Z" | "2020-11-25T15:42:13Z" | NONE | null | null | null | Hi, I tried to download the Sundanese and Javanese Wikipedia data with the following snippet:
```python
jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner')
su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner')
```
And I get the following error for these two languages:
Javanese
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json
```
Sundanese
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json
```
I found from https://github.com/huggingface/datasets/issues/577#issuecomment-688435085 that small languages are directly downloaded and parsed from the Wikipedia dump site, but both `https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json` and `https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json` are no longer valid.
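One possible (untested) workaround, since these languages are parsed directly from the dump site, might be to point the loader at a dump date that still exists on the server. Both the kwargs and the date below are assumptions, so please check https://dumps.wikimedia.org for snapshots that are actually available:
```python
import datasets

# Untested sketch: assumes the wikipedia script accepts custom
# `language`/`date` kwargs and that this snapshot still exists on
# https://dumps.wikimedia.org (the date here is only an assumption).
jv_wiki = datasets.load_dataset(
    "wikipedia",
    language="jv",
    date="20201101",
    beam_runner="DirectRunner",
)
```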
Any suggestions on how to handle this issue? Thanks! | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/784/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/784/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/783/comments | https://api.github.com/repos/huggingface/datasets/issues/783/events | https://github.com/huggingface/datasets/pull/783 | 733,536,254 | MDExOlB1bGxSZXF1ZXN0NTEzMzAwODUz | 783 | updated links to v1.3 of quail, fixed the description | {
"avatar_url": "https://avatars.githubusercontent.com/u/1450322?v=4",
"events_url": "https://api.github.com/users/annargrs/events{/privacy}",
"followers_url": "https://api.github.com/users/annargrs/followers",
"following_url": "https://api.github.com/users/annargrs/following{/other_user}",
"gists_url": "https://api.github.com/users/annargrs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/annargrs",
"id": 1450322,
"login": "annargrs",
"node_id": "MDQ6VXNlcjE0NTAzMjI=",
"organizations_url": "https://api.github.com/users/annargrs/orgs",
"received_events_url": "https://api.github.com/users/annargrs/received_events",
"repos_url": "https://api.github.com/users/annargrs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/annargrs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/annargrs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/annargrs"
} | [] | closed | false | null | [] | null | 1 | "2020-10-30T21:47:33Z" | "2020-11-29T23:05:19Z" | "2020-11-29T23:05:18Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/783.diff",
"html_url": "https://github.com/huggingface/datasets/pull/783",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/783.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/783"
} | updated links to v1.3 of quail, fixed the description | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/783/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/783/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/782 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/782/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/782/comments | https://api.github.com/repos/huggingface/datasets/issues/782/events | https://github.com/huggingface/datasets/pull/782 | 733,316,463 | MDExOlB1bGxSZXF1ZXN0NTEzMTE2MTM0 | 782 | Fix metric deletion when attributes are missing | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-10-30T16:16:10Z" | "2020-10-30T16:47:53Z" | "2020-10-30T16:47:52Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/782.diff",
"html_url": "https://github.com/huggingface/datasets/pull/782",
"merged_at": "2020-10-30T16:47:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/782.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/782"
} | When you call `del` on a metric, we want to make sure that deleting the arrow attributes doesn't fail if they are already gone.
I just added `if hasattr(...)` checks to make sure it doesn't crash. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/782/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/782/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/781/comments | https://api.github.com/repos/huggingface/datasets/issues/781/events | https://github.com/huggingface/datasets/pull/781 | 733,168,609 | MDExOlB1bGxSZXF1ZXN0NTEyOTkyMzQw | 781 | Add XNLI train set | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 5 | "2020-10-30T13:21:53Z" | "2022-06-09T23:26:46Z" | "2020-11-09T18:22:49Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/781.diff",
"html_url": "https://github.com/huggingface/datasets/pull/781",
"merged_at": "2020-11-09T18:22:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/781.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/781"
} | I added the train set that was built using the translated MNLI.
Now you can load the dataset specifying one language:
```python
from datasets import load_dataset
xnli_en = load_dataset("xnli", "en")
print(xnli_en["train"][0])
# {'hypothesis': 'Product and geography are what make cream skimming work .', 'label': 1, 'premise': 'Conceptually cream skimming has two basic dimensions - product and geography .'}
print(xnli_en["test"][0])
# {'hypothesis': 'I havent spoken to him again.', 'label': 2, 'premise': "Well, I wasn't even thinking about that, but I was so frustrated, and, I ended up talking to him again."}
```
Cc @sgugger | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/781/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/781/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/780/comments | https://api.github.com/repos/huggingface/datasets/issues/780/events | https://github.com/huggingface/datasets/pull/780 | 732,738,647 | MDExOlB1bGxSZXF1ZXN0NTEyNjM0MzI0 | 780 | Add ASNQ dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/2992022?v=4",
"events_url": "https://api.github.com/users/mkserge/events{/privacy}",
"followers_url": "https://api.github.com/users/mkserge/followers",
"following_url": "https://api.github.com/users/mkserge/following{/other_user}",
"gists_url": "https://api.github.com/users/mkserge/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mkserge",
"id": 2992022,
"login": "mkserge",
"node_id": "MDQ6VXNlcjI5OTIwMjI=",
"organizations_url": "https://api.github.com/users/mkserge/orgs",
"received_events_url": "https://api.github.com/users/mkserge/received_events",
"repos_url": "https://api.github.com/users/mkserge/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mkserge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mkserge/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mkserge"
} | [] | closed | false | null | [] | null | 4 | "2020-10-29T23:31:56Z" | "2020-11-10T09:26:23Z" | "2020-11-10T09:26:23Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/780.diff",
"html_url": "https://github.com/huggingface/datasets/pull/780",
"merged_at": "2020-11-10T09:26:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/780.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/780"
} | This pull request adds the ASNQ dataset. It is a dataset for answer sentence selection derived from the Google Natural Questions (NQ) dataset (Kwiatkowski et al. 2019). The dataset details can be found in the paper at https://arxiv.org/abs/1911.04118
The dataset is authored by Siddhant Garg, Thuy Vu and Alessandro Moschitti.
_Please note that I have no affiliation with the authors._
Repo: https://github.com/alexa/wqa_tanda
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/780/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/780/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/779/comments | https://api.github.com/repos/huggingface/datasets/issues/779/events | https://github.com/huggingface/datasets/pull/779 | 732,514,887 | MDExOlB1bGxSZXF1ZXN0NTEyNDQzMjY0 | 779 | Feature/fidelity metrics from emnlp2020 evaluating and characterizing human rationales | {
"avatar_url": "https://avatars.githubusercontent.com/u/11327413?v=4",
"events_url": "https://api.github.com/users/rathoreanirudh/events{/privacy}",
"followers_url": "https://api.github.com/users/rathoreanirudh/followers",
"following_url": "https://api.github.com/users/rathoreanirudh/following{/other_user}",
"gists_url": "https://api.github.com/users/rathoreanirudh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rathoreanirudh",
"id": 11327413,
"login": "rathoreanirudh",
"node_id": "MDQ6VXNlcjExMzI3NDEz",
"organizations_url": "https://api.github.com/users/rathoreanirudh/orgs",
"received_events_url": "https://api.github.com/users/rathoreanirudh/received_events",
"repos_url": "https://api.github.com/users/rathoreanirudh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rathoreanirudh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rathoreanirudh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rathoreanirudh"
} | [
{
"color": "E3165C",
"default": false,
"description": "",
"id": 4190228726,
"name": "transfer-to-evaluate",
"node_id": "LA_kwDODunzps75wdD2",
"url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate"
}
] | closed | false | null | [] | null | 5 | "2020-10-29T17:31:14Z" | "2023-07-11T09:36:30Z" | "2023-07-11T09:36:30Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/779.diff",
"html_url": "https://github.com/huggingface/datasets/pull/779",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/779.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/779"
} | This metric computes fidelity (Yu et al. 2019, DeYoung et al. 2019) and normalized fidelity (Carton et al. 2020). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/779/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/779/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/778 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/778/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/778/comments | https://api.github.com/repos/huggingface/datasets/issues/778/events | https://github.com/huggingface/datasets/issues/778 | 732,449,652 | MDU6SXNzdWU3MzI0NDk2NTI= | 778 | Unexpected behavior when loading cached csv file? | {
"avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4",
"events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}",
"followers_url": "https://api.github.com/users/dcfidalgo/followers",
"following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}",
"gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dcfidalgo",
"id": 15979778,
"login": "dcfidalgo",
"node_id": "MDQ6VXNlcjE1OTc5Nzc4",
"organizations_url": "https://api.github.com/users/dcfidalgo/orgs",
"received_events_url": "https://api.github.com/users/dcfidalgo/received_events",
"repos_url": "https://api.github.com/users/dcfidalgo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dcfidalgo"
} | [] | closed | false | null | [] | null | 2 | "2020-10-29T16:06:10Z" | "2020-10-29T21:21:27Z" | "2020-10-29T21:21:27Z" | CONTRIBUTOR | null | null | null | I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again specifying the right delimiter, it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download_mode="force_redownload"`, but I think it would be nice if information such as which `delimiter` or `column_names` were used would influence the identifier of the cached dataset.
Small snippet to reproduce the behavior:
```python
import datasets
with open("dummy_data.csv", "w") as file:
file.write("test,this;text\n")
print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train").column_names)
# ["test", "this;text"]
print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train", delimiter=";").column_names)
# still ["test", "this;text"]
```
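Until the cache identifier accounts for builder kwargs, the `download_mode` workaround mentioned above forces a rebuild so the new delimiter takes effect; a sketch continuing the snippet (passing the mode as a string is an assumption, `datasets.GenerateMode.FORCE_REDOWNLOAD` is the enum equivalent):
```python
# Continuing the snippet above: force a rebuild so the new delimiter
# actually applies. The string form of the mode is an assumption;
# datasets.GenerateMode.FORCE_REDOWNLOAD can be used instead.
rebuilt = datasets.load_dataset(
    "csv",
    data_files="dummy_data.csv",
    split="train",
    delimiter=";",
    download_mode="force_redownload",
)
print(rebuilt.column_names)
# ['test,this', 'text']
```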
By the way, thanks a lot for this amazing library! :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/778/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/778/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/777/comments | https://api.github.com/repos/huggingface/datasets/issues/777/events | https://github.com/huggingface/datasets/pull/777 | 732,376,648 | MDExOlB1bGxSZXF1ZXN0NTEyMzI2ODM2 | 777 | Better error message for uninitialized metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-10-29T14:42:50Z" | "2020-10-29T15:18:26Z" | "2020-10-29T15:18:24Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/777.diff",
"html_url": "https://github.com/huggingface/datasets/pull/777",
"merged_at": "2020-10-29T15:18:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/777.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/777"
} | When calling `metric.compute()` without having called `metric.add` or `metric.add_batch` at least once, the error was quite cryptic. I added a better error message.
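A minimal sketch of the kind of guard this adds; the attribute check and the exact wording here are illustrative assumptions, not the actual implementation:
```python
# Illustrative sketch only; names and message are assumptions, not the
# library's real internals.
def compute(self, predictions=None, references=None, **kwargs):
    if predictions is None and not self._has_buffered_examples():
        raise ValueError(
            "You called `compute()` before adding any examples. Call "
            "`add()` or `add_batch()` first, or pass `predictions=` and "
            "`references=` directly to `compute()`."
        )
    ...
```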
Fix #729 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/777/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/777/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/776 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/776/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/776/comments | https://api.github.com/repos/huggingface/datasets/issues/776/events | https://github.com/huggingface/datasets/pull/776 | 732,343,550 | MDExOlB1bGxSZXF1ZXN0NTEyMjk5NzQx | 776 | Allow custom split names in text dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 1 | "2020-10-29T14:04:06Z" | "2020-10-30T13:46:45Z" | "2020-10-30T13:23:52Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/776.diff",
"html_url": "https://github.com/huggingface/datasets/pull/776",
"merged_at": "2020-10-30T13:23:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/776.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/776"
} | The `text` dataset used to return only splits like train, test and validation. Other splits were ignored.
Now any split name is allowed.
I did the same for `json`, `pandas` and `csv`; an illustrative example follows.
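For example, after this change a snippet like the following should work; the file name and split name here are arbitrary illustrations:
```python
from datasets import load_dataset

# Any split name is now accepted, not just train/validation/test.
# "my_file.txt" and "my_split" are placeholder names.
dataset = load_dataset("text", data_files={"my_split": "my_file.txt"})
print(dataset["my_split"][0])
```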
Fix #735 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/776/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/776/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/775/comments | https://api.github.com/repos/huggingface/datasets/issues/775/events | https://github.com/huggingface/datasets/pull/775 | 732,287,504 | MDExOlB1bGxSZXF1ZXN0NTEyMjUyODI3 | 775 | Properly delete metrics when a process is killed | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-10-29T12:52:07Z" | "2020-10-29T14:01:20Z" | "2020-10-29T14:01:19Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/775.diff",
"html_url": "https://github.com/huggingface/datasets/pull/775",
"merged_at": "2020-10-29T14:01:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/775.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/775"
} | Tests are flaky when using metrics in a distributed setup.
This is because of one test that makes sure that using two possibly incompatible metric computations (same experiment id) either works or raises the right error.
However, if the error is raised, all of the metric's processes are killed and the open files (arrow + lock files) are not closed correctly. This causes a PermissionError on Windows when deleting the temporary directory.
To fix that, I added a `finally` clause in the function passed to multiprocess to properly close the files when the process exits. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/775/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/775/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/774/comments | https://api.github.com/repos/huggingface/datasets/issues/774/events | https://github.com/huggingface/datasets/pull/774 | 732,265,741 | MDExOlB1bGxSZXF1ZXN0NTEyMjM0NjA0 | 774 | [ROUGE] Add description to Rouge metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 0 | "2020-10-29T12:19:32Z" | "2020-10-29T17:55:50Z" | "2020-10-29T17:55:48Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/774.diff",
"html_url": "https://github.com/huggingface/datasets/pull/774",
"merged_at": "2020-10-29T17:55:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/774.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/774"
} | Add information about case sensitivity to ROUGE. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/774/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/774/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/773 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/773/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/773/comments | https://api.github.com/repos/huggingface/datasets/issues/773/events | https://github.com/huggingface/datasets/issues/773 | 731,684,153 | MDU6SXNzdWU3MzE2ODQxNTM= | 773 | Adding CC-100: Monolingual Datasets from Web Crawl Data | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abhishekkrthakur",
"id": 1183441,
"login": "abhishekkrthakur",
"node_id": "MDQ6VXNlcjExODM0NDE=",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abhishekkrthakur"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abhishekkrthakur",
"id": 1183441,
"login": "abhishekkrthakur",
"node_id": "MDQ6VXNlcjExODM0NDE=",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abhishekkrthakur"
}
] | null | 4 | "2020-10-28T18:20:41Z" | "2022-01-26T13:22:54Z" | "2020-12-14T10:20:07Z" | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** CC-100: Monolingual Datasets from Web Crawl Data
- **Description:** https://twitter.com/alex_conneau/status/1321507120848625665
- **Paper:** https://arxiv.org/abs/1911.02116
- **Data:** http://data.statmt.org/cc-100/
- **Motivation:** A large scale multi-lingual language modeling dataset. Text is de-duplicated and filtered by how "Wikipedia-like" it is, hopefully helping avoid some of the worst parts of the common crawl.
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/773/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/773/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/772/comments | https://api.github.com/repos/huggingface/datasets/issues/772/events | https://github.com/huggingface/datasets/pull/772 | 731,612,430 | MDExOlB1bGxSZXF1ZXN0NTExNjg4ODMx | 772 | Fix metric with cache dir | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-10-28T16:43:13Z" | "2020-10-29T09:34:44Z" | "2020-10-29T09:34:43Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/772.diff",
"html_url": "https://github.com/huggingface/datasets/pull/772",
"merged_at": "2020-10-29T09:34:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/772.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/772"
} | The cache_dir provided by the user was concatenated twice, which caused FileNotFoundError exceptions.
The tests didn't cover the case of providing `cache_dir=` for metrics because of a stupid issue (it was not using the right parameter).
I removed the double concatenation and fixed the tests. A sketch of the bug pattern follows.
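For illustration, the bug followed the classic double-join pattern sketched below (this is not the library's actual code):
```python
import os

# Sketch of the bug pattern only, not the real implementation.
cache_dir = "my_cache"
file_name = "metric.arrow"

buggy = os.path.join(cache_dir, cache_dir, file_name)
# 'my_cache/my_cache/metric.arrow' -> FileNotFoundError when opened

fixed = os.path.join(cache_dir, file_name)
# 'my_cache/metric.arrow'
```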
Fix #728 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/772/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/772/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/771/comments | https://api.github.com/repos/huggingface/datasets/issues/771/events | https://github.com/huggingface/datasets/issues/771 | 731,482,213 | MDU6SXNzdWU3MzE0ODIyMTM= | 771 | Using `Dataset.map` with `n_proc>1` prints multiple progress bars | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
} | [] | closed | false | null | [] | null | 3 | "2020-10-28T14:13:27Z" | "2023-02-13T20:16:39Z" | "2023-02-13T20:16:39Z" | CONTRIBUTOR | null | null | null | When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/771/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/771/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/770/comments | https://api.github.com/repos/huggingface/datasets/issues/770/events | https://github.com/huggingface/datasets/pull/770 | 731,445,222 | MDExOlB1bGxSZXF1ZXN0NTExNTQ5MTg1 | 770 | Fix custom builder caching | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-10-28T13:32:24Z" | "2020-10-29T09:36:03Z" | "2020-10-29T09:36:01Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/770.diff",
"html_url": "https://github.com/huggingface/datasets/pull/770",
"merged_at": "2020-10-29T09:36:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/770.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/770"
} | The cache directory of a dataset didn't take into account additional parameters that the user could specify, such as `features` or any of the builder configuration kwargs (e.g., `encoding` for the `text` dataset).
To fix that, the cache directory name now has a suffix that depends on all of them.
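A sketch of the general idea, deriving a directory-name suffix from the extra parameters (the hashing scheme here is an assumption, not the actual implementation):
```python
from hashlib import sha256

# Illustrative sketch: hash the user-provided builder kwargs into a
# short suffix so different parameters map to different cache dirs.
def config_suffix(config_kwargs):
    canonical = repr(sorted(config_kwargs.items()))
    return sha256(canonical.encode("utf-8")).hexdigest()[:16]

print(config_suffix({"encoding": "utf-8"}))    # one suffix
print(config_suffix({"encoding": "latin-1"}))  # a different suffix
```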
Fix #730
Fix #750 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/770/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/770/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/769/comments | https://api.github.com/repos/huggingface/datasets/issues/769/events | https://github.com/huggingface/datasets/issues/769 | 731,257,104 | MDU6SXNzdWU3MzEyNTcxMDQ= | 769 | How to choose the proper download_mode in the load_dataset function? | {
"avatar_url": "https://avatars.githubusercontent.com/u/48550398?v=4",
"events_url": "https://api.github.com/users/jzq2000/events{/privacy}",
"followers_url": "https://api.github.com/users/jzq2000/followers",
"following_url": "https://api.github.com/users/jzq2000/following{/other_user}",
"gists_url": "https://api.github.com/users/jzq2000/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jzq2000",
"id": 48550398,
"login": "jzq2000",
"node_id": "MDQ6VXNlcjQ4NTUwMzk4",
"organizations_url": "https://api.github.com/users/jzq2000/orgs",
"received_events_url": "https://api.github.com/users/jzq2000/received_events",
"repos_url": "https://api.github.com/users/jzq2000/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jzq2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jzq2000/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jzq2000"
} | [] | closed | false | null | [] | null | 5 | "2020-10-28T09:16:19Z" | "2022-02-22T12:22:52Z" | "2022-02-22T12:22:52Z" | NONE | null | null | null | Hi, I am a beginner with `datasets` and I am trying to use it to load my csv file.
My csv file looks like this:
```
text,label
"Effective but too-tepid biopic",3
"If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4
"Emerges as something rare , an issue movie that 's so honest and keenly observed that it does n't feel like one .",5
```
First I tried this command to load my csv file:
```python
dataset=load_dataset('csv', data_files=['sst_test.csv'])
```
It seems fine, but when I try to override the convert_options to convert the 'label' column from int64 to float32 like this:
``` python
import pyarrow as pa
from pyarrow import csv
read_options = csv.ReadOptions(block_size=1024*1024)
parse_options = csv.ParseOptions()
convert_options = csv.ConvertOptions(column_types={'text': pa.string(), 'label': pa.float32()})
dataset = load_dataset('csv', data_files=['sst_test.csv'], read_options=read_options,
parse_options=parse_options, convert_options=convert_options)
```
The schema stays the same:
```shell
Dataset(features: {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}, num_rows: 2210)
```
I think this issue is caused by the parameter "download_mode", which defaults to REUSE_DATASET_IF_EXISTS, because after I delete the cache_dir everything works as expected.
Is this a bug? How do I choose the proper download_mode to avoid this issue?
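A hedged sketch of one workaround, assuming the `GenerateMode` enum of the datasets 1.x API (the import location is an assumption): force regeneration so the new `convert_options` are applied instead of the cached schema.
```python
from datasets import load_dataset
from datasets import GenerateMode  # assumption: 1.x name of the download-mode enum

# Rebuild the dataset from scratch so the cached int64 schema is not reused.
dataset = load_dataset(
    "csv",
    data_files=["sst_test.csv"],
    convert_options=convert_options,
    download_mode=GenerateMode.FORCE_REDOWNLOAD,
)
```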
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/769/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/769/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/768 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/768/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/768/comments | https://api.github.com/repos/huggingface/datasets/issues/768/events | https://github.com/huggingface/datasets/issues/768 | 730,908,060 | MDU6SXNzdWU3MzA5MDgwNjA= | 768 | Add a `lazy_map` method to `Dataset` and `DatasetDict` | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 1 | "2020-10-27T22:33:03Z" | "2020-10-28T08:58:13Z" | null | CONTRIBUTOR | null | null | null | The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function to a given item only when the item is requested. Two use cases:
1. load image on the fly
2. apply a random function and get different outputs at each epoch (like data augmentation or randomly masking a part of a sentence for BERT-like objectives). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/768/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/768/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/767/comments | https://api.github.com/repos/huggingface/datasets/issues/767/events | https://github.com/huggingface/datasets/issues/767 | 730,771,610 | MDU6SXNzdWU3MzA3NzE2MTA= | 767 | Add option for named splits when using ds.train_test_split | {
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nateraw",
"id": 32437151,
"login": "nateraw",
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"repos_url": "https://api.github.com/users/nateraw/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nateraw"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 1 | "2020-10-27T19:59:44Z" | "2020-11-10T14:05:21Z" | null | CONTRIBUTOR | null | null | null | ### Feature Request ๐
Can we add a way to name your splits when using the `.train_test_split` function?
In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Therefore, its kinda useless to get a `test` split back from `train_test_split`, as it'll just overwrite my real `test` split that I intended to keep.
### Workaround
This is my hack for dealing with this, for now :slightly_smiling_face:
```python
from datasets import load_dataset
ds = load_dataset('imdb')
ds['train'], ds['validation'] = ds['train'].train_test_split(.1).values()
```
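For comparison, a purely hypothetical shape the requested API could take (`train_name` and `test_name` do not exist in datasets; they only illustrate this feature request):
```python
# Hypothetical sketch, not a real datasets API:
splits = ds['train'].train_test_split(test_size=0.1,
                                      train_name='train',
                                      test_name='validation')
ds['train'], ds['validation'] = splits['train'], splits['validation']
```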
| {
"+1": 5,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/767/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/767/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/766/comments | https://api.github.com/repos/huggingface/datasets/issues/766/events | https://github.com/huggingface/datasets/issues/766 | 730,669,596 | MDU6SXNzdWU3MzA2Njk1OTY= | 766 | [GEM] add DART data-to-text generation dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 2 | "2020-10-27T17:34:04Z" | "2020-12-03T13:37:18Z" | "2020-12-03T13:37:18Z" | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** DART
- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.
- **Paper:** https://arxiv.org/abs/2007.02871v1
- **Data:** https://github.com/Yale-LILY/dart
- **Motivation:** the dataset will likely be included in the GEM benchmark
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/766/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/766/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/765/comments | https://api.github.com/repos/huggingface/datasets/issues/765/events | https://github.com/huggingface/datasets/issues/765 | 730,668,332 | MDU6SXNzdWU3MzA2NjgzMzI= | 765 | [GEM] Add DART data-to-text generation dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 0 | "2020-10-27T17:32:23Z" | "2020-10-27T17:34:21Z" | "2020-10-27T17:34:21Z" | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** DART
- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.
- **Paper:** https://arxiv.org/abs/2007.02871v1
- **Data:** https://github.com/Yale-LILY/dart
- **Motivation:** It will likely be included in the GEM generation evaluation benchmark
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/765/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/765/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/764/comments | https://api.github.com/repos/huggingface/datasets/issues/764/events | https://github.com/huggingface/datasets/pull/764 | 730,617,828 | MDExOlB1bGxSZXF1ZXN0NTEwODkyMTk2 | 764 | Adding Issue Template for Dataset Requests | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | 0 | "2020-10-27T16:37:08Z" | "2020-10-27T17:25:26Z" | "2020-10-27T17:25:25Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/764.diff",
"html_url": "https://github.com/huggingface/datasets/pull/764",
"merged_at": "2020-10-27T17:25:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/764.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/764"
} | adding .github/ISSUE_TEMPLATE/add-dataset.md | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/764/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/764/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/763/comments | https://api.github.com/repos/huggingface/datasets/issues/763/events | https://github.com/huggingface/datasets/pull/763 | 730,593,631 | MDExOlB1bGxSZXF1ZXN0NTEwODcyMDYx | 763 | Fixed errors in bertscore related to custom baseline | {
"avatar_url": "https://avatars.githubusercontent.com/u/36761132?v=4",
"events_url": "https://api.github.com/users/juanjucm/events{/privacy}",
"followers_url": "https://api.github.com/users/juanjucm/followers",
"following_url": "https://api.github.com/users/juanjucm/following{/other_user}",
"gists_url": "https://api.github.com/users/juanjucm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/juanjucm",
"id": 36761132,
"login": "juanjucm",
"node_id": "MDQ6VXNlcjM2NzYxMTMy",
"organizations_url": "https://api.github.com/users/juanjucm/orgs",
"received_events_url": "https://api.github.com/users/juanjucm/received_events",
"repos_url": "https://api.github.com/users/juanjucm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/juanjucm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juanjucm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/juanjucm"
} | [] | closed | false | null | [] | null | 0 | "2020-10-27T16:08:35Z" | "2020-10-28T17:59:25Z" | "2020-10-28T17:59:25Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/763.diff",
"html_url": "https://github.com/huggingface/datasets/pull/763",
"merged_at": "2020-10-28T17:59:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/763.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/763"
} | [bertscore version 0.3.6](https://github.com/Tiiiger/bert_score) added support for custom baseline files. This update added an extra argument `baseline_path` to the `BERTScorer` class, as well as an extra boolean parameter `use_custom_baseline` in functions like `get_hash(model, num_layers, idf, rescale_with_baseline, use_custom_baseline)`.
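A hedged sketch of the mismatch being fixed; the signatures below are taken from the description above, not verified against either library:
```python
# With bert_score 0.3.6, the metric has to forward the new arguments:
hash_code = get_hash(model_type, num_layers, idf,
                     rescale_with_baseline, use_custom_baseline)  # extra flag
scorer = BERTScorer(model_type=model_type, num_layers=num_layers,
                    rescale_with_baseline=rescale_with_baseline,
                    baseline_path=baseline_path)  # new 0.3.6 argument
```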
This PR fixes those matching errors in the bertscore metric implementation. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/763/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/763/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/762/comments | https://api.github.com/repos/huggingface/datasets/issues/762/events | https://github.com/huggingface/datasets/issues/762 | 730,586,972 | MDU6SXNzdWU3MzA1ODY5NzI= | 762 | [GEM] Add Czech Restaurant data-to-text generation dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 0 | "2020-10-27T16:00:47Z" | "2020-12-03T13:37:44Z" | "2020-12-03T13:37:44Z" | MEMBER | null | null | null | - Paper: https://www.aclweb.org/anthology/W19-8670.pdf
- Data: https://github.com/UFAL-DSG/cs_restaurant_dataset
- The dataset will likely be part of the GEM benchmark | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/762/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/762/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/761 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/761/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/761/comments | https://api.github.com/repos/huggingface/datasets/issues/761/events | https://github.com/huggingface/datasets/issues/761 | 729,898,867 | MDU6SXNzdWU3Mjk4OTg4Njc= | 761 | Downloaded datasets are not usable offline | {
"avatar_url": "https://avatars.githubusercontent.com/u/25091538?v=4",
"events_url": "https://api.github.com/users/ghazi-f/events{/privacy}",
"followers_url": "https://api.github.com/users/ghazi-f/followers",
"following_url": "https://api.github.com/users/ghazi-f/following{/other_user}",
"gists_url": "https://api.github.com/users/ghazi-f/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghazi-f",
"id": 25091538,
"login": "ghazi-f",
"node_id": "MDQ6VXNlcjI1MDkxNTM4",
"organizations_url": "https://api.github.com/users/ghazi-f/orgs",
"received_events_url": "https://api.github.com/users/ghazi-f/received_events",
"repos_url": "https://api.github.com/users/ghazi-f/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghazi-f/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghazi-f/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghazi-f"
} | [] | closed | false | null | [] | null | 2 | "2020-10-26T20:54:46Z" | "2022-02-15T10:32:28Z" | "2022-02-15T10:32:28Z" | CONTRIBUTOR | null | null | null | I've been trying to use the IMDB dataset offline, but after downloading it and turning off the internet, it still raises an error from the ```requests``` library trying to reach the online dataset.
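A minimal reproduction sketch of what I mean (datasets 1.x behavior assumed):
```python
from datasets import load_dataset

ds = load_dataset("imdb")  # online: downloads and caches fine
# ...disconnect from the internet...
ds = load_dataset("imdb")  # still raises a requests ConnectionError,
                           # even though everything is already cached
```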
Is this the intended behavior?
(Sorry, I wrote the first version of this issue while still on nlp 0.3.0.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/761/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/761/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/760/comments | https://api.github.com/repos/huggingface/datasets/issues/760/events | https://github.com/huggingface/datasets/issues/760 | 729,637,917 | MDU6SXNzdWU3Mjk2Mzc5MTc= | 760 | Add meta-data to the HANS dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao"
}
] | null | 0 | "2020-10-26T14:56:53Z" | "2020-12-03T13:38:34Z" | "2020-12-03T13:38:34Z" | MEMBER | null | null | null | The current version of the [HANS dataset](https://github.com/huggingface/datasets/blob/master/datasets/hans/hans.py) is missing the additional information provided for each example, including the sentence parses, heuristic and subcase. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/760/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/760/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/759/comments | https://api.github.com/repos/huggingface/datasets/issues/759/events | https://github.com/huggingface/datasets/issues/759 | 729,046,916 | MDU6SXNzdWU3MjkwNDY5MTY= | 759 | (Load dataset failure) ConnectionError: Couldnโt reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/63541083?v=4",
"events_url": "https://api.github.com/users/AI678/events{/privacy}",
"followers_url": "https://api.github.com/users/AI678/followers",
"following_url": "https://api.github.com/users/AI678/following{/other_user}",
"gists_url": "https://api.github.com/users/AI678/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AI678",
"id": 63541083,
"login": "AI678",
"node_id": "MDQ6VXNlcjYzNTQxMDgz",
"organizations_url": "https://api.github.com/users/AI678/orgs",
"received_events_url": "https://api.github.com/users/AI678/received_events",
"repos_url": "https://api.github.com/users/AI678/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AI678/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AI678/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AI678"
} | [] | closed | false | null | [] | null | 19 | "2020-10-25T15:34:57Z" | "2023-09-13T23:56:51Z" | "2021-08-04T18:10:09Z" | NONE | null | null | null | Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I write the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors:
Traceback (most recent call last):
File "test.py", line 7, in <module>
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="test")
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 589, in load_dataset
module_path, hash = prepare_module(
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 268, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 300, in cached_path
output_path = get_from_cache(
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
How can I fix this? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/759/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/759/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/758/comments | https://api.github.com/repos/huggingface/datasets/issues/758/events | https://github.com/huggingface/datasets/issues/758 | 728,638,559 | MDU6SXNzdWU3Mjg2Mzg1NTk= | 758 | Process 0 very slow when using num_procs with map to tokenizer | {
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ksjae",
"id": 17930170,
"login": "ksjae",
"node_id": "MDQ6VXNlcjE3OTMwMTcw",
"organizations_url": "https://api.github.com/users/ksjae/orgs",
"received_events_url": "https://api.github.com/users/ksjae/received_events",
"repos_url": "https://api.github.com/users/ksjae/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksjae/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ksjae"
} | [] | closed | false | null | [] | null | 6 | "2020-10-24T02:40:20Z" | "2020-10-28T03:59:46Z" | "2020-10-28T03:59:45Z" | NONE | null | null | null | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is:
```python
from datasets import load_dataset

# `tokenizer`, `args`, and `file_path` are defined elsewhere in my script.
dataset = load_dataset("text", data_files=[file_path], split="train")
# Tokenize with 8 worker processes; process 0 is the one that is very slow.
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
                                           truncation=True, max_length=args.block_size),
                      num_proc=8)
dataset.set_format(type="torch", columns=["input_ids"])
dataset.save_to_disk(file_path + ".arrow")
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/758/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/758/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/757 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/757/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/757/comments | https://api.github.com/repos/huggingface/datasets/issues/757/events | https://github.com/huggingface/datasets/issues/757 | 728,241,494 | MDU6SXNzdWU3MjgyNDE0OTQ= | 757 | CUDA out of memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/47059217?v=4",
"events_url": "https://api.github.com/users/li1117heex/events{/privacy}",
"followers_url": "https://api.github.com/users/li1117heex/followers",
"following_url": "https://api.github.com/users/li1117heex/following{/other_user}",
"gists_url": "https://api.github.com/users/li1117heex/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/li1117heex",
"id": 47059217,
"login": "li1117heex",
"node_id": "MDQ6VXNlcjQ3MDU5MjE3",
"organizations_url": "https://api.github.com/users/li1117heex/orgs",
"received_events_url": "https://api.github.com/users/li1117heex/received_events",
"repos_url": "https://api.github.com/users/li1117heex/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/li1117heex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/li1117heex/subscriptions",
"type": "User",
"url": "https://api.github.com/users/li1117heex"
} | [] | closed | false | null | [] | null | 8 | "2020-10-23T13:57:00Z" | "2020-12-23T14:06:29Z" | "2020-12-23T14:06:29Z" | NONE | null | null | null | When using your dataset library, CUDA runs out of memory as soon as the trainer begins;
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
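For reference, a hedged sketch of the two setups being compared (the `LineByLineTextDataset` signature is assumed from the transformers 3.x API, and `tokenizer` is defined elsewhere):
```python
from datasets import load_dataset
from transformers import LineByLineTextDataset

# Arrow-backed pipeline that runs out of CUDA memory:
ds = load_dataset("text", data_files=["train.txt"])["train"]

# Legacy in-memory dataset that trains fine with the same Trainer:
lbl_ds = LineByLineTextDataset(tokenizer=tokenizer,
                               file_path="train.txt",
                               block_size=128)
```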
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/757/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/757/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/756 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/756/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/756/comments | https://api.github.com/repos/huggingface/datasets/issues/756/events | https://github.com/huggingface/datasets/pull/756 | 728,211,373 | MDExOlB1bGxSZXF1ZXN0NTA4OTYwNTc3 | 756 | Start community-provided dataset docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
} | [] | closed | false | null | [] | null | 1 | "2020-10-23T13:17:41Z" | "2020-10-26T12:55:20Z" | "2020-10-26T12:55:19Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/756.diff",
"html_url": "https://github.com/huggingface/datasets/pull/756",
"merged_at": "2020-10-26T12:55:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/756.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/756"
} | Continuation of #736 with clean fork.
#### Old description
This is what I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs.
In Slack, @thomwolf called it a user-namespace dataset, but the docs call it a community dataset.
I think the first naming is clearer, but I didn't address that here.
I didn't add metadata, will try that. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/756/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/756/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/755/comments | https://api.github.com/repos/huggingface/datasets/issues/755/events | https://github.com/huggingface/datasets/pull/755 | 728,203,821 | MDExOlB1bGxSZXF1ZXN0NTA4OTU0NDI2 | 755 | Start community-provided dataset docs V2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
} | [] | closed | false | null | [] | null | 0 | "2020-10-23T13:07:30Z" | "2020-10-23T13:15:37Z" | "2020-10-23T13:15:37Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/755.diff",
"html_url": "https://github.com/huggingface/datasets/pull/755",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/755.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/755"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/755/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/755/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/754/comments | https://api.github.com/repos/huggingface/datasets/issues/754/events | https://github.com/huggingface/datasets/pull/754 | 727,863,105 | MDExOlB1bGxSZXF1ZXN0NTA4NjczNzM2 | 754 | Use full released xsum dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jbragg",
"id": 2238344,
"login": "jbragg",
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"repos_url": "https://api.github.com/users/jbragg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jbragg"
} | [] | closed | false | null | [] | null | 3 | "2020-10-23T03:29:49Z" | "2021-01-01T03:11:56Z" | "2020-10-26T12:56:58Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/754.diff",
"html_url": "https://github.com/huggingface/datasets/pull/754",
"merged_at": "2020-10-26T12:56:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/754.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/754"
} | #672 Fix xsum to expand coverage and include IDs
Code is based on the parser from an older version of `datasets/xsum/xsum.py`
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/754/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/754/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/753/comments | https://api.github.com/repos/huggingface/datasets/issues/753/events | https://github.com/huggingface/datasets/pull/753 | 727,434,935 | MDExOlB1bGxSZXF1ZXN0NTA4MzI4ODM0 | 753 | Fix doc links to viewer | {
"avatar_url": "https://avatars.githubusercontent.com/u/5020707?v=4",
"events_url": "https://api.github.com/users/Pierrci/events{/privacy}",
"followers_url": "https://api.github.com/users/Pierrci/followers",
"following_url": "https://api.github.com/users/Pierrci/following{/other_user}",
"gists_url": "https://api.github.com/users/Pierrci/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Pierrci",
"id": 5020707,
"login": "Pierrci",
"node_id": "MDQ6VXNlcjUwMjA3MDc=",
"organizations_url": "https://api.github.com/users/Pierrci/orgs",
"received_events_url": "https://api.github.com/users/Pierrci/received_events",
"repos_url": "https://api.github.com/users/Pierrci/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Pierrci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pierrci/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Pierrci"
} | [] | closed | false | null | [] | null | 0 | "2020-10-22T14:20:16Z" | "2020-10-23T08:42:11Z" | "2020-10-23T08:42:11Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/753.diff",
"html_url": "https://github.com/huggingface/datasets/pull/753",
"merged_at": "2020-10-23T08:42:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/753.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/753"
} | It seems #733 forgot some links in the doc :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/753/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/753/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/752/comments | https://api.github.com/repos/huggingface/datasets/issues/752/events | https://github.com/huggingface/datasets/issues/752 | 726,917,801 | MDU6SXNzdWU3MjY5MTc4MDE= | 752 | Clicking on a metric in the search page points to datasets page giving "Missing dataset" warning | {
"avatar_url": "https://avatars.githubusercontent.com/u/24829397?v=4",
"events_url": "https://api.github.com/users/ogabrielluiz/events{/privacy}",
"followers_url": "https://api.github.com/users/ogabrielluiz/followers",
"following_url": "https://api.github.com/users/ogabrielluiz/following{/other_user}",
"gists_url": "https://api.github.com/users/ogabrielluiz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ogabrielluiz",
"id": 24829397,
"login": "ogabrielluiz",
"node_id": "MDQ6VXNlcjI0ODI5Mzk3",
"organizations_url": "https://api.github.com/users/ogabrielluiz/orgs",
"received_events_url": "https://api.github.com/users/ogabrielluiz/received_events",
"repos_url": "https://api.github.com/users/ogabrielluiz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ogabrielluiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ogabrielluiz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ogabrielluiz"
} | [] | closed | false | null | [] | null | 2 | "2020-10-21T22:56:23Z" | "2020-10-22T16:19:42Z" | "2020-10-22T16:19:42Z" | NONE | null | null | null | Hi! Sorry if this isn't the right place to talk about the website, I just didn't know exactly where to write this.
Searching for a metric in https://huggingface.co/metrics gives the right results, but clicking on a metric (e.g. ROUGE) points to https://huggingface.co/datasets/rouge. Clicking on a metric without searching points to the right page.
Thanks for all the great work! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/752/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/752/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/751/comments | https://api.github.com/repos/huggingface/datasets/issues/751/events | https://github.com/huggingface/datasets/issues/751 | 726,820,191 | MDU6SXNzdWU3MjY4MjAxOTE= | 751 | Error loading ms_marco v2.1 using load_dataset() | {
"avatar_url": "https://avatars.githubusercontent.com/u/30478979?v=4",
"events_url": "https://api.github.com/users/JainSahit/events{/privacy}",
"followers_url": "https://api.github.com/users/JainSahit/followers",
"following_url": "https://api.github.com/users/JainSahit/following{/other_user}",
"gists_url": "https://api.github.com/users/JainSahit/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JainSahit",
"id": 30478979,
"login": "JainSahit",
"node_id": "MDQ6VXNlcjMwNDc4OTc5",
"organizations_url": "https://api.github.com/users/JainSahit/orgs",
"received_events_url": "https://api.github.com/users/JainSahit/received_events",
"repos_url": "https://api.github.com/users/JainSahit/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JainSahit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JainSahit/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JainSahit"
} | [] | closed | false | null | [] | null | 3 | "2020-10-21T19:54:43Z" | "2020-11-05T01:31:57Z" | "2020-11-05T01:31:57Z" | NONE | null | null | null | Code:
`dataset = load_dataset('ms_marco', 'v2.1')`
Error:
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
9
10 # Downloading and loading a dataset
---> 11 dataset = load_dataset('ms_marco', 'v2.1')
10 frames
/usr/lib/python3.6/json/decoder.py in raw_decode(self, s, idx)
353 """
354 try:
--> 355 obj, end = self.scan_once(s, idx)
356 except StopIteration as err:
357 raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Unterminated string starting at: line 1 column 388988661 (char 388988660)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/751/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/751/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/750/comments | https://api.github.com/repos/huggingface/datasets/issues/750/events | https://github.com/huggingface/datasets/issues/750 | 726,589,446 | MDU6SXNzdWU3MjY1ODk0NDY= | 750 | load_dataset doesn't include `features` in its hash | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
} | [] | closed | false | null | [] | null | 0 | "2020-10-21T15:16:41Z" | "2020-10-29T09:36:01Z" | "2020-10-29T09:36:01Z" | CONTRIBUTOR | null | null | null | It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. As a result, if a user specifies new `features` for an already downloaded dataset, those are ignored.
Example: some models on the hub have a different ordering for the labels than what `datasets` uses for MNLI so I'd like to do something along the lines of:
```python
dataset = load_dataset("glue", "mnli")
features = dataset["train"].features
features["label"] = ClassLabel(names = ['entailment', 'contradiction', 'neutral']) # new label order
dataset = load_dataset("glue", "mnli", features=features)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/750/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/750/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/749 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/749/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/749/comments | https://api.github.com/repos/huggingface/datasets/issues/749/events | https://github.com/huggingface/datasets/issues/749 | 726,366,062 | MDU6SXNzdWU3MjYzNjYwNjI= | 749 | [XGLUE] Adding new dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
] | null | 15 | "2020-10-21T10:51:36Z" | "2022-09-30T11:35:30Z" | "2021-01-06T10:02:55Z" | MEMBER | null | null | null | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/749/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/749/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/748 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/748/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/748/comments | https://api.github.com/repos/huggingface/datasets/issues/748/events | https://github.com/huggingface/datasets/pull/748 | 726,196,589 | MDExOlB1bGxSZXF1ZXN0NTA3MzAyNjE3 | 748 | New version of CompGuessWhat?! with refined annotations | {
"avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4",
"events_url": "https://api.github.com/users/aleSuglia/events{/privacy}",
"followers_url": "https://api.github.com/users/aleSuglia/followers",
"following_url": "https://api.github.com/users/aleSuglia/following{/other_user}",
"gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aleSuglia",
"id": 1479733,
"login": "aleSuglia",
"node_id": "MDQ6VXNlcjE0Nzk3MzM=",
"organizations_url": "https://api.github.com/users/aleSuglia/orgs",
"received_events_url": "https://api.github.com/users/aleSuglia/received_events",
"repos_url": "https://api.github.com/users/aleSuglia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aleSuglia"
} | [] | closed | false | null | [] | null | 1 | "2020-10-21T06:55:41Z" | "2020-10-21T08:52:42Z" | "2020-10-21T08:46:19Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/748.diff",
"html_url": "https://github.com/huggingface/datasets/pull/748",
"merged_at": "2020-10-21T08:46:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/748.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/748"
} | This pull request introduces a few fixes to the annotations for VisualGenome in the CompGuessWhat?! original split. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/748/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/748/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/747 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/747/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/747/comments | https://api.github.com/repos/huggingface/datasets/issues/747/events | https://github.com/huggingface/datasets/pull/747 | 725,884,704 | MDExOlB1bGxSZXF1ZXN0NTA3MDQ3MDE4 | 747 | Add Quail question answering dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/3595526?v=4",
"events_url": "https://api.github.com/users/sai-prasanna/events{/privacy}",
"followers_url": "https://api.github.com/users/sai-prasanna/followers",
"following_url": "https://api.github.com/users/sai-prasanna/following{/other_user}",
"gists_url": "https://api.github.com/users/sai-prasanna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sai-prasanna",
"id": 3595526,
"login": "sai-prasanna",
"node_id": "MDQ6VXNlcjM1OTU1MjY=",
"organizations_url": "https://api.github.com/users/sai-prasanna/orgs",
"received_events_url": "https://api.github.com/users/sai-prasanna/received_events",
"repos_url": "https://api.github.com/users/sai-prasanna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sai-prasanna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sai-prasanna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sai-prasanna"
} | [] | closed | false | null | [] | null | 0 | "2020-10-20T19:33:14Z" | "2020-10-21T08:35:15Z" | "2020-10-21T08:35:15Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/747.diff",
"html_url": "https://github.com/huggingface/datasets/pull/747",
"merged_at": "2020-10-21T08:35:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/747.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/747"
} | QuAIL is a multi-domain reading comprehension (RC) dataset featuring news, blogs, fiction and user stories. Each domain is represented by 200 texts, which gives us a 4-way data split. The texts are 300-350 word excerpts from CC-licensed texts that were hand-picked so as to make sense to human readers without larger context. Domain diversity mitigates the issue of possible overlap between training and test data of large pre-trained models, which the current SOTA systems are based on. For instance, BERT is trained on Wikipedia + BookCorpus, and was tested on Wikipedia-based SQuAD (Devlin, Chang, Lee, & Toutanova, 2019).
https://text-machine-lab.github.io/blog/2020/quail/ @annargrs | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/747/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/747/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/746/comments | https://api.github.com/repos/huggingface/datasets/issues/746/events | https://github.com/huggingface/datasets/pull/746 | 725,627,235 | MDExOlB1bGxSZXF1ZXN0NTA2ODMzNDMw | 746 | dataset(ngt): add ngt dataset initial loading script | {
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY"
} | [] | closed | false | null | [] | null | 0 | "2020-10-20T14:04:58Z" | "2021-03-23T06:19:38Z" | "2021-03-23T06:19:38Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/746.diff",
"html_url": "https://github.com/huggingface/datasets/pull/746",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/746.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/746"
} | Currently only making the paths to the annotation ELAN (eaf) file and videos available.
This is the first accessible way to download this dataset that does not require manual, file-by-file downloads.
Only the necessary files are downloaded: the annotation files are very small (about 20MB for all of them), but the video files are large (about 100GB in total), saved in `mpg` format.
I do not intend to actually store these as uncompressed arrays of frames, because that would be huge.
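For illustration, a hypothetical consumption sketch (the dataset name and the `eaf_path`/`video_path` field names are assumptions based on the description, not the script's confirmed schema):
```python
from datasets import load_dataset

# Assumed field names; the script is described as exposing file paths only,
# so decoding the annotations and videos is left to the user.
ngt = load_dataset("ngt", split="train")
example = ngt[0]
print(example["eaf_path"])    # path to the ELAN (.eaf) annotation file
print(example["video_path"])  # path to the corresponding .mpg video
```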
Future updates may add pose estimation files for all videos, making it easier to work with this data. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/746/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/746/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/745 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/745/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/745/comments | https://api.github.com/repos/huggingface/datasets/issues/745/events | https://github.com/huggingface/datasets/pull/745 | 725,589,352 | MDExOlB1bGxSZXF1ZXN0NTA2ODAxMTI0 | 745 | Fix emotion description | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | 1 | "2020-10-20T13:28:39Z" | "2021-04-22T14:47:31Z" | "2020-10-21T08:38:27Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/745.diff",
"html_url": "https://github.com/huggingface/datasets/pull/745",
"merged_at": "2020-10-21T08:38:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/745.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/745"
} | Fixes the description of the emotion dataset to reflect the class names observed in the data, not the ones described in the paper.
I also took the liberty of making use of `ClassLabel` for the emotion labels. For illustration, a sketch of the label feature (the six class names here are assumed from the dataset itself, not copied from this PR's diff):
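```python
import datasets

# Hypothetical feature definition using ClassLabel for the six emotion classes.
features = datasets.Features(
    {
        "text": datasets.Value("string"),
        "label": datasets.ClassLabel(
            names=["sadness", "joy", "love", "anger", "fear", "surprise"]
        ),
    }
)
``` | {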
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/745/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/745/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/744/comments | https://api.github.com/repos/huggingface/datasets/issues/744/events | https://github.com/huggingface/datasets/issues/744 | 724,918,448 | MDU6SXNzdWU3MjQ5MTg0NDg= | 744 | Dataset Explorer Doesn't Work for squad_es and squad_it | {
"avatar_url": "https://avatars.githubusercontent.com/u/22607038?v=4",
"events_url": "https://api.github.com/users/gaotongxiao/events{/privacy}",
"followers_url": "https://api.github.com/users/gaotongxiao/followers",
"following_url": "https://api.github.com/users/gaotongxiao/following{/other_user}",
"gists_url": "https://api.github.com/users/gaotongxiao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gaotongxiao",
"id": 22607038,
"login": "gaotongxiao",
"node_id": "MDQ6VXNlcjIyNjA3MDM4",
"organizations_url": "https://api.github.com/users/gaotongxiao/orgs",
"received_events_url": "https://api.github.com/users/gaotongxiao/received_events",
"repos_url": "https://api.github.com/users/gaotongxiao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gaotongxiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaotongxiao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gaotongxiao"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | 1 | "2020-10-19T19:34:12Z" | "2020-10-26T16:36:17Z" | "2020-10-26T16:36:17Z" | NONE | null | null | null | https://huggingface.co/nlp/viewer/?dataset=squad_es
https://huggingface.co/nlp/viewer/?dataset=squad_it
Both pages show "OSError: [Errno 28] No space left on device". | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/744/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/744/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/743 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/743/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/743/comments | https://api.github.com/repos/huggingface/datasets/issues/743/events | https://github.com/huggingface/datasets/issues/743 | 724,703,980 | MDU6SXNzdWU3MjQ3MDM5ODA= | 743 | load_dataset for CSV files not working | {
"avatar_url": "https://avatars.githubusercontent.com/u/2815308?v=4",
"events_url": "https://api.github.com/users/iliemihai/events{/privacy}",
"followers_url": "https://api.github.com/users/iliemihai/followers",
"following_url": "https://api.github.com/users/iliemihai/following{/other_user}",
"gists_url": "https://api.github.com/users/iliemihai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iliemihai",
"id": 2815308,
"login": "iliemihai",
"node_id": "MDQ6VXNlcjI4MTUzMDg=",
"organizations_url": "https://api.github.com/users/iliemihai/orgs",
"received_events_url": "https://api.github.com/users/iliemihai/received_events",
"repos_url": "https://api.github.com/users/iliemihai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iliemihai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliemihai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iliemihai"
} | [] | open | false | null | [] | null | 22 | "2020-10-19T14:53:51Z" | "2022-11-28T16:59:36Z" | null | CONTRIBUTOR | null | null | null | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset

dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
Displayed error:
```
...
ArrowInvalid: CSV parse error: Expected 2 columns, got 1
```
I should mention that when I tried to read data from `https://github.com/lhoestq/transformers/tree/custom-dataset-in-rag-retriever/examples/rag/test_data/my_knowledge_dataset.csv`, it worked without a problem. I've read that there might be some problems with the `\r` character, so I've removed those from the custom dataset, but the problem still remains.
I've added a Colab notebook reproducing the bug, but unfortunately I cannot provide the dataset.
https://colab.research.google.com/drive/1Qzu7sC-frZVeniiWOwzoCe_UHZsrlxu8?usp=sharing
Is there any workaround for it?
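For reference, one fallback that might work here (a sketch, assuming pandas tolerates the file's quoting and newlines) is to parse with pandas and convert to a `Dataset`:
```python
import pandas as pd
from datasets import Dataset

# The separator and column names mirror the failing load_dataset call above.
df = pd.read_csv("./sample_data.csv", sep="\t", names=["title", "text"])
dataset = Dataset.from_pandas(df)
```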
Thank you | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/743/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/743/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/742/comments | https://api.github.com/repos/huggingface/datasets/issues/742/events | https://github.com/huggingface/datasets/pull/742 | 724,509,974 | MDExOlB1bGxSZXF1ZXN0NTA1ODgzNjI3 | 742 | Add OCNLI, a new CLUE dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JetRunner",
"id": 22514219,
"login": "JetRunner",
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JetRunner"
} | [] | closed | false | null | [] | null | 1 | "2020-10-19T11:06:33Z" | "2020-10-22T16:19:49Z" | "2020-10-22T16:19:48Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/742.diff",
"html_url": "https://github.com/huggingface/datasets/pull/742",
"merged_at": "2020-10-22T16:19:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/742.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/742"
} | OCNLI stands for Original Chinese Natural Language Inference. It is a corpus for
Chinese Natural Language Inference, collected by closely following the procedures of MNLI,
but with enhanced strategies aiming for more challenging inference pairs. We want to
emphasize that we did not use human or machine translation in creating the dataset, and thus
our Chinese texts are original and not translated. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/742/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/742/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/741/comments | https://api.github.com/repos/huggingface/datasets/issues/741/events | https://github.com/huggingface/datasets/issues/741 | 723,924,275 | MDU6SXNzdWU3MjM5MjQyNzU= | 741 | Creating dataset consumes too much memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY"
} | [] | closed | false | null | [] | null | 20 | "2020-10-18T06:07:06Z" | "2022-02-15T17:03:10Z" | "2022-02-15T17:03:10Z" | CONTRIBUTOR | null | null | null | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
import csv
import os

import numpy as np
from PIL import Image


def _generate_examples(self, base_path, split):
    """ Yields examples. """
    filepath = os.path.join(base_path, "annotations", "manual", "PHOENIX-2014-T." + split + ".corpus.csv")
    images_path = os.path.join(base_path, "features", "fullFrame-210x260px", split)
    with open(filepath, "r", encoding="utf-8") as f:
        data = csv.DictReader(f, delimiter="|", quoting=csv.QUOTE_NONE)
        for row in data:
            frames_path = os.path.join(images_path, row["video"])[:-7]
            np_frames = []
            # Every frame of the video is decoded into a numpy array up front,
            # so the whole video is held in memory before the example is yielded.
            for frame_name in os.listdir(frames_path):
                frame_path = os.path.join(frames_path, frame_name)
                im = Image.open(frame_path)
                np_frames.append(np.asarray(im))
                im.close()
            yield row["name"], {"video": np_frames}
```
The dataset creation process goes out of memory on a machine with 500GB of RAM.
I was under the impression that the "generator" here exists exactly for that reason: to avoid memory constraints.
However, even if you wanted the entire dataset in memory, the worst case would be
`260 x 210 x 3 bytes per frame (uint8) x 400 frames (max length) x 7000 samples` = 458.64 gigabytes
So I'm not sure why it's taking more than 500GB.
And the dataset creation fails after 170 examples on a machine with 120GB of RAM, and after 672 examples on a machine with 500GB of RAM.
---
## Info that might help:
Iterating over examples is extremely slow.

If I perform this iteration in my own, custom loop (Without saving to file), it runs at 8-9 examples/sec
And you can see at this state it is using 94% of the memory:

And it is only using one CPU core, which is probably why it's so slow:

| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/741/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/741/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/740 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/740/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/740/comments | https://api.github.com/repos/huggingface/datasets/issues/740/events | https://github.com/huggingface/datasets/pull/740 | 723,047,958 | MDExOlB1bGxSZXF1ZXN0NTA0NzAyNTc0 | 740 | Fix TREC urls | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2020-10-16T09:11:28Z" | "2020-10-19T08:54:37Z" | "2020-10-19T08:54:36Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/740.diff",
"html_url": "https://github.com/huggingface/datasets/pull/740",
"merged_at": "2020-10-19T08:54:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/740.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/740"
} | The old TREC URLs are now redirects.
I updated the URLs to the new ones, since we don't support redirects for downloads.
Fix #737 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/740/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/740/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/739/comments | https://api.github.com/repos/huggingface/datasets/issues/739/events | https://github.com/huggingface/datasets/pull/739 | 723,044,066 | MDExOlB1bGxSZXF1ZXN0NTA0Njk5NTY3 | 739 | Add wiki dpr multiset embeddings | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 3 | "2020-10-16T09:05:49Z" | "2020-11-26T14:02:50Z" | "2020-11-26T14:02:49Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/739.diff",
"html_url": "https://github.com/huggingface/datasets/pull/739",
"merged_at": "2020-11-26T14:02:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/739.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/739"
} | There are two DPR encoders, one trained on Natural Questions and one trained on a multiset/hybrid dataset.
Previously only the embeddings from the encoder trained on NQ were available. I'm adding the ones from the encoder trained on the multiset/hybrid dataset.
In the configuration you can now specify `embeddings_name="nq"` or `embeddings_name="multiset"`. For illustration, a minimal usage sketch (passing `embeddings_name` through `load_dataset` as a config kwarg is an assumption, not something this PR spells out):
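```python
from datasets import load_dataset

# Hypothetical calls; embeddings_name selects which DPR encoder's
# passage embeddings are bundled with the dataset.
nq_passages = load_dataset("wiki_dpr", embeddings_name="nq", split="train")
multiset_passages = load_dataset("wiki_dpr", embeddings_name="multiset", split="train")
``` | {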
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/739/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/739/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/738/comments | https://api.github.com/repos/huggingface/datasets/issues/738/events | https://github.com/huggingface/datasets/pull/738 | 723,033,923 | MDExOlB1bGxSZXF1ZXN0NTA0NjkxNjM4 | 738 | Replace seqeval code with original classification_report for simplicity | {
"avatar_url": "https://avatars.githubusercontent.com/u/6737785?v=4",
"events_url": "https://api.github.com/users/Hironsan/events{/privacy}",
"followers_url": "https://api.github.com/users/Hironsan/followers",
"following_url": "https://api.github.com/users/Hironsan/following{/other_user}",
"gists_url": "https://api.github.com/users/Hironsan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Hironsan",
"id": 6737785,
"login": "Hironsan",
"node_id": "MDQ6VXNlcjY3Mzc3ODU=",
"organizations_url": "https://api.github.com/users/Hironsan/orgs",
"received_events_url": "https://api.github.com/users/Hironsan/received_events",
"repos_url": "https://api.github.com/users/Hironsan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Hironsan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hironsan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Hironsan"
} | [] | closed | false | null | [] | null | 3 | "2020-10-16T08:51:45Z" | "2021-01-21T16:07:15Z" | "2020-10-19T10:31:12Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/738.diff",
"html_url": "https://github.com/huggingface/datasets/pull/738",
"merged_at": "2020-10-19T10:31:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/738.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/738"
} | Recently, the original seqeval has enabled us to get per-type scores and overall scores as a dictionary.
This PR replaces the current code with the original function (`classification_report`) to simplify it.
Also, the original code has been updated to fix #352.
- Related issue: https://github.com/chakki-works/seqeval/pull/38
```python
from datasets import load_metric
metric = load_metric("seqeval")
y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
metric.compute(predictions=y_pred, references=y_true)
# Output: {'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8}
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/738/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/738/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/737 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/737/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/737/comments | https://api.github.com/repos/huggingface/datasets/issues/737/events | https://github.com/huggingface/datasets/issues/737 | 722,463,923 | MDU6SXNzdWU3MjI0NjM5MjM= | 737 | Trec Dataset Connection Error | {
"avatar_url": "https://avatars.githubusercontent.com/u/10554495?v=4",
"events_url": "https://api.github.com/users/aychang95/events{/privacy}",
"followers_url": "https://api.github.com/users/aychang95/followers",
"following_url": "https://api.github.com/users/aychang95/following{/other_user}",
"gists_url": "https://api.github.com/users/aychang95/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aychang95",
"id": 10554495,
"login": "aychang95",
"node_id": "MDQ6VXNlcjEwNTU0NDk1",
"organizations_url": "https://api.github.com/users/aychang95/orgs",
"received_events_url": "https://api.github.com/users/aychang95/received_events",
"repos_url": "https://api.github.com/users/aychang95/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aychang95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aychang95/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aychang95"
} | [] | closed | false | null | [] | null | 1 | "2020-10-15T15:57:53Z" | "2020-10-19T08:54:36Z" | "2020-10-19T08:54:36Z" | NONE | null | null | null | **Datasets Version:**
1.1.2
**Python Version:**
3.6/3.7
**Code:**
```python
from datasets import load_dataset
load_dataset("trec")
```
**Expected behavior:**
Download Trec dataset and load Dataset object
**Current Behavior:**
Get a connection error saying it couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label (but the link doesn't seem broken)
<details>
<summary>Error Logs</summary>
Using custom data configuration default
Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /root/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-8-66bf1242096e> in <module>()
----> 1 load_dataset("trec")
10 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)
473 elif response is not None and response.status_code == 404:
474 raise FileNotFoundError("Couldn't find file at {}".format(url))
--> 475 raise ConnectionError("Couldn't reach {}".format(url))
476
477 # Try a second time
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
</details> | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/737/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/737/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/736 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/736/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/736/comments | https://api.github.com/repos/huggingface/datasets/issues/736/events | https://github.com/huggingface/datasets/pull/736 | 722,348,191 | MDExOlB1bGxSZXF1ZXN0NTA0MTE0MjMy | 736 | Start community-provided dataset docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
} | [] | closed | false | null | [] | null | 5 | "2020-10-15T13:41:39Z" | "2020-10-23T13:15:28Z" | "2020-10-23T13:15:28Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/736.diff",
"html_url": "https://github.com/huggingface/datasets/pull/736",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/736.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/736"
} | This is one I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs.
+ In Slack @thomwolf called it a `user-namespace` dataset, but the docs call it a `community dataset`.
I think the first name is clearer, but I didn't address that here.
+ I didn't add metadata; I will try that. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/736/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/736/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/735/comments | https://api.github.com/repos/huggingface/datasets/issues/735/events | https://github.com/huggingface/datasets/issues/735 | 722,225,270 | MDU6SXNzdWU3MjIyMjUyNzA= | 735 | Throw error when an unexpected key is used in data_files | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy"
} | [] | closed | false | null | [] | null | 1 | "2020-10-15T10:55:27Z" | "2020-10-30T13:23:52Z" | "2020-10-30T13:23:52Z" | CONTRIBUTOR | null | null | null | I have found that only "train", "validation" and "test" are valid keys in the `data_files` argument. When you use any others, the attached files are silently ignored, which leads to unexpected behaviour for users.
So the following, unintuitively, returns only one key (namely `train`).
```python
datasets = load_dataset("text", data_files={"train": train_f, "valid": valid_f})
print(datasets.keys())
# dict_keys(['train'])
```
whereas using `validation` instead, does return the expected result:
```python
datasets = load_dataset("text", data_files={"train": train_f, "validation": valid_f})
print(datasets.keys())
# dict_keys(['train', 'validation'])
```
I would like to see more freedom in which keys one can use, but if that is not possible, at least an error should be thrown when an unexpected key is used. A minimal sketch of such a check (the function name and its placement are assumptions):
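```python
ALLOWED_SPLIT_KEYS = {"train", "validation", "test"}

def check_data_files_keys(data_files: dict) -> None:
    # Hypothetical early validation: reject unknown split names loudly
    # instead of silently dropping their files.
    unexpected = set(data_files) - ALLOWED_SPLIT_KEYS
    if unexpected:
        raise ValueError(
            f"Unexpected keys in data_files: {sorted(unexpected)}. "
            f"Allowed keys are {sorted(ALLOWED_SPLIT_KEYS)}."
        )
``` | {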
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/735/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/735/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/734/comments | https://api.github.com/repos/huggingface/datasets/issues/734/events | https://github.com/huggingface/datasets/pull/734 | 721,767,848 | MDExOlB1bGxSZXF1ZXN0NTAzNjMwMDcz | 734 | Fix GLUE metric description | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
} | [] | closed | false | null | [] | null | 0 | "2020-10-14T20:44:14Z" | "2020-10-15T09:27:43Z" | "2020-10-15T09:27:42Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/734.diff",
"html_url": "https://github.com/huggingface/datasets/pull/734",
"merged_at": "2020-10-15T09:27:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/734.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/734"
} | Small typo: the description says translation instead of prediction. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/734/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/734/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/733/comments | https://api.github.com/repos/huggingface/datasets/issues/733/events | https://github.com/huggingface/datasets/pull/733 | 721,366,744 | MDExOlB1bGxSZXF1ZXN0NTAzMjk2NDQw | 733 | Update link to dataset viewer | {
"avatar_url": "https://avatars.githubusercontent.com/u/12969168?v=4",
"events_url": "https://api.github.com/users/negedng/events{/privacy}",
"followers_url": "https://api.github.com/users/negedng/followers",
"following_url": "https://api.github.com/users/negedng/following{/other_user}",
"gists_url": "https://api.github.com/users/negedng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/negedng",
"id": 12969168,
"login": "negedng",
"node_id": "MDQ6VXNlcjEyOTY5MTY4",
"organizations_url": "https://api.github.com/users/negedng/orgs",
"received_events_url": "https://api.github.com/users/negedng/received_events",
"repos_url": "https://api.github.com/users/negedng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/negedng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/negedng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/negedng"
} | [] | closed | false | null | [] | null | 0 | "2020-10-14T11:13:23Z" | "2020-10-14T14:07:31Z" | "2020-10-14T14:07:31Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/733.diff",
"html_url": "https://github.com/huggingface/datasets/pull/733",
"merged_at": "2020-10-14T14:07:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/733.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/733"
} | Change 404 error links in quick tour to working ones | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/733/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/733/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/732 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/732/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/732/comments | https://api.github.com/repos/huggingface/datasets/issues/732/events | https://github.com/huggingface/datasets/pull/732 | 721,359,448 | MDExOlB1bGxSZXF1ZXN0NTAzMjkwMjEy | 732 | dataset(wlasl): initial loading script | {
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY"
} | [] | closed | false | null | [] | null | 2 | "2020-10-14T11:01:42Z" | "2021-03-23T06:19:43Z" | "2021-03-23T06:19:43Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/732.diff",
"html_url": "https://github.com/huggingface/datasets/pull/732",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/732.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/732"
} | It takes roughly 9-10 hours to download all of the videos for the dataset, but it does finish :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/732/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/732/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/731/comments | https://api.github.com/repos/huggingface/datasets/issues/731/events | https://github.com/huggingface/datasets/pull/731 | 721,142,985 | MDExOlB1bGxSZXF1ZXN0NTAzMTExNzc4 | 731 | dataset(aslg_pc12): initial loading script | {
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY"
} | [] | closed | false | null | [] | null | 3 | "2020-10-14T05:14:37Z" | "2020-10-28T15:27:06Z" | "2020-10-28T15:27:06Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/731.diff",
"html_url": "https://github.com/huggingface/datasets/pull/731",
"merged_at": "2020-10-28T15:27:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/731.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/731"
} | This contains the only currently public part of this corpus.
The rest of the corpus has not yet been made public, but this sample is still being used by researchers. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/731/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/731/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/730 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/730/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/730/comments | https://api.github.com/repos/huggingface/datasets/issues/730/events | https://github.com/huggingface/datasets/issues/730 | 721,073,812 | MDU6SXNzdWU3MjEwNzM4MTI= | 730 | Possible caching bug | {
"avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4",
"events_url": "https://api.github.com/users/ArneBinder/events{/privacy}",
"followers_url": "https://api.github.com/users/ArneBinder/followers",
"following_url": "https://api.github.com/users/ArneBinder/following{/other_user}",
"gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArneBinder",
"id": 3375489,
"login": "ArneBinder",
"node_id": "MDQ6VXNlcjMzNzU0ODk=",
"organizations_url": "https://api.github.com/users/ArneBinder/orgs",
"received_events_url": "https://api.github.com/users/ArneBinder/received_events",
"repos_url": "https://api.github.com/users/ArneBinder/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArneBinder"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | 7 | "2020-10-14T02:02:34Z" | "2022-11-22T01:45:54Z" | "2020-10-29T09:36:01Z" | NONE | null | null | null | The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produces this output:
```
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'}
```
Just changing the order (and deleting the temp files):
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
```
produces this:
```
Using custom data configuration default
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': '🤗🤗🤗'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': '🤗🤗🤗'}
```
Is it intended that the cache path does not depend on the config entries?
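A sketch of the behavior one would expect instead (an assumption, not the fix that eventually landed): fold the builder kwargs, such as `encoding`, into the cache directory name so different configurations cannot collide:
```python
import hashlib
import json

def config_cache_id(name: str, builder_kwargs: dict) -> str:
    # Hash all config-affecting kwargs so each combination gets its own cache dir.
    digest = hashlib.sha256(
        json.dumps(builder_kwargs, sort_keys=True).encode("utf-8")
    ).hexdigest()[:16]
    return f"{name}-{digest}"

print(config_cache_id("default", {"data_files": ["test1.txt"], "encoding": "latin_1"}))
print(config_cache_id("default", {"data_files": ["test1.txt"], "encoding": "utf-8"}))  # different id
```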
tested with datasets==1.1.2 and python==3.8.5 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/730/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/730/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/729/comments | https://api.github.com/repos/huggingface/datasets/issues/729/events | https://github.com/huggingface/datasets/issues/729 | 719,558,876 | MDU6SXNzdWU3MTk1NTg4NzY= | 729 | Better error message when one forgets to call `add_batch` before `compute` | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
} | [] | closed | false | null | [] | null | 0 | "2020-10-12T17:59:22Z" | "2020-10-29T15:18:24Z" | "2020-10-29T15:18:24Z" | CONTRIBUTOR | null | null | null | When using metrics, if for some reason a user forgets to call `add_batch` on a metric before calling `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer.
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
    def _info(self):
        return datasets.MetricInfo(
            description="description",
            citation="citation",
            inputs_description="kwargs",
            features=datasets.Features({
                'predictions': datasets.Value('int64'),
                'references': datasets.Value('int64'),
            }),
            codebase_urls=[],
            reference_urls=[],
            format='numpy'
        )

    def _compute(self, predictions, references):
        return {"predictions": predictions, "labels": references}

metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
    pass  # User forgets to call `add_batch`
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-13-267729d187fa> in <module>
3 pass
4 # metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 5 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
343 elif self.process_id == 0:
344 # Let's acquire a lock on each node files to be sure they are finished writing
--> 345 file_paths, filelocks = self._get_all_cache_files()
346
347 # Read the predictions and references
~/git/datasets/src/datasets/metric.py in _get_all_cache_files(self)
280 filelocks = []
281 for process_id, file_path in enumerate(file_paths):
--> 282 filelock = FileLock(file_path + ".lock")
283 try:
284 filelock.acquire(timeout=self.timeout)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
```
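A sketch of the clearer failure this issue asks for (the exact placement inside `Metric` is an assumption):
```python
# Hypothetical guard, e.g. at the top of Metric._finalize():
if self.cache_file_name is None and getattr(self, "writer", None) is None:
    raise ValueError(
        "Metric.compute() was called before any predictions/references were added. "
        "Call add() or add_batch() first, or pass predictions and references "
        "directly to compute()."
    )
```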
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/729/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/729/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/728/comments | https://api.github.com/repos/huggingface/datasets/issues/728/events | https://github.com/huggingface/datasets/issues/728 | 719,555,780 | MDU6SXNzdWU3MTk1NTU3ODA= | 728 | Passing `cache_dir` to a metric does not work | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
} | [] | closed | false | null | [] | null | 0 | "2020-10-12T17:55:14Z" | "2020-10-29T09:34:42Z" | "2020-10-29T09:34:42Z" | CONTRIBUTOR | null | null | null | When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError:
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
    def _info(self):
        return datasets.MetricInfo(
            description="description",
            citation="citation",
            inputs_description="kwargs",
            features=datasets.Features({
                'predictions': datasets.Value('int64'),
                'references': datasets.Value('int64'),
            }),
            codebase_urls=[],
            reference_urls=[],
            format='numpy'
        )

    def _compute(self, predictions, references):
        return {"predictions": predictions, "labels": references}

metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
    metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
~/git/datasets/src/datasets/metric.py in _finalize(self)
349 reader = ArrowReader(path=self.data_dir, info=DatasetInfo(features=self.features))
--> 350 self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths]))
351 except FileNotFoundError:
~/git/datasets/src/datasets/arrow_reader.py in read_files(self, files, original_instructions)
227 # Prepend path to filename
--> 228 pa_table = self._read_files(files)
229 files = copy.deepcopy(files)
~/git/datasets/src/datasets/arrow_reader.py in _read_files(self, files)
166 for f_dict in files:
--> 167 pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
168 pa_tables.append(pa_table)
~/git/datasets/src/datasets/arrow_reader.py in _get_dataset_from_filename(self, filename_skip_take)
291 )
--> 292 mmap = pa.memory_map(filename)
293 f = pa.ipc.open_stream(mmap)
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.memory_map()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.MemoryMappedFile._open()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
FileNotFoundError: [Errno 2] Failed to open local file 'test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-17-e42d43cc981f> in <module>
2 for i in range(0, 1024, batch_size):
3 metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 4 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
351 except FileNotFoundError:
352 raise ValueError(
--> 353 "Error in finalize: another metric instance is already using the local cache file. "
354 "Please specify an experiment_id to avoid colision between distributed metric instances."
355 )
ValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.
```
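Note the doubled path in the `FileNotFoundError` above (`test-metric/gather_metric/default/test-metric/gather_metric/default/...`). For what it's worth, here is a hypothetical sketch of how such a duplication could arise - this is an assumption about the cause, not the actual `datasets` code:
```python
import os

# If the cached file name is already stored relative to the data dir,
# joining it onto the data dir a second time reproduces the doubled path
# seen in the traceback.
data_dir = os.path.join("test-metric", "gather_metric", "default")
file_name = os.path.join(data_dir, "default_experiment-1-0.arrow")
doubled = os.path.join(data_dir, file_name)
print(doubled)
# test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow
```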
The code works when we remove the `cache_dir=...` from the metric. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/728/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/728/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/727/comments | https://api.github.com/repos/huggingface/datasets/issues/727/events | https://github.com/huggingface/datasets/issues/727 | 719,386,366 | MDU6SXNzdWU3MTkzODYzNjY= | 727 | Parallel downloads progress bar flickers | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | 0 | "2020-10-12T13:36:05Z" | "2020-10-12T13:36:05Z" | null | MEMBER | null | null | null | When there are parallel downloads using the download manager, the tqdm progress bar flickers since all the progress bars are on the same line.
To fix that, we could simply specify `position=i`, for i = 0 to n where n is the number of files to download, when instantiating each tqdm progress bar, as in the sketch below.
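A minimal sketch of that idea, assuming one bar per file (the file list and progress loop below are made up for illustration and are not the download manager's API):
```python
import time

from tqdm import tqdm

files = ["file_a.zip", "file_b.zip", "file_c.zip"]
# position=i pins each bar to its own line, so concurrent updates don't flicker.
bars = [tqdm(total=100, desc=name, position=i) for i, name in enumerate(files)]
for _ in range(100):
    for bar in bars:
        bar.update(1)
    time.sleep(0.01)
for bar in bars:
    bar.close()
```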
Another way would be to have one "master" progress bar that tracks the number of finished downloads, and then one progress bar per process that shows the current downloads. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/727/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/727/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/726 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/726/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/726/comments | https://api.github.com/repos/huggingface/datasets/issues/726/events | https://github.com/huggingface/datasets/issues/726 | 719,313,754 | MDU6SXNzdWU3MTkzMTM3NTQ= | 726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/16469472?v=4",
"events_url": "https://api.github.com/users/SparkJiao/events{/privacy}",
"followers_url": "https://api.github.com/users/SparkJiao/followers",
"following_url": "https://api.github.com/users/SparkJiao/following{/other_user}",
"gists_url": "https://api.github.com/users/SparkJiao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SparkJiao",
"id": 16469472,
"login": "SparkJiao",
"node_id": "MDQ6VXNlcjE2NDY5NDcy",
"organizations_url": "https://api.github.com/users/SparkJiao/orgs",
"received_events_url": "https://api.github.com/users/SparkJiao/received_events",
"repos_url": "https://api.github.com/users/SparkJiao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SparkJiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SparkJiao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SparkJiao"
} | [] | closed | false | null | [] | null | 8 | "2020-10-12T11:45:10Z" | "2022-02-17T17:53:54Z" | "2022-02-15T10:38:57Z" | NONE | null | null | null | Hi,
I have encountered this problem while loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/openwebtext/plain_text/1.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 536, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://zenodo.org/record/3834942/files/openwebtext.tar.xz']
```
I think this problem occurs because the released dataset has changed. Or should I download the dataset manually?
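If the file on Zenodo was simply re-uploaded, a possible workaround is to skip the verification step - the `ignore_verifications` flag is visible in the traceback above:
```python
from datasets import load_dataset

# Skips the checksum check; only safe if you trust the new upstream file.
dataset = load_dataset("openwebtext", ignore_verifications=True)
```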
Sorry for releasing the unfinished issue by mistake. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/726/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/726/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/725 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/725/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/725/comments | https://api.github.com/repos/huggingface/datasets/issues/725/events | https://github.com/huggingface/datasets/pull/725 | 718,985,641 | MDExOlB1bGxSZXF1ZXN0NTAxMjUxODI1 | 725 | pretty print dataset objects | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | 2 | "2020-10-12T02:03:46Z" | "2020-10-23T16:24:35Z" | "2020-10-23T09:00:46Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/725.diff",
"html_url": "https://github.com/huggingface/datasets/pull/725",
"merged_at": "2020-10-23T09:00:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/725.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/725"
} | Currently, if I do:
```
from datasets import load_dataset
load_dataset("wikihow", 'all', data_dir="/hf/pegasus-datasets/wikihow/")
```
I get:
```
DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None),
'headline': Value(dtype='string', id=None), 'title': Value(dtype='string',
id=None)}, num_rows: 157252), 'validation': Dataset(features: {'text':
Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None)}, num_rows: 5599), 'test':
Dataset(features: {'text': Value(dtype='string', id=None), 'headline':
Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)},
num_rows: 5577)})
```
This is not very readable.
Can we either have a better `__repr__` or have a custom method to nicely pprint the dataset object?
Here is my very simple attempt. With this PR, it produces:
```
DatasetDict({
train: Dataset({
features: ['text', 'headline', 'title'],
num_rows: 157252
})
validation: Dataset({
features: ['text', 'headline', 'title'],
num_rows: 5599
})
test: Dataset({
features: ['text', 'headline', 'title'],
num_rows: 5577
})
})
```
I did omit the data types on purpose to make it more readable, but it shouldn't be too difficult to integrate those too.
Note that this PR also fixes an inconsistency in the output: on master, the enclosing `{}` is missing for `Dataset` but present for `DatasetDict` - or perhaps that was by design.
I'm not at all attached to this format - I just want something more readable. One approach could be to serialize with `json.dumps` or something similar, which would make the indentation simpler.
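For illustration only, here is a standalone sketch that produces the format above - it is not the PR's actual implementation, and the helper names are made up:
```python
def dataset_repr(features, num_rows, indent=4):
    # Render the inner Dataset({...}) block with the given indentation.
    pad = " " * indent
    return f"Dataset({{\n{pad}features: {features},\n{pad}num_rows: {num_rows}\n}})"

def dataset_dict_repr(splits):
    # splits maps a split name to a (features, num_rows) pair.
    inner = "\n".join(
        "    " + name + ": " + dataset_repr(f, n).replace("\n", "\n    ")
        for name, (f, n) in splits.items()
    )
    return "DatasetDict({\n" + inner + "\n})"

print(dataset_dict_repr({"train": (["text", "headline", "title"], 157252)}))
```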
Thank you. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/725/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/725/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/724/comments | https://api.github.com/repos/huggingface/datasets/issues/724/events | https://github.com/huggingface/datasets/issues/724 | 718,947,700 | MDU6SXNzdWU3MTg5NDc3MDA= | 724 | need to redirect /nlp to /datasets and remove outdated info | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | 4 | "2020-10-11T23:12:12Z" | "2020-10-14T17:00:12Z" | "2020-10-14T17:00:12Z" | CONTRIBUTOR | null | null | null | It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
Also, for some reason the new information is slightly borked: the old page was nicely formatted and had the links marked up, whereas the new one is just a jumble of text in one chunk with no markup for the links (i.e. they are not clickable). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/724/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/724/timeline | null | completed | false |