url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6130/comments | https://api.github.com/repos/huggingface/datasets/issues/6130/events | https://github.com/huggingface/datasets/issues/6130 | 1,843,158,846 | I_kwDODunzps5t3F8- | 6,130 | default config name doesn't work when config kwargs are specified. | {
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/npuichigo",
"id": 11533479,
"login": "npuichigo",
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/npuichigo"
} | [] | closed | false | null | [] | null | 15 | "2023-08-09T12:43:15Z" | "2023-11-22T11:50:49Z" | "2023-11-22T11:50:48Z" | CONTRIBUTOR | null | null | null | ### Describe the bug
https://github.com/huggingface/datasets/blob/12cfc1196e62847e2e8239fbd727a02cbc86ddec/src/datasets/builder.py#L518-L522
If `config_name` is `None`, `DEFAULT_CONFIG_NAME` should be selected. But once users pass `config_kwargs` for their customized `BuilderConfig`, this logic is skipped, and the dataset cannot select the default config from among multiple configs.
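For reference, here is a toy model of the selection behavior described above — an illustrative sketch with made-up names, not the actual `datasets` code:
```python
# Toy model of the default-config selection: the default is only chosen
# when *no* config_kwargs are passed, mirroring the reported behavior.
def pick_config(config_name, config_kwargs, builder_configs, default_config_name):
    if config_name is None and builder_configs and not config_kwargs:
        return builder_configs[default_config_name]
    return None  # a fresh config is built instead; the default is ignored

configs = {"default": "the-default-config"}
print(pick_config(None, {}, configs, "default"))
# -> 'the-default-config'
print(pick_config(None, {"some_field_in_config": "some"}, configs, "default"))
# -> None (default skipped even though no config name was given)
```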
### Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('/dataset/with/multiple/config')  # Ok
datasets.load_dataset('/dataset/with/multiple/config', some_field_in_config='some') # Err
```
### Expected behavior
Default config behavior should be consistent.
### Environment info
- `datasets` version: 2.14.3
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17
- Python version: 3.8.15
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6130/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6130/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6129/comments | https://api.github.com/repos/huggingface/datasets/issues/6129/events | https://github.com/huggingface/datasets/pull/6129 | 1,841,563,517 | PR_kwDODunzps5Xcmuw | 6,129 | Release 2.14.4 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 5 | "2023-08-08T15:43:56Z" | "2023-08-08T16:08:22Z" | "2023-08-08T15:49:06Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6129.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6129",
"merged_at": "2023-08-08T15:49:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6129.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6129"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6129/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6129/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6128/comments | https://api.github.com/repos/huggingface/datasets/issues/6128/events | https://github.com/huggingface/datasets/issues/6128 | 1,841,545,493 | I_kwDODunzps5tw8EV | 6,128 | IndexError: Invalid key: 88 is out of bounds for size 0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/38727343?v=4",
"events_url": "https://api.github.com/users/TomasAndersonFang/events{/privacy}",
"followers_url": "https://api.github.com/users/TomasAndersonFang/followers",
"following_url": "https://api.github.com/users/TomasAndersonFang/following{/other_user}",
"gists_url": "https://api.github.com/users/TomasAndersonFang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TomasAndersonFang",
"id": 38727343,
"login": "TomasAndersonFang",
"node_id": "MDQ6VXNlcjM4NzI3MzQz",
"organizations_url": "https://api.github.com/users/TomasAndersonFang/orgs",
"received_events_url": "https://api.github.com/users/TomasAndersonFang/received_events",
"repos_url": "https://api.github.com/users/TomasAndersonFang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TomasAndersonFang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomasAndersonFang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TomasAndersonFang"
} | [] | closed | false | null | [] | null | 5 | "2023-08-08T15:32:08Z" | "2023-12-26T07:51:57Z" | "2023-08-11T13:35:09Z" | NONE | null | null | null | ### Describe the bug
This bug occurs when I use `torch.compile(model)` in my code, which seems to trigger an error in the `datasets` library.
### Steps to reproduce the bug
I use the following code to fine-tune Falcon on my private dataset.
```python
import transformers
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
AutoConfig,
DataCollatorForSeq2Seq,
Trainer,
Seq2SeqTrainer,
HfArgumentParser,
Seq2SeqTrainingArguments,
BitsAndBytesConfig,
)
from peft import (
LoraConfig,
get_peft_model,
get_peft_model_state_dict,
prepare_model_for_int8_training,
set_peft_model_state_dict,
)
import torch
import os
import evaluate
import functools
from datasets import load_dataset
import bitsandbytes as bnb
import logging
import json
import copy
from typing import Dict, Optional, Sequence
from dataclasses import dataclass, field
# Lora settings
LORA_R = 8
LORA_ALPHA = 16
LORA_DROPOUT= 0.05
LORA_TARGET_MODULES = ["query_key_value"]
@dataclass
class ModelArguments:
model_name_or_path: Optional[str] = field(default="Salesforce/codegen2-7B")
@dataclass
class DataArguments:
data_path: str = field(default=None, metadata={"help": "Path to the training data."})
train_file: str = field(default=None, metadata={"help": "Path to the evaluation data."})
eval_file: str = field(default=None, metadata={"help": "Path to the evaluation data."})
cache_path: str = field(default=None, metadata={"help": "Path to the cache directory."})
num_proc: int = field(default=4, metadata={"help": "Number of processes to use for data preprocessing."})
@dataclass
class TrainingArguments(transformers.TrainingArguments):
# cache_dir: Optional[str] = field(default=None)
optim: str = field(default="adamw_torch")
model_max_length: int = field(
default=512,
metadata={"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."},
)
is_lora: bool = field(default=True, metadata={"help": "Whether to use LORA."})
def tokenize(text, tokenizer, max_seq_len=512, add_eos_token=True):
result = tokenizer(
text,
truncation=True,
max_length=max_seq_len,
padding=False,
return_tensors=None,
)
if (
result["input_ids"][-1] != tokenizer.eos_token_id
and len(result["input_ids"]) < max_seq_len
and add_eos_token
):
result["input_ids"].append(tokenizer.eos_token_id)
result["attention_mask"].append(1)
if add_eos_token and len(result["input_ids"]) >= max_seq_len:
result["input_ids"][max_seq_len - 1] = tokenizer.eos_token_id
result["attention_mask"][max_seq_len - 1] = 1
result["labels"] = result["input_ids"].copy()
return result
def main():
parser = HfArgumentParser((ModelArguments, DataArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
config = AutoConfig.from_pretrained(
model_args.model_name_or_path,
cache_dir=data_args.cache_path,
trust_remote_code=True,
)
if training_args.is_lora:
model = AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
cache_dir=data_args.cache_path,
torch_dtype=torch.float16,
trust_remote_code=True,
load_in_8bit=True,
quantization_config=BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0
),
)
model = prepare_model_for_int8_training(model)
config = LoraConfig(
r=LORA_R,
lora_alpha=LORA_ALPHA,
target_modules=LORA_TARGET_MODULES,
lora_dropout=LORA_DROPOUT,
bias="none",
task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
else:
model = AutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path,
torch_dtype=torch.float16,
cache_dir=data_args.cache_path,
trust_remote_code=True,
)
model.config.use_cache = False
def print_trainable_parameters(model):
"""
Prints the number of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
)
print_trainable_parameters(model)
tokenizer = AutoTokenizer.from_pretrained(
model_args.model_name_or_path,
cache_dir=data_args.cache_path,
model_max_length=training_args.model_max_length,
padding_side="left",
use_fast=True,
trust_remote_code=True,
)
tokenizer.pad_token = tokenizer.eos_token
# Load dataset
def generate_and_tokenize_prompt(sample):
input_text = sample["input"]
target_text = sample["output"] + tokenizer.eos_token
full_text = input_text + target_text
tokenized_full_text = tokenize(full_text, tokenizer, max_seq_len=512)
tokenized_input_text = tokenize(input_text, tokenizer, max_seq_len=512)
input_len = len(tokenized_input_text["input_ids"]) - 1 # -1 for eos token
tokenized_full_text["labels"] = [-100] * input_len + tokenized_full_text["labels"][input_len:]
return tokenized_full_text
data_files = {}
if data_args.train_file is not None:
data_files["train"] = data_args.train_file
if data_args.eval_file is not None:
data_files["eval"] = data_args.eval_file
dataset = load_dataset(data_args.data_path, data_files=data_files)
train_dataset = dataset["train"]
eval_dataset = dataset["eval"]
train_dataset = train_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc)
eval_dataset = eval_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc)
data_collator = DataCollatorForSeq2Seq(tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True)
# Evaluation metrics
def compute_metrics(eval_preds, tokenizer):
metric = evaluate.load('exact_match')
preds, labels = eval_preds
# In case the model returns more than the prediction logits
if isinstance(preds, tuple):
preds = preds[0]
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True, clean_up_tokenization_spaces=False)
# Replace -100s in the labels as we can't decode them
labels[labels == -100] = tokenizer.pad_token_id
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True, clean_up_tokenization_spaces=False)
# Some simple post-processing
decoded_preds = [pred.strip() for pred in decoded_preds]
decoded_labels = [label.strip() for label in decoded_labels]
result = metric.compute(predictions=decoded_preds, references=decoded_labels)
return {'exact_match': result['exact_match']}
compute_metrics_fn = functools.partial(compute_metrics, tokenizer=tokenizer)
model = torch.compile(model)
# Training
trainer = Trainer(
model=model,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
args=training_args,
data_collator=data_collator,
compute_metrics=compute_metrics_fn,
)
trainer.train()
trainer.save_state()
trainer.save_model(output_dir=training_args.output_dir)
tokenizer.save_pretrained(save_directory=training_args.output_dir)
if __name__ == "__main__":
main()
```
When I didn't use `torch.compile(model)`, my code worked well. But when I added this line, it produced the following error:
```
Traceback (most recent call last):
File "falcon_sft.py", line 230, in <module>
main()
File "falcon_sft.py", line 223, in main
trainer.train()
File "python3.10/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "python3.10/site-packages/transformers/trainer.py", line 1787, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "python3.10/site-packages/accelerate/data_loader.py", line 384, in __iter__
current_batch = next(dataloader_iter)
File "python3.10/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
File "python3.10/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = self.dataset.__getitems__(possibly_batched_index)
File "python3.10/site-packages/datasets/arrow_dataset.py", line 2807, in __getitems__
batch = self.__getitem__(keys)
File "python3.10/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__
return self._getitem(key)
File "python3.10/site-packages/datasets/arrow_dataset.py", line 2787, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
File "python3.10/site-packages/datasets/formatting/formatting.py", line 583, in query_table
_check_valid_index_key(key, size)
File "python3.10/site-packages/datasets/formatting/formatting.py", line 536, in _check_valid_index_key
_check_valid_index_key(int(max(key)), size=size)
File "python3.10/site-packages/datasets/formatting/formatting.py", line 526, in _check_valid_index_key
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 88 is out of bounds for size 0
```
So I'm confused about why this error is generated, and how to fix it. Is it produced by `datasets` or by `torch.compile`?
### Expected behavior
I want to use `torch.compile` in my code.
### Environment info
- `datasets` version: 2.14.3
- Platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6128/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6128/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6127/comments | https://api.github.com/repos/huggingface/datasets/issues/6127/events | https://github.com/huggingface/datasets/pull/6127 | 1,839,746,721 | PR_kwDODunzps5XWdP5 | 6,127 | Fix authentication issues | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 8 | "2023-08-07T15:41:25Z" | "2023-08-08T15:24:59Z" | "2023-08-08T15:16:22Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6127.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6127",
"merged_at": "2023-08-08T15:16:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6127.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6127"
} | This PR fixes 3 authentication issues:
- Fix authentication when passing `token`.
- Fix authentication in `Audio.decode_example` and `Image.decode_example`.
- Fix authentication to resolve `data_files` in repositories without script.
This PR also fixes our CI so that we properly test when passing `token` and we do not use the token stored in `HfFolder`.
Fix #6126.
## Details
### Fix authentication when passing `token`
See c0a77dc943de68a17f23f141517028c734c78623
The root issue was caused when the `token` was set in an already instantiated `DownloadConfig` and thus not propagated to `self._storage_options`:
```python
download_config.token = token
```
As this usage pattern is very common, the fix consists in overriding `DownloadConfig.__setattr__`.
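As an illustration, here is a minimal sketch of that override idea — a toy, not the actual `datasets` implementation; the real `DownloadConfig` has many more fields, and the `'hf'` nesting of `storage_options` below is an assumption made for the example:
```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DownloadConfig:
    storage_options: dict = field(default_factory=dict)
    token: Optional[str] = None

    def __setattr__(self, name, value):
        # Keep storage_options in sync whenever `token` is (re)assigned,
        # so mutating an already-instantiated config still reaches the
        # filesystem layer.
        if name == "token" and value is not None and "storage_options" in self.__dict__:
            self.storage_options.setdefault("hf", {})["token"] = value
        super().__setattr__(name, value)

cfg = DownloadConfig()
cfg.token = "hf_xxx"        # the common post-instantiation assignment
print(cfg.storage_options)  # {'hf': {'token': 'hf_xxx'}}
```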
This fixes authentication issues in the following functions:
- `load_dataset` and `load_dataset_builder`
- `Dataset.push_to_hub` and `DatasetDict.push_to_hub`
- `inspect.get_dataset_config_info`, `inspect.get_dataset_infos` and `inspect.get_dataset_split_names`
### Fix authentication in `Audio.decode_example` and `Image.decode_example`.
See: 58e62af004b6b8b84dcfd897a4bc71637cfa6c3f
The `token` was not set because the code wrongly tried to parse the `repo_id` from an HTTP URL (`"http://..."`) instead of an `HfFileSystem` URL (`"hf://..."`)
### Fix authentication to resolve `data_files` in repositories without script
See: e4684fc1032321abf0d494b0c130ea7c82ebda80
This is fixed by passing `download_config` to the function `create_builder_configs_from_metadata_configs` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6127/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6127/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6126/comments | https://api.github.com/repos/huggingface/datasets/issues/6126/events | https://github.com/huggingface/datasets/issues/6126 | 1,839,675,320 | I_kwDODunzps5tpze4 | 6,126 | Private datasets do not load when passing token | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | 4 | "2023-08-07T15:06:47Z" | "2023-08-08T15:16:23Z" | "2023-08-08T15:16:23Z" | MEMBER | null | null | null | ### Describe the bug
Since the release of `datasets` 2.14, private/gated datasets do not load when passing `token`: they raise `EmptyDatasetError`.
This is an unplanned, backward-incompatible breaking change.
Note that private datasets do load if instead `download_config` is passed:
```python
from datasets import DownloadConfig, load_dataset
ds = load_dataset("albertvillanova/tmp-private", split="train", download_config=DownloadConfig(token="<MY-TOKEN>"))
ds
```
gives
```
Dataset({
features: ['text'],
num_rows: 4
})
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>")
```
gives
```
---------------------------------------------------------------------------
EmptyDatasetError Traceback (most recent call last)
[<ipython-input-2-25b48732107a>](https://localhost:8080/#) in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>")
5 frames
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2107
2108 # Create a dataset builder
-> 2109 builder_instance = load_dataset_builder(
2110 path=path,
2111 name=name,
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs)
1793 download_config = download_config.copy() if download_config else DownloadConfig()
1794 download_config.storage_options.update(storage_options)
-> 1795 dataset_module = dataset_module_factory(
1796 path,
1797 revision=revision,
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1484 raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
1485 if isinstance(e1, EmptyDatasetError):
-> 1486 raise e1 from None
1487 if isinstance(e1, FileNotFoundError):
1488 raise FileNotFoundError(
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1474 download_config=download_config,
1475 download_mode=download_mode,
-> 1476 ).get_module()
1477 except (
1478 Exception
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in get_module(self)
1030 sanitize_patterns(self.data_files)
1031 if self.data_files is not None
-> 1032 else get_data_patterns(base_path, download_config=self.download_config)
1033 )
1034 data_files = DataFilesDict.from_patterns(
[/usr/local/lib/python3.10/dist-packages/datasets/data_files.py](https://localhost:8080/#) in get_data_patterns(base_path, download_config)
457 return _get_data_files_patterns(resolver)
458 except FileNotFoundError:
--> 459 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None
460
461
EmptyDatasetError: The directory at hf://datasets/albertvillanova/tmp-private@79b9e4fe79670a9a050d6ebc385464891915a71d doesn't contain any data files
```
### Expected behavior
The dataset should load.
### Environment info
- `datasets` version: 2.14.3
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6126/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6126/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6125/comments | https://api.github.com/repos/huggingface/datasets/issues/6125/events | https://github.com/huggingface/datasets/issues/6125 | 1,837,980,986 | I_kwDODunzps5tjV06 | 6,125 | Reinforcement Learning and Robotics are not task categories in HF datasets metadata | {
"avatar_url": "https://avatars.githubusercontent.com/u/35373228?v=4",
"events_url": "https://api.github.com/users/StoneT2000/events{/privacy}",
"followers_url": "https://api.github.com/users/StoneT2000/followers",
"following_url": "https://api.github.com/users/StoneT2000/following{/other_user}",
"gists_url": "https://api.github.com/users/StoneT2000/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/StoneT2000",
"id": 35373228,
"login": "StoneT2000",
"node_id": "MDQ6VXNlcjM1MzczMjI4",
"organizations_url": "https://api.github.com/users/StoneT2000/orgs",
"received_events_url": "https://api.github.com/users/StoneT2000/received_events",
"repos_url": "https://api.github.com/users/StoneT2000/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/StoneT2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StoneT2000/subscriptions",
"type": "User",
"url": "https://api.github.com/users/StoneT2000"
} | [] | closed | false | null | [] | null | 0 | "2023-08-05T23:59:42Z" | "2023-08-18T12:28:42Z" | "2023-08-18T12:28:42Z" | NONE | null | null | null | ### Describe the bug
In https://huggingface.co/models there are task categories for RL and robotics, but there are none in https://huggingface.co/datasets.
Our lab is currently moving our datasets over to Hugging Face and would like to be able to add those two tags.
Moreover, we see some older datasets that do have these tags, but we can't seem to add them ourselves.
### Steps to reproduce the bug
1. Create a new dataset on Hugging Face
2. Try to type reinforcement-learning or robotics into the task categories; it does not allow you to commit
### Expected behavior
Expected to be able to add RL and robotics as task categories, since some previous datasets have these tags.
### Environment info
N/A | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6125/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6125/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6124/comments | https://api.github.com/repos/huggingface/datasets/issues/6124/events | https://github.com/huggingface/datasets/issues/6124 | 1,837,868,112 | I_kwDODunzps5ti6RQ | 6,124 | Datasets crashing runs due to KeyError | {
"avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4",
"events_url": "https://api.github.com/users/conceptofmind/events{/privacy}",
"followers_url": "https://api.github.com/users/conceptofmind/followers",
"following_url": "https://api.github.com/users/conceptofmind/following{/other_user}",
"gists_url": "https://api.github.com/users/conceptofmind/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/conceptofmind",
"id": 25208228,
"login": "conceptofmind",
"node_id": "MDQ6VXNlcjI1MjA4MjI4",
"organizations_url": "https://api.github.com/users/conceptofmind/orgs",
"received_events_url": "https://api.github.com/users/conceptofmind/received_events",
"repos_url": "https://api.github.com/users/conceptofmind/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/conceptofmind/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conceptofmind/subscriptions",
"type": "User",
"url": "https://api.github.com/users/conceptofmind"
} | [] | closed | false | null | [] | null | 7 | "2023-08-05T17:48:56Z" | "2023-11-30T16:28:57Z" | "2023-11-30T16:28:57Z" | NONE | null | null | null | ### Describe the bug
Hi all,
I have been running into a pretty persistent issue recently when trying to load datasets.
```python
train_dataset = load_dataset(
'llama-2-7b-tokenized',
split = 'train'
)
```
I receive a KeyError which crashes the runs.
```
Traceback (most recent call last):
main()
train_dataset = load_dataset(
^^^^^^^^^^^^^
builder_instance = load_dataset_builder(
^^^^^^^^^^^^^^^^^^^^^
dataset_module = dataset_module_factory(
^^^^^^^^^^^^^^^^^^^^^^^
raise e1 from None
).get_module()
^^^^^^^^^^^^
else get_data_patterns(base_path, download_config=self.download_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
return _get_data_files_patterns(resolver)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
data_files = pattern_resolver(pattern)
^^^^^^^^^^^^^^^^^^^^^^^^^
fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paths = [f for f in sorted(fs.glob(paths)) if not fs.isdir(f)]
^^^^^^^^^^^^^^
allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
for _, dirs, files in self.walk(path, maxdepth, detail=True, **kwargs):
listing = self.ls(path, detail=True, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
"last_modified": parse_datetime(tree_item["lastCommit"]["date"]),
~~~~~~~~~^^^^^^^^^^^^^^
KeyError: 'lastCommit'
```
Any help would be greatly appreciated.
Thank you,
Enrico
### Steps to reproduce the bug
Load the dataset from the Hugging Face Hub.
```python
train_dataset = load_dataset(
'llama-2-7b-tokenized',
split = 'train'
)
```
### Expected behavior
Loads the dataset.
### Environment info
datasets-2.14.3
CUDA 11.8
Python 3.11 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6124/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6124/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6123/comments | https://api.github.com/repos/huggingface/datasets/issues/6123/events | https://github.com/huggingface/datasets/issues/6123 | 1,837,789,294 | I_kwDODunzps5tinBu | 6,123 | Inaccurate Bounding Boxes in "wildreceipt" Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/50714796?v=4",
"events_url": "https://api.github.com/users/HamzaGbada/events{/privacy}",
"followers_url": "https://api.github.com/users/HamzaGbada/followers",
"following_url": "https://api.github.com/users/HamzaGbada/following{/other_user}",
"gists_url": "https://api.github.com/users/HamzaGbada/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HamzaGbada",
"id": 50714796,
"login": "HamzaGbada",
"node_id": "MDQ6VXNlcjUwNzE0Nzk2",
"organizations_url": "https://api.github.com/users/HamzaGbada/orgs",
"received_events_url": "https://api.github.com/users/HamzaGbada/received_events",
"repos_url": "https://api.github.com/users/HamzaGbada/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HamzaGbada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HamzaGbada/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HamzaGbada"
} | [] | closed | false | null | [] | null | 1 | "2023-08-05T14:34:13Z" | "2023-08-17T14:25:27Z" | "2023-08-17T14:25:26Z" | NONE | null | null | null | ### Describe the bug
I would like to bring to your attention an issue related to the accuracy of bounding boxes within the "wildreceipt" dataset, which is made available through the Hugging Face API. Specifically, I have identified a discrepancy between the bounding boxes generated by the dataset loading commands, namely `load_dataset("Theivaprakasham/wildreceipt")` and `load_dataset("jinhybr/WildReceipt")`, and the actual labels and corresponding bounding boxes present in the dataset.
To illustrate this divergence, I've provided two examples in the form of screenshots. These screenshots highlight the contrasting outcomes between my personal implementation of the dataloader and the implementation offered by Hugging Face:
**Example 1:**



**Example 2:**



It's important to note that my dataloader implementation is based on the same dataset files as utilized in the Hugging Face implementation. For your reference, you can access the dataset files through this link: [wildreceipt dataset files](https://download.openmmlab.com/mmocr/data/wildreceipt.tar).
This inconsistency in bounding box accuracy warrants investigation and rectification for maintaining the integrity of the "wildreceipt" dataset. Your attention and assistance in addressing this matter would be greatly appreciated.
### Steps to reproduce the bug
```python
import matplotlib.pyplot as plt
from datasets import load_dataset
# Define functions to convert bounding box formats
def convert_format1(box):
x, y, w, h = box
x2, y2 = x + w, y + h
return [x, y, x2, y2]
def convert_format2(box):
x1, y1, x2, y2 = box
return [x1, y1, x2, y2]
def plot_cropped_image(image, box, title):
cropped_image = image.crop(box)
plt.imshow(cropped_image)
plt.title(title)
plt.axis('off')
plt.savefig(title+'.png')
plt.show()
doc_index = 1
word_index = 3
dataset = load_dataset("Theivaprakasham/wildreceipt")['train']
image_hugging = dataset[doc_index]['image']  # added for completeness; assumes the page image is exposed under an 'image' column
bbox_hugging_face = dataset[doc_index]['bboxes'][word_index]
text_unit_face = dataset[doc_index]['words'][word_index]
common_box_hugface_1 = convert_format1(bbox_hugging_face)
common_box_hugface_2 = convert_format2(bbox_hugging_face)
plot_cropped_image(image_hugging, common_box_hugface_1,
                   f'Hugging Face Bounding boxes (x,y,w,h format) \n its associated text unit: {text_unit_face}')
plot_cropped_image(image_hugging, common_box_hugface_2,
                   f'Hugging Face Bounding boxes (x1,y1,x2,y2 format) \n its associated text unit: {text_unit_face}')
```
### Expected behavior
The bounding boxes generated by the Hugging Face loading commands for the "wildreceipt" dataset should accurately match the actual labels and bounding boxes of the dataset.
### Environment info
- Python version: 3.8
- Hugging Face datasets version: 2.14.2
- Dataset file taken from this link: https://download.openmmlab.com/mmocr/data/wildreceipt.tar | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6123/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6123/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6122/comments | https://api.github.com/repos/huggingface/datasets/issues/6122/events | https://github.com/huggingface/datasets/issues/6122 | 1,837,335,721 | I_kwDODunzps5tg4Sp | 6,122 | Upload README via `push_to_hub` | {
"avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4",
"events_url": "https://api.github.com/users/liyucheng09/events{/privacy}",
"followers_url": "https://api.github.com/users/liyucheng09/followers",
"following_url": "https://api.github.com/users/liyucheng09/following{/other_user}",
"gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/liyucheng09",
"id": 27999909,
"login": "liyucheng09",
"node_id": "MDQ6VXNlcjI3OTk5OTA5",
"organizations_url": "https://api.github.com/users/liyucheng09/orgs",
"received_events_url": "https://api.github.com/users/liyucheng09/received_events",
"repos_url": "https://api.github.com/users/liyucheng09/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions",
"type": "User",
"url": "https://api.github.com/users/liyucheng09"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 1 | "2023-08-04T21:00:27Z" | "2023-08-21T18:18:54Z" | "2023-08-21T18:18:54Z" | NONE | null | null | null | ### Feature request
`push_to_hub` now allows users to upload datasets programmatically. However, based on the latest docs, we still need to open the dataset page to add a README file manually.
That said, I did discover a snippet that initializes a README for every `push_to_hub`:
```python
dataset_card = (
DatasetCard(
"---\n"
+ str(dataset_card_data)
+ "\n---\n"
+ f'# Dataset Card for "{repo_id.split("/")[-1]}"\n\n[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)'
)
if dataset_card is None
else dataset_card
)
HfApi(endpoint=config.HF_ENDPOINT).upload_file(
path_or_fileobj=str(dataset_card).encode(),
path_in_repo="README.md",
repo_id=repo_id,
token=token,
repo_type="dataset",
revision=branch,
)
```
So, if we can enable `push_to_hub` to upload a README file we provide ourselves instead of the auto-generated one, it would save a ton of time and definitely alleviate the current "lack of dataset cards" situation.
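In the meantime, a workaround sketch (the repo id and file path below are placeholders) is to push the dataset and then overwrite the auto-generated card with `HfApi.upload_file`:
```python
from huggingface_hub import HfApi

# Assumes the dataset itself was already pushed, e.g. with
# ds.push_to_hub("username/my-dataset"); "username/my-dataset" is hypothetical.
HfApi().upload_file(
    path_or_fileobj="README.md",   # your hand-written dataset card
    path_in_repo="README.md",
    repo_id="username/my-dataset",
    repo_type="dataset",
)
```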
### Motivation
As elaborated above.
### Your contribution
I might be able to make a PR. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6122/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6122/timeline | null | not_planned | false |
https://api.github.com/repos/huggingface/datasets/issues/6121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6121/comments | https://api.github.com/repos/huggingface/datasets/issues/6121/events | https://github.com/huggingface/datasets/pull/6121 | 1,836,761,712 | PR_kwDODunzps5XMsWd | 6,121 | Small typo in the code example of create imagefolder dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/19688994?v=4",
"events_url": "https://api.github.com/users/WangXin93/events{/privacy}",
"followers_url": "https://api.github.com/users/WangXin93/followers",
"following_url": "https://api.github.com/users/WangXin93/following{/other_user}",
"gists_url": "https://api.github.com/users/WangXin93/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WangXin93",
"id": 19688994,
"login": "WangXin93",
"node_id": "MDQ6VXNlcjE5Njg4OTk0",
"organizations_url": "https://api.github.com/users/WangXin93/orgs",
"received_events_url": "https://api.github.com/users/WangXin93/received_events",
"repos_url": "https://api.github.com/users/WangXin93/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WangXin93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WangXin93/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WangXin93"
} | [] | closed | false | null | [] | null | 1 | "2023-08-04T13:36:59Z" | "2023-08-04T13:45:32Z" | "2023-08-04T13:41:43Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6121.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6121",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6121.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6121"
} | Fix typo in the code example of loading an imagefolder dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6121/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6121/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6120/comments | https://api.github.com/repos/huggingface/datasets/issues/6120/events | https://github.com/huggingface/datasets/issues/6120 | 1,836,026,938 | I_kwDODunzps5tb4w6 | 6,120 | Lookahead streaming support? | {
"avatar_url": "https://avatars.githubusercontent.com/u/17175484?v=4",
"events_url": "https://api.github.com/users/PicoCreator/events{/privacy}",
"followers_url": "https://api.github.com/users/PicoCreator/followers",
"following_url": "https://api.github.com/users/PicoCreator/following{/other_user}",
"gists_url": "https://api.github.com/users/PicoCreator/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PicoCreator",
"id": 17175484,
"login": "PicoCreator",
"node_id": "MDQ6VXNlcjE3MTc1NDg0",
"organizations_url": "https://api.github.com/users/PicoCreator/orgs",
"received_events_url": "https://api.github.com/users/PicoCreator/received_events",
"repos_url": "https://api.github.com/users/PicoCreator/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PicoCreator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PicoCreator/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PicoCreator"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 1 | "2023-08-04T04:01:52Z" | "2023-08-17T17:48:42Z" | null | NONE | null | null | null | ### Feature request
From what I understand, a streaming dataset currently pulls and processes the data as it is requested.
This can introduce significant latency when data is loaded into the training process, since each segment must be waited for.
While the delays might be dataset-specific (or even mapping-instruction/tokenizer-specific), is it possible to introduce a `streaming_lookahead` parameter for predictable workloads (even a shuffled dataset with a fixed seed)? Since we can predict in advance what the next few data samples will be, they could be fetched while the current set is being trained on.
With enough CPU and bandwidth to keep up with the training process, and a sufficiently large lookahead, this would reduce the latency involved in waiting for the dataset to be ready between batches.
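For illustration, here is a minimal user-side sketch of the idea — a background thread that eagerly buffers up to `lookahead` items ahead of the consumer. The `prefetch` helper and its parameters are hypothetical, not an existing `datasets` API:
```python
import queue
import threading

def prefetch(iterable, lookahead=64):
    """Yield items from `iterable`, keeping up to `lookahead` items
    buffered ahead of the consumer via a background thread."""
    q = queue.Queue(maxsize=lookahead)
    _END = object()

    def worker():
        for item in iterable:
            q.put(item)
        q.put(_END)

    threading.Thread(target=worker, daemon=True).start()
    while (item := q.get()) is not _END:
        yield item

# Hypothetical usage with a streaming dataset:
# for batch in prefetch(iter(streaming_dataset), lookahead=128):
#     train_step(batch)
```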
### Motivation
Faster streaming performance while training over extra-large, TB-sized datasets.
### Your contribution
I currently use HF `datasets` with the PyTorch Lightning trainer for the RWKV project, and would be able to help test this feature if supported. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6120/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6120/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6119/comments | https://api.github.com/repos/huggingface/datasets/issues/6119/events | https://github.com/huggingface/datasets/pull/6119 | 1,835,996,350 | PR_kwDODunzps5XKI19 | 6,119 | [Docs] Add description of `select_columns` to guide | {
"avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4",
"events_url": "https://api.github.com/users/unifyh/events{/privacy}",
"followers_url": "https://api.github.com/users/unifyh/followers",
"following_url": "https://api.github.com/users/unifyh/following{/other_user}",
"gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/unifyh",
"id": 18213435,
"login": "unifyh",
"node_id": "MDQ6VXNlcjE4MjEzNDM1",
"organizations_url": "https://api.github.com/users/unifyh/orgs",
"received_events_url": "https://api.github.com/users/unifyh/received_events",
"repos_url": "https://api.github.com/users/unifyh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unifyh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/unifyh"
} | [] | closed | false | null | [] | null | 2 | "2023-08-04T03:13:30Z" | "2023-08-16T10:13:02Z" | "2023-08-16T10:02:52Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6119.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6119",
"merged_at": "2023-08-16T10:02:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6119.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6119"
} | Closes #6116 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6119/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6119/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6118/comments | https://api.github.com/repos/huggingface/datasets/issues/6118/events | https://github.com/huggingface/datasets/issues/6118 | 1,835,940,417 | I_kwDODunzps5tbjpB | 6,118 | IterableDataset.from_generator() fails with pickle error when provided a generator or iterator | {
"avatar_url": "https://avatars.githubusercontent.com/u/1281051?v=4",
"events_url": "https://api.github.com/users/finkga/events{/privacy}",
"followers_url": "https://api.github.com/users/finkga/followers",
"following_url": "https://api.github.com/users/finkga/following{/other_user}",
"gists_url": "https://api.github.com/users/finkga/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/finkga",
"id": 1281051,
"login": "finkga",
"node_id": "MDQ6VXNlcjEyODEwNTE=",
"organizations_url": "https://api.github.com/users/finkga/orgs",
"received_events_url": "https://api.github.com/users/finkga/received_events",
"repos_url": "https://api.github.com/users/finkga/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/finkga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finkga/subscriptions",
"type": "User",
"url": "https://api.github.com/users/finkga"
} | [] | open | false | null | [] | null | 2 | "2023-08-04T01:45:04Z" | "2023-12-04T09:28:50Z" | null | NONE | null | null | null | ### Describe the bug
**Description**
Providing a generator object (rather than a generator function) as the `generator` argument of `IterableDataset.from_generator()` fails with `TypeError: cannot pickle 'generator' object`.
**Code example**
```python
from pathlib import Path
from typing import List

from datasets import IterableDataset

def line_generator(files: List[Path]):
    if isinstance(files, str):
        files = [Path(files)]
    for file in files:
        if isinstance(file, str):
            file = Path(file)
        yield from open(file, 'r').readlines()
...
model_training_files = ['file1.txt', 'file2.txt', 'file3.txt']
train_dataset = IterableDataset.from_generator(generator=line_generator(model_training_files))
```
**Traceback**
```
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/contextlib.py", line 135, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 691, in _no_cache_fields
yield
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 701, in dumps
dump(obj, file)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 676, in dump
Pickler(file, recurse=True).dump(obj)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 394, in dump
StockPickler.dump(self, obj)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 487, in dump
self.save(obj)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 666, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 997, in _batch_setitems
save(v)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 666, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle 'generator' object
```
### Steps to reproduce the bug
1. Create a set of text files to iterate over.
2. Create a generator that returns the lines in each file until all files are exhausted.
3. Instantiate the dataset over the generator by instantiating an IterableDataset.from_generator().
4. Wait for the explosion.
### Expected behavior
I would expect that, since the function claims to accept a generator, there would be no crash. Instead, I would expect the dataset to return all the lines in the files, as queued up by the `line_generator()` function.
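For what it's worth, a possible workaround sketch: pass the generator *function* (a picklable callable) together with `gen_kwargs`, and yield example dicts rather than bare strings (an assumption based on `from_generator`'s documented callable argument):
```python
from datasets import IterableDataset

def line_examples(files):
    for file in files:
        with open(file, "r") as f:
            for line in f:
                yield {"text": line}  # yield dict examples, not bare strings

# Pass the callable itself plus its arguments via gen_kwargs,
# instead of an already-instantiated generator object:
train_dataset = IterableDataset.from_generator(
    line_examples,
    gen_kwargs={"files": ["file1.txt", "file2.txt", "file3.txt"]},
)
```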
### Environment info
datasets.__version__ == '2.13.1'
Python 3.9.6
Platform: Darwin WE35261 22.5.0 Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:22 PDT 2023; root:xnu-8796.121.3~7/RELEASE_X86_64 x86_64
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6118/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6118/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6117/comments | https://api.github.com/repos/huggingface/datasets/issues/6117/events | https://github.com/huggingface/datasets/pull/6117 | 1,835,213,848 | PR_kwDODunzps5XHktw | 6,117 | Set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 3 | "2023-08-03T14:46:04Z" | "2023-08-03T14:56:59Z" | "2023-08-03T14:46:18Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6117.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6117",
"merged_at": "2023-08-03T14:46:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6117.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6117"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6117/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6117/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6116/comments | https://api.github.com/repos/huggingface/datasets/issues/6116/events | https://github.com/huggingface/datasets/issues/6116 | 1,835,098,484 | I_kwDODunzps5tYWF0 | 6,116 | [Docs] The "Process" how-to guide lacks description of `select_columns` function | {
"avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4",
"events_url": "https://api.github.com/users/unifyh/events{/privacy}",
"followers_url": "https://api.github.com/users/unifyh/followers",
"following_url": "https://api.github.com/users/unifyh/following{/other_user}",
"gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/unifyh",
"id": 18213435,
"login": "unifyh",
"node_id": "MDQ6VXNlcjE4MjEzNDM1",
"organizations_url": "https://api.github.com/users/unifyh/orgs",
"received_events_url": "https://api.github.com/users/unifyh/received_events",
"repos_url": "https://api.github.com/users/unifyh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unifyh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/unifyh"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 1 | "2023-08-03T13:45:10Z" | "2023-08-16T10:02:53Z" | "2023-08-16T10:02:53Z" | CONTRIBUTOR | null | null | null | ### Feature request
The [how to process dataset guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the guide.
### Motivation
This function is a commonly requested feature (see this [forum thread](https://discuss.huggingface.co/t/how-to-create-a-new-dataset-from-another-dataset-and-select-specific-columns-and-the-data-along-with-the-column/15120) and #5468, #5474). However, it has not been added to the guide since its implementation in PR #5480.
Mentioning it in the guide would help future users discover this added feature.
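For reference, a minimal usage sketch of the function (the dataset name is just an arbitrary example):
```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
ds = ds.select_columns(["text"])  # returns a new dataset with only the listed columns
print(ds.column_names)  # ['text']
```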
### Your contribution
I could submit a PR to add a brief description of the function to said guide. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6116/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6116/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6115/comments | https://api.github.com/repos/huggingface/datasets/issues/6115/events | https://github.com/huggingface/datasets/pull/6115 | 1,834,765,485 | PR_kwDODunzps5XGChP | 6,115 | Release: 2.14.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 6 | "2023-08-03T10:18:32Z" | "2023-08-03T15:08:02Z" | "2023-08-03T10:24:57Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6115.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6115",
"merged_at": "2023-08-03T10:24:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6115.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6115"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6115/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6115/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6114/comments | https://api.github.com/repos/huggingface/datasets/issues/6114/events | https://github.com/huggingface/datasets/issues/6114 | 1,834,015,584 | I_kwDODunzps5tUNtg | 6,114 | Cache not being used when loading commonvoice 8.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/31082141?v=4",
"events_url": "https://api.github.com/users/clabornd/events{/privacy}",
"followers_url": "https://api.github.com/users/clabornd/followers",
"following_url": "https://api.github.com/users/clabornd/following{/other_user}",
"gists_url": "https://api.github.com/users/clabornd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clabornd",
"id": 31082141,
"login": "clabornd",
"node_id": "MDQ6VXNlcjMxMDgyMTQx",
"organizations_url": "https://api.github.com/users/clabornd/orgs",
"received_events_url": "https://api.github.com/users/clabornd/received_events",
"repos_url": "https://api.github.com/users/clabornd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clabornd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clabornd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clabornd"
} | [] | closed | false | null | [] | null | 2 | "2023-08-02T23:18:11Z" | "2023-08-18T23:59:00Z" | "2023-08-18T23:59:00Z" | NONE | null | null | null | ### Describe the bug
I have Common Voice 8.0.0 downloaded in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`. The folder contains all the arrow files etc., and was used as the cached version the last time I touched the EC2 instance I'm working on. Now, with the same command that downloaded it initially:
```
dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")
```
it tries to redownload the dataset to `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/05bdc7940b0a336ceeaeef13470c89522c29a8e4494cbeece64fb472a87acb32`
### Steps to reproduce the bug
Steps to reproduce the behavior:
1. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")```
2. dataset is updated by maintainers
3. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")```
### Expected behavior
I expect that it uses the already downloaded data in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`.
Not sure what's happening in step 2, but if, say, it's an issue with the dataset referenced by "mozilla-foundation/common_voice_8_0" being modified by the maintainers, how would I force `datasets` to point to the original version I downloaded?
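(For reference, a hedged sketch of one way to pin the dataset: `load_dataset` accepts a `revision` argument that fixes the Hub repo at a specific git revision; the commit hash below is a placeholder.)
```python
from datasets import load_dataset

dataset = load_dataset(
    "mozilla-foundation/common_voice_8_0",
    "en",
    revision="<commit-sha-of-the-downloaded-version>",  # placeholder
    use_auth_token="<mytoken>",
)
```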
EDIT: It was indeed the case that the maintainers had updated the dataset (v 8.0.0). However, I still can't load the dataset from disk instead of redownloading it, with, for example:
```
load_dataset(".cache/huggingface/datasets/downloads/extracted/<hash>/cv-corpus-8.0-2022-01-19/en/", "en")
> ...
> File ~/miniconda3/envs/aa_torch2/lib/python3.10/site-packages/datasets/table.py:1938, in cast_array_to_feature(array, feature, allow_number_to_str)
1937 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 1938 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
...
1794 e = e.__context__
-> 1795 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1797 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Environment info
datasets==2.7.0
python==3.10.8
OS: AWS Linux | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6114/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6114/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6113/comments | https://api.github.com/repos/huggingface/datasets/issues/6113/events | https://github.com/huggingface/datasets/issues/6113 | 1,833,854,030 | I_kwDODunzps5tTmRO | 6,113 | load_dataset() fails with streamlit caching inside docker | {
"avatar_url": "https://avatars.githubusercontent.com/u/987574?v=4",
"events_url": "https://api.github.com/users/fierval/events{/privacy}",
"followers_url": "https://api.github.com/users/fierval/followers",
"following_url": "https://api.github.com/users/fierval/following{/other_user}",
"gists_url": "https://api.github.com/users/fierval/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fierval",
"id": 987574,
"login": "fierval",
"node_id": "MDQ6VXNlcjk4NzU3NA==",
"organizations_url": "https://api.github.com/users/fierval/orgs",
"received_events_url": "https://api.github.com/users/fierval/received_events",
"repos_url": "https://api.github.com/users/fierval/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fierval/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fierval/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fierval"
} | [] | closed | false | null | [] | null | 1 | "2023-08-02T20:20:26Z" | "2023-08-21T18:18:27Z" | "2023-08-21T18:18:27Z" | NONE | null | null | null | ### Describe the bug
When calling `load_dataset` in a Streamlit application running within a Docker container, I get a failure with the error message:
EmptyDatasetError: The directory at hf://datasets/fetch-rewards/inc-rings-2000@bea27cf60842b3641eae418f38864a2ec4cde684 doesn't contain any data files
Traceback:
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/home/user/app/app.py", line 62, in <module>
dashboard()
File "/home/user/app/app.py", line 47, in dashboard
feat_dict, path_gml = load_data(hf_repo, model_gml_dict[selected_model], hf_token)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 211, in wrapper
return cached_func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 240, in __call__
return self._get_or_create_cached_value(args, kwargs)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 266, in _get_or_create_cached_value
return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 320, in _handle_cache_miss
computed_value = self._info.func(*func_args, **func_kwargs)
File "/home/user/app/hf_interface.py", line 16, in load_data
hf_dataset = load_dataset(repo_id, use_auth_token=hf_token)
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2109, in load_dataset
builder_instance = load_dataset_builder(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1795, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1486, in dataset_module_factory
raise e1 from None
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1476, in dataset_module_factory
).get_module()
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1032, in get_module
else get_data_patterns(base_path, download_config=self.download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 458, in get_data_patterns
raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None
### Steps to reproduce the bug
```python
import json

import streamlit as st
from datasets import load_dataset


@st.cache_resource
def load_data(repo_id: str, hf_token=None):
    """Load data from the HuggingFace Hub."""
    hf_dataset = load_dataset(repo_id, use_auth_token=hf_token)
    # each row's "ground_truth" field holds a JSON string; parse it into columns
    hf_dataset = hf_dataset.map(lambda x: json.loads(x["ground_truth"]), remove_columns=["ground_truth"])
    return hf_dataset
```
### Expected behavior
I expect the dataset to load.
Note: works fine with datasets==2.13.1
### Environment info
datasets==2.14.2,
Ubuntu bionic-based Docker container. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6113/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6113/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6112/comments | https://api.github.com/repos/huggingface/datasets/issues/6112/events | https://github.com/huggingface/datasets/issues/6112 | 1,833,693,299 | I_kwDODunzps5tS_Bz | 6,112 | yaml error using push_to_hub with generated README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/1643887?v=4",
"events_url": "https://api.github.com/users/kevintee/events{/privacy}",
"followers_url": "https://api.github.com/users/kevintee/followers",
"following_url": "https://api.github.com/users/kevintee/following{/other_user}",
"gists_url": "https://api.github.com/users/kevintee/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kevintee",
"id": 1643887,
"login": "kevintee",
"node_id": "MDQ6VXNlcjE2NDM4ODc=",
"organizations_url": "https://api.github.com/users/kevintee/orgs",
"received_events_url": "https://api.github.com/users/kevintee/received_events",
"repos_url": "https://api.github.com/users/kevintee/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kevintee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kevintee/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kevintee"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | null | 1 | "2023-08-02T18:21:21Z" | "2023-12-12T15:00:44Z" | "2023-12-12T15:00:44Z" | NONE | null | null | null | ### Describe the bug
When I construct a dataset with the following features:
```
features = Features(
{
"pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)),
"input_ids": Sequence(feature=Value(dtype="int64")),
"attention_mask": Sequence(Value(dtype="int64")),
"tokens": Sequence(Value(dtype="string")),
"bbox": Array2D(dtype="int64", shape=(512, 4)),
}
)
```
and run `push_to_hub`, the individual `*.parquet` files are pushed, but when trying to upload the auto-generated README, I run into the following error:
```
Traceback (most recent call last):
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_status
response.raise_for_status()
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/looppayments/multitask_document_classification_dataset/commit/main
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 297, in <module>
build_dataset()
File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 290, in build_dataset
push_to_hub(dataset, "multitask_document_classification_dataset")
File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 135, in push_to_hub
dataset.push_to_hub(f"looppayments/{dataset_name}", private=True)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 5577, in push_to_hub
HfApi(endpoint=config.HF_ENDPOINT).upload_file(
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner
return fn(self, *args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 3221, in upload_file
commit_info = self.create_commit(
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner
return fn(self, *args, **kwargs)
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2728, in create_commit
hf_raise_for_status(commit_resp, endpoint_name="commit")
File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 299, in hf_raise_for_status
raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-64ca9c3d-2d2bbef354e102482a9a168e;bc00371c-8549-4859-9f41-43ff140ad36e)
Bad request for commit endpoint:
Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> (10:9)
7 | - 3
8 | - 224
9 | - 224
10 | dtype: float64
--------------^
11 | - name: input_ids
12 | sequence: int64
```
My guess is that the auto-generated YAML cannot be parsed for some reason.
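As a standalone sketch of that failure mode (plain PyYAML, not the actual `datasets` serialization code): the default dumper tags Python tuples with `!!python/tuple`, which strict YAML parsers reject, while `safe_dump` emits a plain sequence.
```python
import yaml

# default dumper: emits the python/tuple tag seen in the error above
print(yaml.dump({"shape": (3, 224, 224)}))
# shape: !!python/tuple
# - 3
# - 224
# - 224

# safe dumper: represents the tuple as a plain YAML sequence
print(yaml.safe_dump({"shape": (3, 224, 224)}))
# shape:
# - 3
# - 224
# - 224
```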
### Steps to reproduce the bug
The description contains most of what's needed to reproduce the issue, but I've added a shortened code snippet:
```
from datasets import Array2D, Array3D, ClassLabel, Dataset, Features, Sequence, Value
from PIL import Image
from transformers import AutoProcessor
features = Features(
{
"pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)),
"input_ids": Sequence(feature=Value(dtype="int64")),
"attention_mask": Sequence(Value(dtype="int64")),
"tokens": Sequence(Value(dtype="string")),
"bbox": Array2D(dtype="int64", shape=(512, 4)),
}
)
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
def preprocess_dataset(rows):
# Get images
images = [
Image.open(png_filename).convert("RGB") for png_filename in rows["png_filename"]
]
encoding = processor(
images,
rows["tokens"],
boxes=rows["bbox"],
truncation=True,
padding="max_length",
)
encoding["tokens"] = rows["tokens"]
return encoding
# `dataset` is the raw dataset (its construction is omitted above); it is
# assumed to have `png_filename`, `tokens`, and `bbox` columns
dataset = dataset.map(
preprocess_dataset,
batched=True,
batch_size=5,
features=features,
)
```
### Expected behavior
Using datasets==2.11.0, I'm able to successfully push_to_hub with no issues, but with datasets==2.14.2, I run into the above error.
### Environment info
- `datasets` version: 2.14.2
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6112/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6112/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6111/comments | https://api.github.com/repos/huggingface/datasets/issues/6111/events | https://github.com/huggingface/datasets/issues/6111 | 1,832,781,654 | I_kwDODunzps5tPgdW | 6,111 | raise FileNotFoundError("Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." ) | {
"avatar_url": "https://avatars.githubusercontent.com/u/41530341?v=4",
"events_url": "https://api.github.com/users/2catycm/events{/privacy}",
"followers_url": "https://api.github.com/users/2catycm/followers",
"following_url": "https://api.github.com/users/2catycm/following{/other_user}",
"gists_url": "https://api.github.com/users/2catycm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/2catycm",
"id": 41530341,
"login": "2catycm",
"node_id": "MDQ6VXNlcjQxNTMwMzQx",
"organizations_url": "https://api.github.com/users/2catycm/orgs",
"received_events_url": "https://api.github.com/users/2catycm/received_events",
"repos_url": "https://api.github.com/users/2catycm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/2catycm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/2catycm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/2catycm"
} | [] | closed | false | null | [] | null | 3 | "2023-08-02T09:17:29Z" | "2023-08-29T02:00:28Z" | "2023-08-29T02:00:28Z" | NONE | null | null | null | ### Describe the bug
For researchers in some countries or regions, it is usually the case that the download ability of `load_dataset` is disabled due to the complex network environment. People in these regions often prefer to use git clone or other programming tricks to manually download the files to disk (for example, [How to elegantly download hf models, zhihu zhuanlan](https://zhuanlan.zhihu.com/p/475260268) proposed a crawler-based solution, [Is there any mirror for hf_hub, zhihu answer](https://www.zhihu.com/question/371644077) provided some cloud-based solutions, and [How to avoid pitfalls on Hugging face downloading, zhihu zhuanlan] gave some useful suggestions), and then use `load_from_disk` to get the dataset object.
However, even once the files are on the local disk, loading them into dataset objects is still buggy.
### Steps to reproduce the bug
Steps to reproduce the bug:
1. Find the CIFAR dataset on Hugging Face: https://huggingface.co/datasets/cifar100/tree/main
2. Click the ":" button to show the "Clone repository" option, and then follow the prompts in the box:
```bash
cd my_directory_absolute
git lfs install
git clone https://huggingface.co/datasets/cifar100
ls my_directory_absolute/cifar100 # confirm that the directory exists and it is OK.
```
3. Write a Python file to try to load the dataset:
```python
from datasets import load_dataset, load_from_disk
dataset = load_from_disk("my_directory_absolute/cifar100")
```
Notice that, according to issue #3700, it is wrong to use load_dataset("my_directory_absolute/cifar100"), so we must use load_from_disk instead.
4. Then you will see the error reported:
```log
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[5], line 9
1 from datasets import load_dataset, load_from_disk
----> 9 dataset = load_from_disk("my_directory_absolute/cifar100")
File ~/miniconda3/envs/ai/lib/python3.10/site-packages/datasets/load.py:2232, in load_from_disk(dataset_path, fs, keep_in_memory, storage_options)
2230 return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
2231 else:
-> 2232 raise FileNotFoundError(
2233 f"Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory."
2234 )
FileNotFoundError: Directory my_directory_absolute/cifar100 is neither a `Dataset` directory nor a `DatasetDict` directory.
```
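For context, `load_from_disk` only recognizes directories produced by `save_to_disk`; a minimal round-trip sketch under that assumption (paths hypothetical, and it still requires one successful download):
```python
from datasets import load_dataset, load_from_disk

ds = load_dataset("cifar100")  # one-time download
ds.save_to_disk("my_directory_absolute/cifar100_arrow")  # writes Arrow files + metadata

ds2 = load_from_disk("my_directory_absolute/cifar100_arrow")  # loads offline
```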
### Expected behavior
The dataset should be loaded successfully.
### Environment info
```bash
datasets-cli env
```
-> results:
```txt
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.14.2
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6111/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6111/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6110/comments | https://api.github.com/repos/huggingface/datasets/issues/6110/events | https://github.com/huggingface/datasets/issues/6110 | 1,831,110,633 | I_kwDODunzps5tJIfp | 6,110 | [BUG] Dataset initialized from in-memory data does not create cache. | {
"avatar_url": "https://avatars.githubusercontent.com/u/57797966?v=4",
"events_url": "https://api.github.com/users/MattYoon/events{/privacy}",
"followers_url": "https://api.github.com/users/MattYoon/followers",
"following_url": "https://api.github.com/users/MattYoon/following{/other_user}",
"gists_url": "https://api.github.com/users/MattYoon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MattYoon",
"id": 57797966,
"login": "MattYoon",
"node_id": "MDQ6VXNlcjU3Nzk3OTY2",
"organizations_url": "https://api.github.com/users/MattYoon/orgs",
"received_events_url": "https://api.github.com/users/MattYoon/received_events",
"repos_url": "https://api.github.com/users/MattYoon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MattYoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MattYoon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MattYoon"
} | [] | closed | false | null | [] | null | 1 | "2023-08-01T11:58:58Z" | "2023-08-17T14:03:01Z" | "2023-08-17T14:03:00Z" | NONE | null | null | null | ### Describe the bug
`Dataset` initialized from in-memory data (a dictionary in my case; I haven't tested other types) does not create a cache when processed with the `map` method, unlike a `Dataset` initialized by other methods such as `load_dataset`.
### Steps to reproduce the bug
```python
# the code below was run a second time so the map function can be loaded from the cache if it exists
from datasets import load_dataset, Dataset
dataset = load_dataset("tatsu-lab/alpaca")['train']
dataset = dataset.map(lambda x: {'input': x['input'] + 'hi'}) # some random map
print(len(dataset.cache_files))
# 1
# copy the exact same data but initialize from a dictionary
memory_dataset = Dataset.from_dict({
'instruction': dataset['instruction'],
'input': dataset['input'],
'output': dataset['output'],
'text': dataset['text']})
memory_dataset = memory_dataset.map(lambda x: {'input': x['input'] + 'hi'}) # exact same map
print(len(memory_dataset.cache_files))
# Map: 100%|██████████| 52002/52002
# 0
```
### Expected behavior
The `map` function should create a cache regardless of how the `Dataset` was created.
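A hedged workaround sketch in the meantime: `Dataset.map` accepts an explicit `cache_file_name`, which forces the result to be written to disk even for in-memory datasets (the path below is hypothetical):
```python
memory_dataset = memory_dataset.map(
    lambda x: {'input': x['input'] + 'hi'},
    cache_file_name="/tmp/alpaca_map_cache.arrow",  # hypothetical cache path
)
```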
### Environment info
- `datasets` version: 2.14.2
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6110/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6110/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6109/comments | https://api.github.com/repos/huggingface/datasets/issues/6109/events | https://github.com/huggingface/datasets/issues/6109 | 1,830,753,793 | I_kwDODunzps5tHxYB | 6,109 | Problems in downloading Amazon reviews from HF | {
"avatar_url": "https://avatars.githubusercontent.com/u/52964960?v=4",
"events_url": "https://api.github.com/users/610v4nn1/events{/privacy}",
"followers_url": "https://api.github.com/users/610v4nn1/followers",
"following_url": "https://api.github.com/users/610v4nn1/following{/other_user}",
"gists_url": "https://api.github.com/users/610v4nn1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/610v4nn1",
"id": 52964960,
"login": "610v4nn1",
"node_id": "MDQ6VXNlcjUyOTY0OTYw",
"organizations_url": "https://api.github.com/users/610v4nn1/orgs",
"received_events_url": "https://api.github.com/users/610v4nn1/received_events",
"repos_url": "https://api.github.com/users/610v4nn1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/610v4nn1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/610v4nn1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/610v4nn1"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | 1 | "2023-08-01T08:38:29Z" | "2023-08-02T07:12:07Z" | "2023-08-02T07:12:07Z" | NONE | null | null | null | ### Describe the bug
I have a script downloading `amazon_reviews_multi`.
When the download starts, I get
```
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 243B [00:00, 1.43MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.54s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 842.40it/s]
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 243B [00:00, 928kB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.42s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 832.70it/s]
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 243B [00:00, 1.81MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.40s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 1294.14it/s]
Generating train split: 0%| | 0/200000 [00:00<?, ? examples/s]
```
the file is clearly too small to contain the requested dataset; in fact, it contains an error message:
```
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>AGJWSY3ZADT2QVWE</RequestId><HostId>Gx1O2KXnxtQFqvzDLxyVSTq3+TTJuTnuVFnJL3SP89Yp8UzvYLPTVwd1PpniE4EvQzT3tCaqEJw=</HostId></Error>
```
obviously the script fails:
```
> raise DatasetGenerationError("An error occurred while generating the dataset") from e
E datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
1. load_dataset("amazon_reviews_multi", name="en", split="train", cache_dir="ADDYOURPATHHERE")
### Expected behavior
I would expect the dataset to be downloaded and processed
### Environment info
* The problem is present with both datasets 2.12.0 and 2.14.2
* python version 3.10.12 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6109/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6109/timeline | null | not_planned | false |
https://api.github.com/repos/huggingface/datasets/issues/6108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6108/comments | https://api.github.com/repos/huggingface/datasets/issues/6108/events | https://github.com/huggingface/datasets/issues/6108 | 1,830,347,187 | I_kwDODunzps5tGOGz | 6,108 | Loading local datasets got strangely stuck | {
"avatar_url": "https://avatars.githubusercontent.com/u/48412571?v=4",
"events_url": "https://api.github.com/users/LoveCatc/events{/privacy}",
"followers_url": "https://api.github.com/users/LoveCatc/followers",
"following_url": "https://api.github.com/users/LoveCatc/following{/other_user}",
"gists_url": "https://api.github.com/users/LoveCatc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LoveCatc",
"id": 48412571,
"login": "LoveCatc",
"node_id": "MDQ6VXNlcjQ4NDEyNTcx",
"organizations_url": "https://api.github.com/users/LoveCatc/orgs",
"received_events_url": "https://api.github.com/users/LoveCatc/received_events",
"repos_url": "https://api.github.com/users/LoveCatc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LoveCatc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LoveCatc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LoveCatc"
} | [] | open | false | null | [] | null | 6 | "2023-08-01T02:28:06Z" | "2024-02-05T08:55:16Z" | null | NONE | null | null | null | ### Describe the bug
I tried to use `load_dataset()` to load several local `.jsonl` files as a dataset. Every line of these files is a JSON structure containing only one key, `text` (yeah, it is a dataset for an NLP model). The code snippet is as follows:
```python
ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS, num_proc=16)['train']
```
However, I found that the loading process can get stuck -- the progress bar `Generating train split` no longer proceeds. While trying to find the cause and a solution, I found a really strange behavior. If I load the dataset in this way:
```python
from datasets import load_dataset, concatenate_datasets

dlist = list()
for _ in LIST_OF_FILE_PATHS:
dlist.append(load_dataset("json", data_files=_)['train'])
ds = concatenate_datasets(dlist)
```
I can actually load all the files successfully despite the slow speed. But if I load them in a batch as above, things go wrong. I did try to use Control-C to trace the stuck point, but the program cannot be terminated this way when `num_proc` is set to `None`. The only thing I can do is use Control-Z to suspend it and then kill it. If I use more than 2 CPUs, a Control-C simply causes the following error:
```bash
^C
Process ForkPoolWorker-1:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 114, in worker
task = get()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/queues.py", line 368, in get
res = self._reader.recv_bytes()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 224, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes
buf = self._recv(4)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
Generating train split: 92431 examples [01:23, 1104.25 examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1373, in iflatmap_unordered
yield queue.get(timeout=0.05)
File "<string>", line 2, in get
File "/usr/local/lib/python3.10/dist-packages/multiprocess/managers.py", line 818, in _callmethod
kind, result = conn.recv()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 258, in recv
buf = self._recv_bytes()
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes
buf = self._recv(4)
File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/data/liyongyuan/source/batch_load.py", line 11, in <module>
a = load_dataset(
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2133, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 954, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1049, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1842, in _prepare_split
for job_id, done, content in iflatmap_unordered(
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in iflatmap_unordered
[async_result.get(timeout=0.05) for async_result in async_results]
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in <listcomp>
[async_result.get(timeout=0.05) for async_result in async_results]
File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 770, in get
raise TimeoutError
multiprocess.context.TimeoutError
```
I have validated the basic correctness of these `.jsonl` files. They are correctly formatted (otherwise they could not be loaded individually by `load_dataset`), though some of the JSON objects contain very long text (more than 1e7 characters). I do not know if this could be the problem. There should not be any bottleneck in system resources: the whole dataset is ~300GB, and I am using a cloud server with plenty of storage and 1TB of RAM.
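A minimal sketch of such a pre-flight check (assuming, as above, that each line is a JSON object with a single `text` key):
```python
import json

for path in LIST_OF_FILE_PATHS:
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, 1):
            row = json.loads(line)  # raises on malformed JSON
            if len(row["text"]) > 10_000_000:  # flag extremely long texts
                print(path, lineno, len(row["text"]))
```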
Thanks for your efforts and patience! Any suggestion or help would be appreciated.
### Steps to reproduce the bug
1. Use `load_dataset()` with `data_files=LIST_OF_FILE_PATHS`.
### Expected behavior
All the files should be smoothly loaded.
### Environment info
- Datasets: A private dataset. ~2500 `.jsonl` files. ~300GB in total. Each json structure only contains one key: `text`. Format checked.
- `datasets` version: 2.14.2
- Platform: Linux-4.19.91-014.kangaroo.alios7.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- PyArrow version: 10.0.1.dev0+ga6eabc2b.d20230609
- Pandas version: 1.5.2 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6108/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6108/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6107/comments | https://api.github.com/repos/huggingface/datasets/issues/6107/events | https://github.com/huggingface/datasets/pull/6107 | 1,829,625,320 | PR_kwDODunzps5W0rLR | 6,107 | Fix deprecation of use_auth_token in file_utils | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 3 | "2023-07-31T16:32:01Z" | "2023-08-03T10:13:32Z" | "2023-08-03T10:04:18Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6107.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6107",
"merged_at": "2023-08-03T10:04:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6107.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6107"
} | Fix issues with the deprecation of `use_auth_token` introduced by:
- #5996
in functions:
- `get_authentication_headers_for_url`
- `request_etag`
- `get_from_cache`
Currently, `TypeError` is raised: https://github.com/huggingface/datasets-server/actions/runs/5711650666/job/15484685570?pr=1588
```
FAILED tests/job_runners/config/test_parquet_and_info.py::test__is_too_big_external_files[None-None-False] - TypeError: get_authentication_headers_for_url() got an unexpected keyword argument 'use_auth_token'
FAILED tests/job_runners/config/test_parquet_and_info.py::test_fill_builder_info[None-False] - libcommon.exceptions.FileSystemError: Could not read the parquet files: get_authentication_headers_for_url() got an unexpected keyword argument 'use_auth_token'
```
Related to:
- #6094 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6107/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6107/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6106/comments | https://api.github.com/repos/huggingface/datasets/issues/6106/events | https://github.com/huggingface/datasets/issues/6106 | 1,829,131,223 | I_kwDODunzps5tBlPX | 6,106 | load local json_file as dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/39040787?v=4",
"events_url": "https://api.github.com/users/CiaoHe/events{/privacy}",
"followers_url": "https://api.github.com/users/CiaoHe/followers",
"following_url": "https://api.github.com/users/CiaoHe/following{/other_user}",
"gists_url": "https://api.github.com/users/CiaoHe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CiaoHe",
"id": 39040787,
"login": "CiaoHe",
"node_id": "MDQ6VXNlcjM5MDQwNzg3",
"organizations_url": "https://api.github.com/users/CiaoHe/orgs",
"received_events_url": "https://api.github.com/users/CiaoHe/received_events",
"repos_url": "https://api.github.com/users/CiaoHe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CiaoHe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CiaoHe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CiaoHe"
} | [] | closed | false | null | [] | null | 2 | "2023-07-31T12:53:49Z" | "2023-08-18T01:46:35Z" | "2023-08-18T01:46:35Z" | NONE | null | null | null | ### Describe the bug
I tried to load a local JSON file as a dataset but failed to parse the JSON file because some columns are of 'float' type.
### Steps to reproduce the bug
1. Load a JSON file in which certain columns are of 'float' type, for example `data = load_dataset("json", data_files=JSON_PATH)`.
2. Then, an error like `ArrowInvalid: Could not convert '-0.2253' with type str: tried to convert to double` is triggered.
### Expected behavior
It should allow some columns to be of 'float' type; at least, it should convert those columns to str type.
I tried to avoid the error by naively converting the float items to str:
```python
# if col type is not str, we need to convert it to str
# (`dataset` is assumed to be a list of dict rows and `keys` the column names)
mapping = {}
for col in keys:
if isinstance(dataset[0][col], str):
mapping[col] = [row.get(col) for row in dataset]
else:
mapping[col] = [str(row.get(col)) for row in dataset]
```
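Alternatively, a hedged sketch of declaring the schema up front so Arrow does not have to infer mixed string/float values (the column name `score` is hypothetical, and every column in the file should be listed in `features`):
```python
from datasets import load_dataset, Features, Value

features = Features({"score": Value("float64")})  # declare the float column explicitly
data = load_dataset("json", data_files=JSON_PATH, features=features)
```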
### Environment info
- `datasets` version: 2.14.2
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6106/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6106/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6105 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6105/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6105/comments | https://api.github.com/repos/huggingface/datasets/issues/6105/events | https://github.com/huggingface/datasets/pull/6105 | 1,829,008,430 | PR_kwDODunzps5WyiJD | 6,105 | Fix error when loading from GCP bucket | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 5 | "2023-07-31T11:44:46Z" | "2023-08-01T10:48:52Z" | "2023-08-01T10:38:54Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6105.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6105",
"merged_at": "2023-08-01T10:38:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6105.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6105"
} | Fix `resolve_pattern` for filesystems with tuple protocol.
Fix #6100.
The buggy code lines were introduced by:
- #6028 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6105/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6105/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6104/comments | https://api.github.com/repos/huggingface/datasets/issues/6104/events | https://github.com/huggingface/datasets/issues/6104 | 1,828,959,107 | I_kwDODunzps5tA7OD | 6,104 | HF Datasets data access is extremely slow even when in memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4",
"events_url": "https://api.github.com/users/NightMachinery/events{/privacy}",
"followers_url": "https://api.github.com/users/NightMachinery/followers",
"following_url": "https://api.github.com/users/NightMachinery/following{/other_user}",
"gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NightMachinery",
"id": 36224762,
"login": "NightMachinery",
"node_id": "MDQ6VXNlcjM2MjI0NzYy",
"organizations_url": "https://api.github.com/users/NightMachinery/orgs",
"received_events_url": "https://api.github.com/users/NightMachinery/received_events",
"repos_url": "https://api.github.com/users/NightMachinery/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NightMachinery"
} | [] | open | false | null | [] | null | 1 | "2023-07-31T11:12:19Z" | "2023-08-01T11:22:43Z" | null | CONTRIBUTOR | null | null | null | ### Describe the bug
Doing a simple `some_dataset[:10]` can take more than a minute.
Profiling it:
<img width="1280" alt="image" src="https://github.com/huggingface/datasets/assets/36224762/e641fb95-ff02-4072-9016-5416a65f75ab">
`some_dataset` is completely in memory with no disk cache.
This is proving fatal to my usage of HF Datasets. Is there a way I can forgo the arrow format and store the dataset as PyTorch tensors so that `_tensorize` is not needed? And is `_consolidate` supposed to take this long?
It's faster to produce the dataset from scratch than to access it from HF Datasets!
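One hedged workaround sketch: materialize the columns as plain tensors once, save them, and bypass Arrow on the hot path (assumes numeric, fixed-shape columns; names and paths are hypothetical):
```python
import torch

some_dataset.set_format(type="torch")
tensors = {col: some_dataset[col] for col in some_dataset.column_names}  # one-time tensorize
torch.save(tensors, "dataset_tensors.pt")  # hypothetical path
tensors = torch.load("dataset_tensors.pt")  # cheap plain-tensor access afterwards
```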
### Steps to reproduce the bug
I have uploaded the dataset that causes this problem [here](https://huggingface.co/datasets/NightMachinery/hf_datasets_bug1).
```python
#!/usr/bin/env python3
import sys
import time
import torch
from datasets import load_dataset
def main(dataset_name):
    # Start the timer
    start_time = time.time()

    # Load the dataset from Hugging Face Hub
    dataset = load_dataset(dataset_name)

    # Set the dataset format as torch
    dataset.set_format(type="torch")

    # Perform an identity map
    dataset = dataset.map(lambda example: example, batched=True, batch_size=20)

    # End the timer
    end_time = time.time()

    # Print the time taken
    print(f"Time taken: {end_time - start_time:.2f} seconds")

if __name__ == "__main__":
    dataset_name = "NightMachinery/hf_datasets_bug1"
    print(f"dataset_name: {dataset_name}")
    main(dataset_name)
```
### Expected behavior
_
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6104/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6104/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6103/comments | https://api.github.com/repos/huggingface/datasets/issues/6103/events | https://github.com/huggingface/datasets/pull/6103 | 1,828,515,165 | PR_kwDODunzps5Ww2gV | 6,103 | Set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 3 | "2023-07-31T06:44:05Z" | "2023-07-31T06:55:58Z" | "2023-07-31T06:45:41Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6103.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6103",
"merged_at": "2023-07-31T06:45:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6103.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6103"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6103/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6103/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6102/comments | https://api.github.com/repos/huggingface/datasets/issues/6102/events | https://github.com/huggingface/datasets/pull/6102 | 1,828,494,896 | PR_kwDODunzps5WwyGy | 6,102 | Release 2.14.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 4 | "2023-07-31T06:27:47Z" | "2023-07-31T06:48:09Z" | "2023-07-31T06:32:58Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6102.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6102",
"merged_at": "2023-07-31T06:32:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6102.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6102"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6102/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6102/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6101/comments | https://api.github.com/repos/huggingface/datasets/issues/6101/events | https://github.com/huggingface/datasets/pull/6101 | 1,828,469,648 | PR_kwDODunzps5WwspW | 6,101 | Release 2.14.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 3 | "2023-07-31T06:05:36Z" | "2023-07-31T06:33:00Z" | "2023-07-31T06:18:17Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6101.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6101",
"merged_at": "2023-07-31T06:18:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6101.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6101"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6101/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6101/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6100/comments | https://api.github.com/repos/huggingface/datasets/issues/6100/events | https://github.com/huggingface/datasets/issues/6100 | 1,828,118,930 | I_kwDODunzps5s9uGS | 6,100 | TypeError when loading from GCP bucket | {
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}",
"followers_url": "https://api.github.com/users/bilelomrani1/followers",
"following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}",
"gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bilelomrani1",
"id": 16692099,
"login": "bilelomrani1",
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"organizations_url": "https://api.github.com/users/bilelomrani1/orgs",
"received_events_url": "https://api.github.com/users/bilelomrani1/received_events",
"repos_url": "https://api.github.com/users/bilelomrani1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bilelomrani1"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | 2 | "2023-07-30T23:03:00Z" | "2023-08-03T10:00:48Z" | "2023-08-01T10:38:55Z" | NONE | null | null | null | ### Describe the bug
Loading a dataset from a GCP bucket raises a type error. This bug was introduced recently (either in 2.14 or 2.14.1), and appeared during a migration from 2.13.1.
### Steps to reproduce the bug
Load any file from a GCP bucket:
```python
import datasets
datasets.load_dataset("json", data_files=["gs://..."])
```
The following exception is raised:
```python
Traceback (most recent call last):
...
packages/datasets/data_files.py", line 335, in resolve_pattern
protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else ""
TypeError: can only concatenate tuple (not "str") to tuple
```
With `gcsfs`'s `GCSFileSystem`, the attribute `fs.protocol` is the tuple `('gs', 'gcs')` and hence cannot be concatenated with a string.
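A minimal sketch of one possible fix (my assumption, not necessarily the merged patch): normalize the protocol before concatenating, taking the first entry when it is a tuple:
```python
# `fs.protocol` is a str for most filesystems but a tuple for e.g. gcsfs.
protocol = fs.protocol if isinstance(fs.protocol, str) else fs.protocol[0]
protocol_prefix = protocol + "://" if protocol != "file" else ""
```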
### Expected behavior
The file should be loaded without exception.
### Environment info
- `datasets` version: 2.14.1
- Platform: macOS-13.2.1-x86_64-i386-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6100/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6100/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6099/comments | https://api.github.com/repos/huggingface/datasets/issues/6099/events | https://github.com/huggingface/datasets/issues/6099 | 1,827,893,576 | I_kwDODunzps5s83FI | 6,099 | How do I get "amazon_us_reviews" | {
"avatar_url": "https://avatars.githubusercontent.com/u/57810189?v=4",
"events_url": "https://api.github.com/users/IqraBaluch/events{/privacy}",
"followers_url": "https://api.github.com/users/IqraBaluch/followers",
"following_url": "https://api.github.com/users/IqraBaluch/following{/other_user}",
"gists_url": "https://api.github.com/users/IqraBaluch/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/IqraBaluch",
"id": 57810189,
"login": "IqraBaluch",
"node_id": "MDQ6VXNlcjU3ODEwMTg5",
"organizations_url": "https://api.github.com/users/IqraBaluch/orgs",
"received_events_url": "https://api.github.com/users/IqraBaluch/received_events",
"repos_url": "https://api.github.com/users/IqraBaluch/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/IqraBaluch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IqraBaluch/subscriptions",
"type": "User",
"url": "https://api.github.com/users/IqraBaluch"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 10 | "2023-07-30T11:02:17Z" | "2023-08-21T05:08:08Z" | "2023-08-10T05:02:35Z" | NONE | null | null | null | ### Feature request
I have been trying to load the `amazon_us_reviews` dataset but am unable to do so.
`amazon_us_reviews = load_dataset('amazon_us_reviews')`
`print(amazon_us_reviews)`
> [ValueError: Config name is missing.
Please pick one among the available configs: ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1_00', 'Video_DVD_v1_00', 'Video_v1_00', 'Toys_v1_00', 'Tools_v1_00', 'Sports_v1_00', 'Software_v1_00', 'Shoes_v1_00', 'Pet_Products_v1_00', 'Personal_Care_Appliances_v1_00', 'PC_v1_00', 'Outdoors_v1_00', 'Office_Products_v1_00', 'Musical_Instruments_v1_00', 'Music_v1_00', 'Mobile_Electronics_v1_00', 'Mobile_Apps_v1_00', 'Major_Appliances_v1_00', 'Luggage_v1_00', 'Lawn_and_Garden_v1_00', 'Kitchen_v1_00', 'Jewelry_v1_00', 'Home_Improvement_v1_00', 'Home_Entertainment_v1_00', 'Home_v1_00', 'Health_Personal_Care_v1_00', 'Grocery_v1_00', 'Gift_Card_v1_00', 'Furniture_v1_00', 'Electronics_v1_00', 'Digital_Video_Games_v1_00', 'Digital_Video_Download_v1_00', 'Digital_Software_v1_00', 'Digital_Music_Purchase_v1_00', 'Digital_Ebook_Purchase_v1_00', 'Camera_v1_00', 'Books_v1_00', 'Beauty_v1_00', 'Baby_v1_00', 'Automotive_v1_00', 'Apparel_v1_00', 'Digital_Ebook_Purchase_v1_01', 'Books_v1_01', 'Books_v1_02']
Example of usage:
`load_dataset('amazon_us_reviews', 'Wireless_v1_00')`]
__________________________________________________________________________
`amazon_us_reviews = load_dataset('amazon_us_reviews', 'Watches_v1_00')
print(amazon_us_reviews)`
**ERROR**
Generating train split: 0% | 0/960872 [00:00<?, ? examples/s]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1692 )
-> 1693 example = self.info.features.encode_example(record) if self.info.features is not None else record
1694 writer.write(example, key)
[... 11 frames collapsed ...]
KeyError: 'marketplace'
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1710 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1711 e = e.__context__
-> 1712 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1713
1714 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
### Motivation
The dataset I'm using
https://huggingface.co/datasets/amazon_us_reviews
### Your contribution
What is the best way to load this data? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6099/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6099/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6098 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6098/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6098/comments | https://api.github.com/repos/huggingface/datasets/issues/6098/events | https://github.com/huggingface/datasets/pull/6098 | 1,827,655,071 | PR_kwDODunzps5WuCn1 | 6,098 | Expanduser in save_to_disk() | {
"avatar_url": "https://avatars.githubusercontent.com/u/51715864?v=4",
"events_url": "https://api.github.com/users/Unknown3141592/events{/privacy}",
"followers_url": "https://api.github.com/users/Unknown3141592/followers",
"following_url": "https://api.github.com/users/Unknown3141592/following{/other_user}",
"gists_url": "https://api.github.com/users/Unknown3141592/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Unknown3141592",
"id": 51715864,
"login": "Unknown3141592",
"node_id": "MDQ6VXNlcjUxNzE1ODY0",
"organizations_url": "https://api.github.com/users/Unknown3141592/orgs",
"received_events_url": "https://api.github.com/users/Unknown3141592/received_events",
"repos_url": "https://api.github.com/users/Unknown3141592/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Unknown3141592/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Unknown3141592/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Unknown3141592"
} | [] | closed | false | null | [] | null | 3 | "2023-07-29T20:50:45Z" | "2023-10-27T14:14:11Z" | "2023-10-27T14:04:36Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6098.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6098",
"merged_at": "2023-10-27T14:04:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6098.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6098"
} | Fixes #5651. The same problem occurs when loading from disk, so I fixed it there too.
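A minimal sketch of the idea (local paths only; the actual patch may differ in where this hooks in):
```python
import os

def _expand_path(path: str) -> str:
    # Expand "~" before the path reaches the filesystem layer, so that
    # save_to_disk("~/my_dataset") writes under the user's home directory.
    return os.path.expanduser(path)
```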
I am not sure why the case distinction between local and remote filesystems is even necessary for `DatasetDict` when saving to disk. IMO, this could be removed (leaving only `fs.makedirs(dataset_dict_path, exist_ok=True)`). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6098/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6098/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6097 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6097/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6097/comments | https://api.github.com/repos/huggingface/datasets/issues/6097/events | https://github.com/huggingface/datasets/issues/6097 | 1,827,054,143 | I_kwDODunzps5s5qI_ | 6,097 | Dataset.get_nearest_examples does not return all feature values for the k most similar datapoints - side effect of Dataset.set_format | {
"avatar_url": "https://avatars.githubusercontent.com/u/2538048?v=4",
"events_url": "https://api.github.com/users/aschoenauer-sebag/events{/privacy}",
"followers_url": "https://api.github.com/users/aschoenauer-sebag/followers",
"following_url": "https://api.github.com/users/aschoenauer-sebag/following{/other_user}",
"gists_url": "https://api.github.com/users/aschoenauer-sebag/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aschoenauer-sebag",
"id": 2538048,
"login": "aschoenauer-sebag",
"node_id": "MDQ6VXNlcjI1MzgwNDg=",
"organizations_url": "https://api.github.com/users/aschoenauer-sebag/orgs",
"received_events_url": "https://api.github.com/users/aschoenauer-sebag/received_events",
"repos_url": "https://api.github.com/users/aschoenauer-sebag/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aschoenauer-sebag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aschoenauer-sebag/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aschoenauer-sebag"
} | [] | closed | false | null | [] | null | 1 | "2023-07-28T20:31:59Z" | "2023-07-28T20:49:58Z" | "2023-07-28T20:49:58Z" | NONE | null | null | null | ### Describe the bug
Hi team!
I observe that there seems to be a side effect of `Dataset.set_format`: after setting a format and creating a FAISS index, the method `get_nearest_examples` from the `Dataset` class fails to retrieve anything but the embeddings themselves, which is not super useful.
Are you able to reproduce what I observe?
### Steps to reproduce the bug
```python
from datasets import Dataset
import numpy as np
foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]}
foo = Dataset.from_dict(foo)
foo.set_format('numpy', ['vectors'])
foo.add_faiss_index('vectors')
new_vector = np.random.random(1024)
scores, res = foo.get_nearest_examples('vectors', new_vector, k=3)
```
This will return only the following for the most similar vectors to `new_vector`; in particular, it will not return the `ids` feature:
```
{'vectors': array([[random values ...]])}
```
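For anyone hitting this, a possible workaround (assuming `output_all_columns` behaves as documented, keeping un-formatted columns in the output):
```python
foo.set_format('numpy', ['vectors'], output_all_columns=True)
```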
### Expected behavior
The expected behavior happens when the `set_format` method is not called:
```python
from datasets import Dataset
import numpy as np
foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]}
foo = Dataset.from_dict(foo)
# foo.set_format('numpy', ['vectors'])
foo.add_faiss_index('vectors')
new_vector = np.random.random(1024)
scores, res = foo.get_nearest_examples('vectors', new_vector, k=3)
```
This *will* return the `ids` of the similar vectors, though unfortunately with a list of lists in lieu of the array (for caching reasons, I think; I read this elsewhere):
```
{'vectors': [[random values on multiple lines...]], 'ids': ['x', 'y', 'z']}
```
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6097/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6097/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6096/comments | https://api.github.com/repos/huggingface/datasets/issues/6096/events | https://github.com/huggingface/datasets/pull/6096 | 1,826,731,091 | PR_kwDODunzps5Wq9Hb | 6,096 | Add `fsspec` support for `to_json`, `to_csv`, and `to_parquet` | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | open | false | null | [] | null | 2 | "2023-07-28T16:36:59Z" | "2023-09-06T13:58:09Z" | null | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6096.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6096",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6096.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6096"
} | Hi to whoever is reading this! 🤗 (Most likely @mariosasko)
## What's in this PR?
This PR replaces the `open` from Python with `fsspec.open` and adds the argument `storage_options` for the methods `to_json`, `to_csv`, and `to_parquet`, to allow users to export any 🤗`Dataset` into a file in a file-system as requested at #6086.
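Conceptually, the change boils down to something like this sketch (simplified; `_write` is an illustrative helper name, not the real internals):
```python
import fsspec

def to_json(self, path_or_buf, storage_options=None, **to_json_kwargs):
    # Open local paths and remote URLs (s3://, gs://, ...) uniformly.
    with fsspec.open(path_or_buf, "wb", **(storage_options or {})) as f:
        return self._write(f, **to_json_kwargs)  # illustrative helper
```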
## What's missing in this PR?
In the `to_json`, `to_csv`, and `to_parquet` docstrings, I've scoped the recently included `storage_options` arg to 2.15.0, so we should check that before merging in case we want to scope it for 2.14.2 instead.
Additionally, should we also add `fsspec` support for the `from_csv`, `from_json`, and `from_parquet` methods? If you want me to do so, @mariosasko, just let me know and I'll create another PR to support that too! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6096/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6096/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6095 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6095/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6095/comments | https://api.github.com/repos/huggingface/datasets/issues/6095/events | https://github.com/huggingface/datasets/pull/6095 | 1,826,496,967 | PR_kwDODunzps5WqJtr | 6,095 | Fix deprecation of errors in TextConfig | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 3 | "2023-07-28T14:08:37Z" | "2023-07-31T05:26:32Z" | "2023-07-31T05:17:38Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6095.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6095",
"merged_at": "2023-07-31T05:17:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6095.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6095"
} | This PR fixes an issue with the deprecation of `errors` in `TextConfig` introduced by:
- #5974
```python
In [1]: ds = load_dataset("text", data_files="test.txt", errors="strict")
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-13-701c27131a5d> in <module>
----> 1 ds = load_dataset("text", data_files="test.txt", errors="strict")
~/huggingface/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2107
2108 # Create a dataset builder
-> 2109 builder_instance = load_dataset_builder(
2110 path=path,
2111 name=name,
~/huggingface/datasets/src/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs)
1830 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=dataset_name)
1831 # Instantiate the dataset builder
-> 1832 builder_instance: DatasetBuilder = builder_cls(
1833 cache_dir=cache_dir,
1834 dataset_name=dataset_name,
~/huggingface/datasets/src/datasets/builder.py in __init__(self, cache_dir, dataset_name, config_name, hash, base_path, info, features, token, use_auth_token, repo_id, data_files, data_dir, storage_options, writer_batch_size, name, **config_kwargs)
371 if data_dir is not None:
372 config_kwargs["data_dir"] = data_dir
--> 373 self.config, self.config_id = self._create_builder_config(
374 config_name=config_name,
375 custom_features=features,
~/huggingface/datasets/src/datasets/builder.py in _create_builder_config(self, config_name, custom_features, **config_kwargs)
550 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION:
551 config_kwargs["version"] = self.VERSION
--> 552 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs)
553
554 # otherwise use the config_kwargs to overwrite the attributes
TypeError: __init__() got an unexpected keyword argument 'errors'
```
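For context, the usual shape of such a deprecation shim looks like this (an illustrative stand-in, not the actual `TextConfig` implementation):
```python
import warnings
from dataclasses import dataclass
from typing import Optional

@dataclass
class TextConfig:  # illustrative stand-in
    encoding_errors: Optional[str] = None
    errors: str = "deprecated"

    def __post_init__(self):
        # Keep accepting the old kwarg, but warn and forward it.
        if self.errors != "deprecated":
            warnings.warn(
                "'errors' was deprecated in favor of 'encoding_errors'",
                FutureWarning,
            )
            self.encoding_errors = self.errors
```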
Similar to:
- #6094 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6095/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6095/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6094 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6094/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6094/comments | https://api.github.com/repos/huggingface/datasets/issues/6094/events | https://github.com/huggingface/datasets/pull/6094 | 1,826,293,414 | PR_kwDODunzps5WpdpA | 6,094 | Fix deprecation of use_auth_token in DownloadConfig | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 3 | "2023-07-28T11:52:21Z" | "2023-07-31T05:08:41Z" | "2023-07-31T04:59:50Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6094.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6094",
"merged_at": "2023-07-31T04:59:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6094.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6094"
} | This PR fixes an issue with the deprecation of `use_auth_token` in `DownloadConfig` introduced by:
- #5996
```python
In [1]: from datasets import DownloadConfig
In [2]: DownloadConfig(use_auth_token=False)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-41927b449e72> in <module>
----> 1 DownloadConfig(use_auth_token=False)
TypeError: __init__() got an unexpected keyword argument 'use_auth_token'
```
```python
In [1]: from datasets import get_dataset_config_names
In [2]: get_dataset_config_names("squad", use_auth_token=False)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-22-4671992ead50> in <module>
----> 1 get_dataset_config_names("squad", use_auth_token=False)
~/huggingface/datasets/src/datasets/inspect.py in get_dataset_config_names(path, revision, download_config, download_mode, dynamic_modules_path, data_files, **download_kwargs)
349 ```
350 """
--> 351 dataset_module = dataset_module_factory(
352 path,
353 revision=revision,
~/huggingface/datasets/src/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1374 """
1375 if download_config is None:
-> 1376 download_config = DownloadConfig(**download_kwargs)
1377 download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS)
1378 download_config.extract_compressed_file = True
TypeError: __init__() got an unexpected keyword argument 'use_auth_token'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6094/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6094/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6093/comments | https://api.github.com/repos/huggingface/datasets/issues/6093/events | https://github.com/huggingface/datasets/pull/6093 | 1,826,210,490 | PR_kwDODunzps5WpLfh | 6,093 | Deprecate `download_custom` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 6 | "2023-07-28T10:49:06Z" | "2023-08-21T17:51:34Z" | "2023-07-28T11:30:02Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6093.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6093",
"merged_at": "2023-07-28T11:30:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6093.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6093"
} | Deprecate `DownloadManager.download_custom`. Users should use `fsspec` URLs (cacheable) or make direct requests with `fsspec`/`requests` (not cacheable) instead.
We should deprecate this method as it's not compatible with streaming, and implementing the streaming version of it is hard/impossible. There have been requests to implement the streaming version of this method on the forum, but the reason for this seems to be a tip in the docs that "promotes" this method (this PR removes it).
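For migration, something along these lines should cover most `download_custom` uses (a sketch; the URL and processing function are placeholders):
```python
import fsspec

# Direct (non-cacheable) replacement for a custom download function:
with fsspec.open("https://example.com/data.bin", "rb") as f:  # placeholder URL
    raw = f.read()
processed = my_custom_processing(raw)  # hypothetical user function
```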
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6093/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6093/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6092/comments | https://api.github.com/repos/huggingface/datasets/issues/6092/events | https://github.com/huggingface/datasets/pull/6092 | 1,826,111,806 | PR_kwDODunzps5Wo1mh | 6,092 | Minor fix in `iter_files` for hidden files | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 3 | "2023-07-28T09:50:12Z" | "2023-07-28T10:59:28Z" | "2023-07-28T10:50:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6092.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6092",
"merged_at": "2023-07-28T10:50:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6092.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6092"
} | Fix #6090 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6092/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6092/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6091 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6091/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6091/comments | https://api.github.com/repos/huggingface/datasets/issues/6091/events | https://github.com/huggingface/datasets/pull/6091 | 1,826,086,487 | PR_kwDODunzps5Wov9Q | 6,091 | Bump fsspec from 2021.11.1 to 2022.3.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 3 | "2023-07-28T09:37:15Z" | "2023-07-28T10:16:11Z" | "2023-07-28T10:07:02Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6091.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6091",
"merged_at": "2023-07-28T10:07:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6091.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6091"
} | Fix https://github.com/huggingface/datasets/issues/6087
(Colab installs 2023.6.0, so we should be good) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6091/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6091/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6090/comments | https://api.github.com/repos/huggingface/datasets/issues/6090/events | https://github.com/huggingface/datasets/issues/6090 | 1,825,865,043 | I_kwDODunzps5s1H1T | 6,090 | FilesIterable skips all the files after a hidden file | {
"avatar_url": "https://avatars.githubusercontent.com/u/10785413?v=4",
"events_url": "https://api.github.com/users/dkrivosic/events{/privacy}",
"followers_url": "https://api.github.com/users/dkrivosic/followers",
"following_url": "https://api.github.com/users/dkrivosic/following{/other_user}",
"gists_url": "https://api.github.com/users/dkrivosic/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dkrivosic",
"id": 10785413,
"login": "dkrivosic",
"node_id": "MDQ6VXNlcjEwNzg1NDEz",
"organizations_url": "https://api.github.com/users/dkrivosic/orgs",
"received_events_url": "https://api.github.com/users/dkrivosic/received_events",
"repos_url": "https://api.github.com/users/dkrivosic/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dkrivosic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkrivosic/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dkrivosic"
} | [] | closed | false | null | [] | null | 1 | "2023-07-28T07:25:57Z" | "2023-07-28T10:51:14Z" | "2023-07-28T10:50:11Z" | NONE | null | null | null | ### Describe the bug
When `FilesIterable` is initialized with a list of file paths via `FilesIterable.from_paths`, it discards all the files after a hidden file.
The problem is in [this line](https://github.com/huggingface/datasets/blob/88896a7b28610ace95e444b94f9a4bc332cc1ee3/src/datasets/download/download_manager.py#L233C26-L233C26) where `return` should be replaced by `continue`.
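The intended loop shape would then be (a sketch of the logic, not the exact source):
```python
import os

def iter_files(file_paths):
    for file_path in file_paths:
        if os.path.basename(file_path).startswith((".", "__")):
            continue  # skip this hidden file; `return` would drop all later files
        yield file_path
```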
### Steps to reproduce the bug
https://colab.research.google.com/drive/1SQlxs4y_LSo1Q89KnFoYDSyyKEISun_J#scrollTo=93K4_blkW-8-
### Expected behavior
The script should print all the files except the hidden one.
### Environment info
- `datasets` version: 2.14.1
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6090/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6090/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6089/comments | https://api.github.com/repos/huggingface/datasets/issues/6089/events | https://github.com/huggingface/datasets/issues/6089 | 1,825,761,476 | I_kwDODunzps5s0ujE | 6,089 | AssertionError: daemonic processes are not allowed to have children | {
"avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4",
"events_url": "https://api.github.com/users/codingl2k1/events{/privacy}",
"followers_url": "https://api.github.com/users/codingl2k1/followers",
"following_url": "https://api.github.com/users/codingl2k1/following{/other_user}",
"gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/codingl2k1",
"id": 138426806,
"login": "codingl2k1",
"node_id": "U_kgDOCEA5tg",
"organizations_url": "https://api.github.com/users/codingl2k1/orgs",
"received_events_url": "https://api.github.com/users/codingl2k1/received_events",
"repos_url": "https://api.github.com/users/codingl2k1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/codingl2k1"
} | [] | open | false | null | [] | null | 2 | "2023-07-28T06:04:00Z" | "2023-07-31T02:34:02Z" | null | NONE | null | null | null | ### Describe the bug
When I call `load_dataset` with `num_proc > 0` in a daemon process, I get an error:
```python
File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 564, in download_and_extract
return self.extract(self.download(url_or_urls))
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 427, in download
downloaded_path_or_paths = map_nested(
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 468, in map_nested
mapped = parallel_map(function, iterable, num_proc, types, disable_tqdm, desc, _single_map_nested)
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/experimental.py", line 40, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/parallel/parallel.py", line 34, in parallel_map
return _map_with_multiprocessing_pool(
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/parallel/parallel.py", line 64, in _map_with_multiprocessing_pool
with Pool(num_proc, initargs=initargs, initializer=initializer) as pool:
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 215, in __init__
self._repopulate_pool()
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 329, in _repopulate_pool_static
w.start()
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/process.py", line 118, in start
    assert not _current_process._config.get('daemon'), \
           'daemonic processes are not allowed to have children'
AssertionError: daemonic processes are not allowed to have children
```
The download is I/O-intensive, so maybe `datasets` could replace the multiprocessing pool with a multithreading pool when running in a daemon process.
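Something like this could work (a sketch; whether `datasets` wants such a fallback is of course up to the maintainers):
```python
import multiprocessing
from multiprocessing.pool import Pool, ThreadPool

def make_pool(num_proc):
    # Daemonic processes may not fork children, so fall back to threads,
    # which is fine for I/O-bound downloads.
    if multiprocessing.current_process().daemon:
        return ThreadPool(num_proc)
    return Pool(num_proc)
```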
### Steps to reproduce the bug
1. Start a daemon process
2. Run `load_dataset` with `num_proc > 0`
### Expected behavior
No error.
### Environment info
Python 3.11.4
datasets latest master | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6089/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6089/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6088/comments | https://api.github.com/repos/huggingface/datasets/issues/6088/events | https://github.com/huggingface/datasets/issues/6088 | 1,825,665,235 | I_kwDODunzps5s0XDT | 6,088 | Loading local data files initiates web requests | {
"avatar_url": "https://avatars.githubusercontent.com/u/23375707?v=4",
"events_url": "https://api.github.com/users/lytning98/events{/privacy}",
"followers_url": "https://api.github.com/users/lytning98/followers",
"following_url": "https://api.github.com/users/lytning98/following{/other_user}",
"gists_url": "https://api.github.com/users/lytning98/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lytning98",
"id": 23375707,
"login": "lytning98",
"node_id": "MDQ6VXNlcjIzMzc1NzA3",
"organizations_url": "https://api.github.com/users/lytning98/orgs",
"received_events_url": "https://api.github.com/users/lytning98/received_events",
"repos_url": "https://api.github.com/users/lytning98/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lytning98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lytning98/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lytning98"
} | [] | closed | false | null | [] | null | 0 | "2023-07-28T04:06:26Z" | "2023-07-28T05:02:22Z" | "2023-07-28T05:02:22Z" | NONE | null | null | null | As documented in the [official docs](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/loading_methods#datasets.load_dataset.example-2), I tried to load datasets from local files by
```python
# Load a JSON file
from datasets import load_dataset
ds = load_dataset('json', data_files='path/to/local/my_dataset.json')
```
But this failed while making a web request, because I'm executing the script on a machine without Internet access. The stack trace shows:
```
in PackagedDatasetModuleFactory.__init__(self, name, data_dir, data_files, download_config, download_mode)
940 self.download_config = download_config
941 self.download_mode = download_mode
--> 942 increase_load_count(name, resource_type="dataset")
```
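For reference, a sketch of the offline-mode workaround (the variable must be set before `datasets` is imported):
```python
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"  # read by datasets at import time

from datasets import load_dataset  # no web requests from here on

ds = load_dataset('json', data_files='path/to/local/my_dataset.json')
```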
Setting an environment variable to run in offline mode, as sketched above, is indeed what I found in the source code. I'm just wondering: is it expected behaviour that even loading a LOCAL JSON file requires Internet access by default? And what's the point of calling `increase_load_count` on some server when loading just LOCAL data files? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6088/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6088/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6087/comments | https://api.github.com/repos/huggingface/datasets/issues/6087/events | https://github.com/huggingface/datasets/issues/6087 | 1,825,133,741 | I_kwDODunzps5syVSt | 6,087 | fsspec dependency is set too low | {
"avatar_url": "https://avatars.githubusercontent.com/u/1085885?v=4",
"events_url": "https://api.github.com/users/iXce/events{/privacy}",
"followers_url": "https://api.github.com/users/iXce/followers",
"following_url": "https://api.github.com/users/iXce/following{/other_user}",
"gists_url": "https://api.github.com/users/iXce/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iXce",
"id": 1085885,
"login": "iXce",
"node_id": "MDQ6VXNlcjEwODU4ODU=",
"organizations_url": "https://api.github.com/users/iXce/orgs",
"received_events_url": "https://api.github.com/users/iXce/received_events",
"repos_url": "https://api.github.com/users/iXce/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iXce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iXce/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iXce"
} | [] | closed | false | null | [] | null | 1 | "2023-07-27T20:08:22Z" | "2023-07-28T10:07:56Z" | "2023-07-28T10:07:03Z" | NONE | null | null | null | ### Describe the bug
`fsspec.callbacks.TqdmCallback` (used in https://github.com/huggingface/datasets/blob/73bed12ecda17d1573fd3bf73ed5db24d3622f86/src/datasets/utils/file_utils.py#L338) was first released in fsspec [2022.3.0](https://github.com/fsspec/filesystem_spec/releases/tag/2022.3.0) (commit where it was added: https://github.com/fsspec/filesystem_spec/commit/9577c8a482eb0a69092913b81580942a68d66a76#diff-906155c7e926a9ff58b9f23369bb513b09b445f5b0f41fa2a84015d0b471c68cR180); however, the dependency is set to 2021.11.1 in https://github.com/huggingface/datasets/blob/main/setup.py#L129
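The corresponding fix would simply bump the pin in `setup.py`, e.g. (a sketch; the exact extras may differ):
```python
REQUIRED_PKGS = [
    "fsspec[http]>=2022.3.0",  # first release that ships TqdmCallback
]
```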
### Steps to reproduce the bug
1. Install fsspec==2021.11.1
2. Install latest datasets==2.14.1
3. Import `datasets`; the import fails due to the lack of `fsspec.callbacks.TqdmCallback`
### Expected behavior
No import issue
### Environment info
N/A | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6087/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6087/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6086/comments | https://api.github.com/repos/huggingface/datasets/issues/6086/events | https://github.com/huggingface/datasets/issues/6086 | 1,825,009,268 | I_kwDODunzps5sx250 | 6,086 | Support `fsspec` in `Dataset.to_<format>` methods | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
}
] | null | 4 | "2023-07-27T19:08:37Z" | "2023-07-28T15:28:26Z" | null | CONTRIBUTOR | null | null | null | Supporting this should be fairly easy.
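A rough sketch of what this could look like for `to_parquet` (illustration only; `url_to_fs` is a real fsspec helper, but the wrapper function and the exact integration point inside `datasets` are assumptions on my side):

```python
from fsspec.core import url_to_fs

def to_parquet_on_fsspec(dataset, urlpath, storage_options=None, **parquet_kwargs):
    # Resolve "s3://...", "gs://...", etc. into a filesystem + path pair.
    fs, path = url_to_fs(urlpath, **(storage_options or {}))
    with fs.open(path, "wb") as f:
        # Dataset.to_parquet already accepts file-like objects.
        return dataset.to_parquet(f, **parquet_kwargs)

# e.g.: to_parquet_on_fsspec(ds, "s3://my-bucket/data.parquet", storage_options={"anon": False})
```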
Requested on the forum [here](https://discuss.huggingface.co/t/how-can-i-convert-a-loaded-dataset-in-to-a-parquet-file-and-save-it-to-the-s3/48353). | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6086/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6086/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6085/comments | https://api.github.com/repos/huggingface/datasets/issues/6085/events | https://github.com/huggingface/datasets/pull/6085 | 1,824,985,188 | PR_kwDODunzps5WlAyA | 6,085 | Fix `fsspec` download | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | 3 | "2023-07-27T18:54:47Z" | "2023-07-27T19:06:13Z" | null | CONTRIBUTOR | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6085.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6085",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6085.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6085"
} | Testing `ds = load_dataset("audiofolder", data_files="s3://datasets.huggingface.co/SpeechCommands/v0.01/v0.01_test.tar.gz", storage_options={"anon": True})` and trying to fix the issues raised by `fsspec` ...
TODO: fix the following error by "preparing `storage_options`" for the fsspec head/get:
```
self.session = aiobotocore.session.AioSession(**self.kwargs)
TypeError: __init__() got an unexpected keyword argument 'hf'
```
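One possible shape for that preparation step (a minimal sketch, assuming the stray `hf` entry just needs to be filtered out before the S3 session is created; the helper name is made up):

```python
# Hypothetical helper: drop datasets-internal keys that aiobotocore does not understand.
def _prepare_storage_options_for_fsspec(storage_options: dict) -> dict:
    return {key: value for key, value in storage_options.items() if key != "hf"}
```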
by "preparing `storage_options`" for the `fsspec` head/get | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6085/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6085/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6084/comments | https://api.github.com/repos/huggingface/datasets/issues/6084/events | https://github.com/huggingface/datasets/issues/6084 | 1,824,896,761 | I_kwDODunzps5sxbb5 | 6,084 | Changing pixel values of images in the Winoground dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/90359895?v=4",
"events_url": "https://api.github.com/users/ZitengWangNYU/events{/privacy}",
"followers_url": "https://api.github.com/users/ZitengWangNYU/followers",
"following_url": "https://api.github.com/users/ZitengWangNYU/following{/other_user}",
"gists_url": "https://api.github.com/users/ZitengWangNYU/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZitengWangNYU",
"id": 90359895,
"login": "ZitengWangNYU",
"node_id": "MDQ6VXNlcjkwMzU5ODk1",
"organizations_url": "https://api.github.com/users/ZitengWangNYU/orgs",
"received_events_url": "https://api.github.com/users/ZitengWangNYU/received_events",
"repos_url": "https://api.github.com/users/ZitengWangNYU/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZitengWangNYU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZitengWangNYU/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZitengWangNYU"
} | [] | open | false | null | [] | null | 0 | "2023-07-27T17:55:35Z" | "2023-07-27T17:55:35Z" | null | NONE | null | null | null | Hi, I followed the instructions with the latest "datasets" version:
"
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)
"
I got slightly different datasets in Colab and in my HPC environment. Specifically, the pixel values of the images are slightly different.
I thought it was due to a package version difference, but this morning I found out that my Winoground dataset in Colab became the same as the one in my HPC environment. The dataset in Colab used to produce the correct result, but now that is gone as well.
Can you help me with this? What could cause the dataset to have the wrong pixel values?
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6084/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6084/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6083/comments | https://api.github.com/repos/huggingface/datasets/issues/6083/events | https://github.com/huggingface/datasets/pull/6083 | 1,824,832,348 | PR_kwDODunzps5WkgAI | 6,083 | set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 3 | "2023-07-27T17:10:41Z" | "2023-07-27T17:22:05Z" | "2023-07-27T17:11:01Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6083.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6083",
"merged_at": "2023-07-27T17:11:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6083.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6083"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6083/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6083/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6082/comments | https://api.github.com/repos/huggingface/datasets/issues/6082/events | https://github.com/huggingface/datasets/pull/6082 | 1,824,819,672 | PR_kwDODunzps5WkdIn | 6,082 | Release: 2.14.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 6 | "2023-07-27T17:05:54Z" | "2023-07-31T06:32:16Z" | "2023-07-27T17:08:38Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6082.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6082",
"merged_at": "2023-07-27T17:08:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6082.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6082"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6082/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6082/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6081/comments | https://api.github.com/repos/huggingface/datasets/issues/6081/events | https://github.com/huggingface/datasets/pull/6081 | 1,824,486,278 | PR_kwDODunzps5WjU0k | 6,081 | Deprecate `Dataset.export` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 2 | "2023-07-27T14:22:18Z" | "2023-07-28T11:09:54Z" | "2023-07-28T11:01:04Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6081.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6081",
"merged_at": "2023-07-28T11:01:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6081.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6081"
} | Deprecate `Dataset.export` that generates a TFRecord file from a dataset as this method is undocumented, and the usage seems low. Users should use [TFRecordWriter](https://www.tensorflow.org/api_docs/python/tf/io/TFRecordWriter#write) or the official [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) tutorial (on which this method is based) to write TFRecord files instead. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6081/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6081/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6080/comments | https://api.github.com/repos/huggingface/datasets/issues/6080/events | https://github.com/huggingface/datasets/pull/6080 | 1,822,667,554 | PR_kwDODunzps5WdL4K | 6,080 | Remove README link to deprecated Colab notebook | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 3 | "2023-07-26T15:27:49Z" | "2023-07-26T16:24:43Z" | "2023-07-26T16:14:34Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6080.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6080",
"merged_at": "2023-07-26T16:14:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6080.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6080"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6080/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6080/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6079/comments | https://api.github.com/repos/huggingface/datasets/issues/6079/events | https://github.com/huggingface/datasets/issues/6079 | 1,822,597,471 | I_kwDODunzps5soqFf | 6,079 | Iterating over DataLoader based on HF datasets is stuck forever | {
"avatar_url": "https://avatars.githubusercontent.com/u/5454868?v=4",
"events_url": "https://api.github.com/users/arindamsarkar93/events{/privacy}",
"followers_url": "https://api.github.com/users/arindamsarkar93/followers",
"following_url": "https://api.github.com/users/arindamsarkar93/following{/other_user}",
"gists_url": "https://api.github.com/users/arindamsarkar93/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arindamsarkar93",
"id": 5454868,
"login": "arindamsarkar93",
"node_id": "MDQ6VXNlcjU0NTQ4Njg=",
"organizations_url": "https://api.github.com/users/arindamsarkar93/orgs",
"received_events_url": "https://api.github.com/users/arindamsarkar93/received_events",
"repos_url": "https://api.github.com/users/arindamsarkar93/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arindamsarkar93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arindamsarkar93/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arindamsarkar93"
} | [] | closed | false | null | [] | null | 15 | "2023-07-26T14:52:37Z" | "2024-02-07T17:46:52Z" | "2023-07-30T14:09:06Z" | NONE | null | null | null | ### Describe the bug
I am using an Amazon SageMaker notebook (Amazon Linux 2) with a Python 3.10 based conda environment.
I have a dataset in parquet format locally. When I try to iterate over it, the loader gets stuck forever. Note that the same code works seamlessly in a Python 3.6 based conda environment. What should my next steps be?
### Steps to reproduce the bug
```python
import time

from datasets import load_dataset
from torch.utils.data import DataLoader

train_dataset = load_dataset(
    "parquet", data_files={'train': tr_data_path + '*.parquet'},
    split='train',
    streaming=True
).with_format('torch')

# Note: collate_fn belongs to the DataLoader, not to load_dataset;
# streaming_data_collate_fn is defined elsewhere in my code.
train_dataloader = DataLoader(train_dataset, batch_size=2, num_workers=0, collate_fn=streaming_data_collate_fn)

t = time.time()
iter_ = 0
for batch in train_dataloader:
    iter_ += 1
    if iter_ == 1000:
        break
print(time.time() - t)
```
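To narrow this down, it might help to iterate the streaming dataset directly, without the DataLoader, to see whether the hang is on the `datasets` side or the `torch` side (a diagnostic sketch reusing the names from the snippet above):

```python
# If this also hangs, the problem is in the streaming dataset itself,
# not in the DataLoader wrapping it.
it = iter(train_dataset)
t = time.time()
for _ in range(10):
    next(it)
print(time.time() - t)
```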
### Expected behavior
The snippet should work normally and load the next batch of data.
### Environment info
datasets: '2.14.0'
pyarrow: '12.0.0'
torch: '2.0.0'
Python: 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0]
!uname -r
5.10.178-162.673.amzn2.x86_64 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6079/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6079/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6078/comments | https://api.github.com/repos/huggingface/datasets/issues/6078/events | https://github.com/huggingface/datasets/issues/6078 | 1,822,501,472 | I_kwDODunzps5soSpg | 6,078 | resume_download with streaming=True | {
"avatar_url": "https://avatars.githubusercontent.com/u/72763959?v=4",
"events_url": "https://api.github.com/users/NicolasMICAUX/events{/privacy}",
"followers_url": "https://api.github.com/users/NicolasMICAUX/followers",
"following_url": "https://api.github.com/users/NicolasMICAUX/following{/other_user}",
"gists_url": "https://api.github.com/users/NicolasMICAUX/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NicolasMICAUX",
"id": 72763959,
"login": "NicolasMICAUX",
"node_id": "MDQ6VXNlcjcyNzYzOTU5",
"organizations_url": "https://api.github.com/users/NicolasMICAUX/orgs",
"received_events_url": "https://api.github.com/users/NicolasMICAUX/received_events",
"repos_url": "https://api.github.com/users/NicolasMICAUX/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NicolasMICAUX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NicolasMICAUX/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NicolasMICAUX"
} | [] | closed | false | null | [] | null | 3 | "2023-07-26T14:08:22Z" | "2023-07-28T11:05:03Z" | "2023-07-28T11:05:03Z" | NONE | null | null | null | ### Describe the bug
I used:
```python
from datasets import load_dataset

dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,
    split="train"
)
```
Unfortunately, the server had a problem during the training process. I saved the step at which my training stopped.
But how can I resume the download from step 1,000,000 without re-streaming the first 1 million docs of the dataset?
`download_config=DownloadConfig(resume_download=True)` does not seem to work with `streaming=True`.
### Steps to reproduce the bug
```python
from datasets import load_dataset, DownloadConfig

dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,  # optional
    split="train",
    download_config=DownloadConfig(resume_download=True)
)
# interrupt the run and relaunch it => this restarts from scratch
```
### Expected behavior
I would expect a parameter to start streaming from a given index in the dataset.
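For what it's worth, `IterableDataset.skip()` seems close to this (a sketch; as far as I can tell it still iterates over and discards the skipped examples under the hood, so it saves re-processing but not necessarily re-downloading):

```python
from datasets import load_dataset

dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,
    split="train",
)
resumed = dataset.skip(1_000_000)  # continue iteration after the first 1M examples
```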
### Environment info
- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6078/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6078/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6077/comments | https://api.github.com/repos/huggingface/datasets/issues/6077/events | https://github.com/huggingface/datasets/issues/6077 | 1,822,486,810 | I_kwDODunzps5soPEa | 6,077 | Mapping gets stuck at 99% | {
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Laurent2916",
"id": 21087104,
"login": "Laurent2916",
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Laurent2916"
} | [] | open | false | null | [] | null | 4 | "2023-07-26T14:00:40Z" | "2023-07-28T09:21:07Z" | null | CONTRIBUTOR | null | null | null | ### Describe the bug
Hi !
I'm currently working with a large (~150GB) unnormalized dataset at work.
The dataset is available on a read-only filesystem internally, and I use a [loading script](https://huggingface.co/docs/datasets/dataset_script) to retrieve it.
I want to normalize the features of the dataset, meaning I need to compute the mean and standard deviation metric for each feature of the entire dataset. I cannot load the entire dataset to RAM as it is too big, so following [this discussion on the huggingface discourse](https://discuss.huggingface.co/t/copy-columns-in-a-dataset-and-compute-statistics-for-a-column/22157) I am using a [map operation](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/main_classes#datasets.Dataset.map) to first compute the metrics and a second map operation to apply them on the dataset.
The problem lies in the second mapping, as it gets stuck at ~99%. By checking what the process does (using `htop` and `strace`) it seems to be doing a lot of I/O operations, and I'm not sure why.
Obviously, I could always normalize the dataset externally and then load it using a loading script. However, since the internal dataset is updated fairly frequently, using the library to perform normalization automatically would make it much easier for me.
### Steps to reproduce the bug
I'm able to reproduce the problem using the following scripts:
```python
# random_data.py
import datasets
import torch

_VERSION = "1.0.0"


class RandomDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            version=_VERSION,
            supervised_keys=None,
            features=datasets.Features(
                {
                    "positions": datasets.Array2D(
                        shape=(30000, 3),
                        dtype="float32",
                    ),
                    "normals": datasets.Array2D(
                        shape=(30000, 3),
                        dtype="float32",
                    ),
                    "features": datasets.Array2D(
                        shape=(30000, 6),
                        dtype="float32",
                    ),
                    "scalars": datasets.Sequence(
                        feature=datasets.Value("float32"),
                        length=20,
                    ),
                },
            ),
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,  # type: ignore
                gen_kwargs={"nb_samples": 1000},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,  # type: ignore
                gen_kwargs={"nb_samples": 100},
            ),
        ]

    def _generate_examples(self, nb_samples: int):
        for idx in range(nb_samples):
            yield idx, {
                "positions": torch.randn(30000, 3),
                "normals": torch.randn(30000, 3),
                "features": torch.randn(30000, 6),
                "scalars": torch.randn(20),
            }
```
```python
# main.py
import datasets
import torch


def apply_mean_std(
    dataset: datasets.Dataset,
    means: dict[str, torch.Tensor],
    stds: dict[str, torch.Tensor],
) -> dict[str, torch.Tensor]:
    """Normalize the dataset using the mean and standard deviation of each feature.

    Args:
        dataset (`Dataset`): A huggingface dataset.
        means (`dict[str, Tensor]`): A dictionary containing the mean of each feature.
        stds (`dict[str, Tensor]`): A dictionary containing the standard deviation of each feature.

    Returns:
        dict: A dictionary containing the normalized dataset.
    """
    result = {}
    for key in means.keys():
        # extract data from dataset
        data: torch.Tensor = dataset[key]  # type: ignore
        # extract mean and std from dict
        mean = means[key]  # type: ignore
        std = stds[key]  # type: ignore
        # normalize data
        normalized_data = (data - mean) / std
        result[key] = normalized_data
    return result


# get dataset
ds = datasets.load_dataset(
    path="random_data.py",
    split="train",
).with_format("torch")

# compute mean (along last axis)
means = {key: torch.zeros(ds[key][0].shape[-1]) for key in ds.column_names}
means_sq = {key: torch.zeros(ds[key][0].shape[-1]) for key in ds.column_names}
for batch in ds.iter(batch_size=8):
    for key in ds.column_names:
        data = batch[key]
        batch_size = data.shape[0]
        data = data.reshape(-1, data.shape[-1])
        means[key] += data.mean(dim=0) / len(ds) * batch_size
        means_sq[key] += (data**2).mean(dim=0) / len(ds) * batch_size

# compute std (along last axis)
stds = {key: torch.sqrt(means_sq[key] - means[key] ** 2) for key in ds.column_names}

# normalize each feature of the dataset
ds_normalized = ds.map(
    desc="Applying mean/std",  # type: ignore
    function=apply_mean_std,
    batched=False,
    fn_kwargs={
        "means": means,
        "stds": stds,
    },
)
```
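As a side note, `Dataset.map` also exposes a `writer_batch_size` argument (default 1000) controlling how many processed rows are buffered before being flushed to the on-disk cache. Whether it is related to the slowdown here is only a guess on my part, but with rows this large it may be worth tuning:

```python
ds_normalized = ds.map(
    desc="Applying mean/std",
    function=apply_mean_std,
    batched=False,
    fn_kwargs={"means": means, "stds": stds},
    writer_batch_size=100,  # smaller flushes: less RAM held at once, more I/O calls
)
```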
### Expected behavior
Using the previous scripts, the `ds_normalized` mapping completes in ~5 minutes, but any subsequent use of `ds_normalized` is extremely slow; for example, reapplying `apply_mean_std` to `ds_normalized` takes forever. This is very strange; I'm sure I must be missing something, but I would still expect this to be faster.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1160.66.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6077/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6077/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6076/comments | https://api.github.com/repos/huggingface/datasets/issues/6076/events | https://github.com/huggingface/datasets/pull/6076 | 1,822,345,597 | PR_kwDODunzps5WcGVR | 6,076 | No gzip encoding from github | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 3 | "2023-07-26T12:46:07Z" | "2023-07-27T16:15:11Z" | "2023-07-27T16:14:40Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6076.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6076",
"merged_at": "2023-07-27T16:14:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6076.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6076"
} | Don't accept gzip encoding from github, otherwise some files are not streamable + seekable.
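The underlying idea, as I understand it (sketched outside the actual `datasets` code paths, with a placeholder URL): request the file with `Accept-Encoding: identity` so GitHub serves raw bytes, keeping HTTP range requests, and therefore seeking, usable.

```python
import requests

# Illustration only: fetch a GitHub-hosted file without gzip transfer-encoding.
response = requests.get(
    "https://github.com/some-org/some-repo/raw/main/data.zip",  # placeholder URL
    headers={"Accept-Encoding": "identity"},
    stream=True,
)
```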
Fixes https://huggingface.co/datasets/code_x_glue_cc_code_to_code_trans/discussions/2#64c0e0c1a04a514ba6303e84
and makes sure https://github.com/huggingface/datasets/issues/2918 works as well. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6076/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6076/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6075/comments | https://api.github.com/repos/huggingface/datasets/issues/6075/events | https://github.com/huggingface/datasets/issues/6075 | 1,822,341,398 | I_kwDODunzps5snrkW | 6,075 | Error loading music files using `load_dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/susnato",
"id": 56069179,
"login": "susnato",
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"repos_url": "https://api.github.com/users/susnato/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"type": "User",
"url": "https://api.github.com/users/susnato"
} | [] | closed | false | null | [] | null | 2 | "2023-07-26T12:44:05Z" | "2023-07-26T13:08:08Z" | "2023-07-26T13:08:08Z" | NONE | null | null | null | ### Describe the bug
I tried to load a music file using `datasets.load_dataset()` from the repository - https://huggingface.co/datasets/susnato/pop2piano_real_music_test
I got the following error -
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__
return self._getitem(key)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2788, in _getitem
formatted_output = format_table(
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 629, in format_table
return formatter(pa_table, query_type=query_type)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 398, in __call__
return self.format_column(pa_table)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 442, in format_column
column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 218, in decode_column
return self.features.decode_column(column, column_name) if self.features else column
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in decode_column
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in <listcomp>
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1325, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/audio.py", line 184, in decode_example
array, sampling_rate = sf.read(f)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 372, in read
with SoundFile(file, 'r', samplerate, channels,
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 740, in __init__
self._file = self._open(file, mode_int, closefd)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1264, in _open
_error_check(_snd.sf_error(file_ptr),
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1455, in _error_check
raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))
RuntimeError: Error opening <_io.BufferedReader name='/home/susnato/.cache/huggingface/datasets/downloads/d2b09cb974b967b13f91553297c40c0f02f3c0d4c8356350743598ff48d6f29e'>: Format not recognised.
```
### Steps to reproduce the bug
Code to reproduce the error -
```python
from datasets import load_dataset
ds = load_dataset("susnato/pop2piano_real_music_test", split="test")
print(ds[0])
```
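As an extra data point, it may help to check which container formats the local libsndfile build supports at all (a diagnostic sketch, not part of the original repro):

```python
import soundfile as sf

# "Format not recognised" can mean the file's container format is missing
# from the formats libsndfile was compiled with.
print(sf.available_formats())
```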
### Expected behavior
I should be able to read the music file without any error.
### Environment info
- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-50-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6075/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6075/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6074/comments | https://api.github.com/repos/huggingface/datasets/issues/6074/events | https://github.com/huggingface/datasets/pull/6074 | 1,822,299,128 | PR_kwDODunzps5Wb8O_ | 6,074 | Misc doc improvements | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 3 | "2023-07-26T12:20:54Z" | "2023-07-27T16:16:28Z" | "2023-07-27T16:16:02Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6074.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6074",
"merged_at": "2023-07-27T16:16:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6074.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6074"
} | Removes the warning about requiring to write a dataset loading script to define multiple configurations, as the README YAML can be used instead (for simple cases). Also, deletes the section about using the `BatchSampler` in `torch<=1.12.1` to speed up loading, as `torch 1.12.1` is over a year old (and `torch 2.0` has been out for a while). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6074/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6074/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6073/comments | https://api.github.com/repos/huggingface/datasets/issues/6073/events | https://github.com/huggingface/datasets/issues/6073 | 1,822,167,804 | I_kwDODunzps5snBL8 | 6,073 | version2.3.2 load_dataset()data_files can't include .xxxx in path | {
"avatar_url": "https://avatars.githubusercontent.com/u/45893496?v=4",
"events_url": "https://api.github.com/users/BUAAChuanWang/events{/privacy}",
"followers_url": "https://api.github.com/users/BUAAChuanWang/followers",
"following_url": "https://api.github.com/users/BUAAChuanWang/following{/other_user}",
"gists_url": "https://api.github.com/users/BUAAChuanWang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BUAAChuanWang",
"id": 45893496,
"login": "BUAAChuanWang",
"node_id": "MDQ6VXNlcjQ1ODkzNDk2",
"organizations_url": "https://api.github.com/users/BUAAChuanWang/orgs",
"received_events_url": "https://api.github.com/users/BUAAChuanWang/received_events",
"repos_url": "https://api.github.com/users/BUAAChuanWang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BUAAChuanWang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BUAAChuanWang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BUAAChuanWang"
} | [] | closed | false | null | [] | null | 1 | "2023-07-26T11:09:31Z" | "2023-08-29T15:53:59Z" | "2023-08-29T15:53:59Z" | NONE | null | null | null | ### Describe the bug
First, I cd workdir.
Then, I just use load_dataset("json", data_file={"train":"/a/b/c/.d/train/train.json", "test":"/a/b/c/.d/train/test.json"})
that couldn't work and
<FileNotFoundError: Unable to find
'/a/b/c/.d/train/train.jsonl' at
/a/b/c/.d/>
And I debug, it is fine in version2.1.2
So there maybe a bug in path join.
Here is the whole bug report:
/x/datasets/loa │
│ d.py:1656 in load_dataset │
│ │
│ 1653 │ ignore_verifications = ignore_verifications or save_infos │
│ 1654 │ │
│ 1655 │ # Create a dataset builder │
│ ❱ 1656 │ builder_instance = load_dataset_builder( │
│ 1657 │ │ path=path, │
│ 1658 │ │ name=name, │
│ 1659 │ │ data_dir=data_dir, │
│ │
│ x/datasets/loa │
│ d.py:1439 in load_dataset_builder │
│ │
│ 1436 │ if use_auth_token is not None: │
│ 1437 │ │ download_config = download_config.copy() if download_config e │
│ 1438 │ │ download_config.use_auth_token = use_auth_token │
│ ❱ 1439 │ dataset_module = dataset_module_factory( │
│ 1440 │ │ path, │
│ 1441 │ │ revision=revision, │
│ 1442 │ │ download_config=download_config, │
│ │
│ x/datasets/loa │
│ d.py:1097 in dataset_module_factory │
│ │
│ 1094 │ │
│ 1095 │ # Try packaged │
│ 1096 │ if path in _PACKAGED_DATASETS_MODULES: │
│ ❱ 1097 │ │ return PackagedDatasetModuleFactory( │
│ 1098 │ │ │ path, │
│ 1099 │ │ │ data_dir=data_dir, │
│ 1100 │ │ │ data_files=data_files, │
│ │
│x/datasets/loa │
│ d.py:743 in get_module │
│ │
│ 740 │ │ │ if self.data_dir is not None │
│ 741 │ │ │ else get_patterns_locally(str(Path().resolve())) │
│ 742 │ │ ) │
│ ❱ 743 │ │ data_files = DataFilesDict.from_local_or_remote( │
│ 744 │ │ │ patterns, │
│ 745 │ │ │ use_auth_token=self.download_config.use_auth_token, │
│ 746 │ │ │ base_path=str(Path(self.data_dir).resolve()) if self.data │
│ │
│ x/datasets/dat │
│ a_files.py:590 in from_local_or_remote │
│ │
│ 587 │ │ out = cls() │
│ 588 │ │ for key, patterns_for_key in patterns.items(): │
│ 589 │ │ │ out[key] = ( │
│ ❱ 590 │ │ │ │ DataFilesList.from_local_or_remote( │
│ 591 │ │ │ │ │ patterns_for_key, │
│ 592 │ │ │ │ │ base_path=base_path, │
│ 593 │ │ │ │ │ allowed_extensions=allowed_extensions, │
│ │
│ /x/datasets/dat │
│ a_files.py:558 in from_local_or_remote │
│ │
│ 555 │ │ use_auth_token: Optional[Union[bool, str]] = None, │
│ 556 │ ) -> "DataFilesList": │
│ 557 │ │ base_path = base_path if base_path is not None else str(Path() │
│ ❱ 558 │ │ data_files = resolve_patterns_locally_or_by_urls(base_path, pa │
│ 559 │ │ origin_metadata = _get_origin_metadata_locally_or_by_urls(data │
│ 560 │ │ return cls(data_files, origin_metadata) │
│ 561 │
│ │
│ /x/datasets/dat │
│ a_files.py:195 in resolve_patterns_locally_or_by_urls │
│ │
│ 192 │ │ if is_remote_url(pattern): │
│ 193 │ │ │ data_files.append(Url(pattern)) │
│ 194 │ │ else: │
│ ❱ 195 │ │ │ for path in _resolve_single_pattern_locally(base_path, pat │
│ 196 │ │ │ │ data_files.append(path) │
│ 197 │ │
│ 198 │ if not data_files: │
│ │
│ /x/datasets/dat │
│ a_files.py:145 in _resolve_single_pattern_locally │
│ │
│ 142 │ │ error_msg = f"Unable to find '{pattern}' at {Path(base_path).r │
│ 143 │ │ if allowed_extensions is not None: │
│ 144 │ │ │ error_msg += f" with any supported extension {list(allowed │
│ ❱ 145 │ │ raise FileNotFoundError(error_msg) │
│ 146 │ return sorted(out) │
│ 147
### Steps to reproduce the bug
1. Use version 2.3.2.
2. In a shell, cd into the working directory (cd /a/b/c/.d/).
3. Run load_dataset("json", data_files={"train": "/a/b/c/.d/train/train.json", "test": "/a/b/c/.d/train/test.json"}); a self-contained sketch follows below.
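A self-contained version of the repro (a sketch that uses a temporary directory with a dot-prefixed component standing in for the real /a/b/c/.d/ layout; on 2.3.2 this fails with the FileNotFoundError above, while 2.1.2 loads it fine):

```python
import json
import os
import tempfile
from pathlib import Path

from datasets import load_dataset

train_dir = Path(tempfile.mkdtemp()) / ".d" / "train"
train_dir.mkdir(parents=True)
(train_dir / "train.json").write_text(json.dumps({"text": "hello"}) + "\n")
(train_dir / "test.json").write_text(json.dumps({"text": "world"}) + "\n")

os.chdir(train_dir.parent)  # cd into the dot-prefixed working directory
ds = load_dataset(
    "json",
    data_files={"train": str(train_dir / "train.json"), "test": str(train_dir / "test.json")},
)
```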
### Expected behavior
`load_dataset` should resolve the data files even when the path contains a dot-prefixed directory (e.g. `/a/b/c/.d/`), as it did in version 2.1.2. Please fix it~
### Environment info
2.3.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6073/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6073/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6072/comments | https://api.github.com/repos/huggingface/datasets/issues/6072/events | https://github.com/huggingface/datasets/pull/6072 | 1,822,123,560 | PR_kwDODunzps5WbWFN | 6,072 | Fix fsspec storage_options from load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 6 | "2023-07-26T10:44:23Z" | "2023-07-27T12:51:51Z" | "2023-07-27T12:42:57Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6072.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6072",
"merged_at": "2023-07-27T12:42:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6072.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6072"
} | close https://github.com/huggingface/datasets/issues/6071 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6072/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6072/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6071/comments | https://api.github.com/repos/huggingface/datasets/issues/6071/events | https://github.com/huggingface/datasets/issues/6071 | 1,821,990,749 | I_kwDODunzps5smV9d | 6,071 | storage_options provided to load_dataset not fully piping through since datasets 2.14.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"events_url": "https://api.github.com/users/exs-avianello/events{/privacy}",
"followers_url": "https://api.github.com/users/exs-avianello/followers",
"following_url": "https://api.github.com/users/exs-avianello/following{/other_user}",
"gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/exs-avianello",
"id": 128361578,
"login": "exs-avianello",
"node_id": "U_kgDOB6akag",
"organizations_url": "https://api.github.com/users/exs-avianello/orgs",
"received_events_url": "https://api.github.com/users/exs-avianello/received_events",
"repos_url": "https://api.github.com/users/exs-avianello/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions",
"type": "User",
"url": "https://api.github.com/users/exs-avianello"
} | [] | closed | false | null | [] | null | 2 | "2023-07-26T09:37:20Z" | "2023-07-27T12:42:58Z" | "2023-07-27T12:42:58Z" | NONE | null | null | null | ### Describe the bug
Since the latest release of `datasets` (`2.14.0`), custom filesystem `storage_options` passed to `load_dataset()` do not seem to propagate through all the way - leading to problems if loading data files that need those options to be set.
I think this is because of the new `_prepare_path_and_storage_options()` (https://github.com/huggingface/datasets/pull/6028), which returns the right `storage_options` to use given a path and a `DownloadConfig`, but which might not take into account the extra `storage_options` explicitly provided e.g. through `load_dataset()`.
### Steps to reproduce the bug
```python
import fsspec
import pandas as pd
import datasets
# Generate mock parquet file
data_files = "demo.parquet"
pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}).to_parquet(data_files)
_storage_options = {"x": 1, "y": 2}
fs = fsspec.filesystem("file", **_storage_options)
dataset = datasets.load_dataset(
    "parquet",
    data_files=data_files,
    storage_options=fs.storage_options
)
```
Looking at the `storage_options` resolved here:
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L331
they end up being `{}`, instead of propagating through the `storage_options` that were provided to `load_dataset` (`fs.storage_options`). As these then get used for the filesystem operation a few lines below
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L339
the call will fail if the user-provided `storage_options` were needed.
---
A temporary workaround that seemed to work locally to bypass the problem was to bundle a duplicate of the `storage_options` into the `download_config`, so that they make their way all the way to `_prepare_path_and_storage_options()` and get extracted correctly:
```python
dataset = datasets.load_dataset(
    "parquet",
    data_files=data_files,
    storage_options=fs.storage_options,
    download_config=datasets.DownloadConfig(storage_options={fs.protocol: fs.storage_options}),
)
```
### Expected behavior
`storage_options` provided to `load_dataset` take effect in all backend filesystem operations.
### Environment info
datasets==2.14.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6071/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6071/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6070/comments | https://api.github.com/repos/huggingface/datasets/issues/6070/events | https://github.com/huggingface/datasets/pull/6070 | 1,820,836,330 | PR_kwDODunzps5WXDLc | 6,070 | Fix Quickstart notebook link | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 3 | "2023-07-25T17:48:37Z" | "2023-07-25T18:19:01Z" | "2023-07-25T18:10:16Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6070.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6070",
"merged_at": "2023-07-25T18:10:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6070.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6070"
} | Reported in https://github.com/huggingface/datasets/pull/5902#issuecomment-1649885621 (cc @alvarobartt) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6070/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6070/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6069/comments | https://api.github.com/repos/huggingface/datasets/issues/6069/events | https://github.com/huggingface/datasets/issues/6069 | 1,820,831,535 | I_kwDODunzps5sh68v | 6,069 | KeyError: dataset has no key "image" | {
"avatar_url": "https://avatars.githubusercontent.com/u/28512232?v=4",
"events_url": "https://api.github.com/users/etetteh/events{/privacy}",
"followers_url": "https://api.github.com/users/etetteh/followers",
"following_url": "https://api.github.com/users/etetteh/following{/other_user}",
"gists_url": "https://api.github.com/users/etetteh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/etetteh",
"id": 28512232,
"login": "etetteh",
"node_id": "MDQ6VXNlcjI4NTEyMjMy",
"organizations_url": "https://api.github.com/users/etetteh/orgs",
"received_events_url": "https://api.github.com/users/etetteh/received_events",
"repos_url": "https://api.github.com/users/etetteh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/etetteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/etetteh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/etetteh"
} | [] | closed | false | null | [] | null | 6 | "2023-07-25T17:45:50Z" | "2023-07-27T12:42:17Z" | "2023-07-27T12:42:17Z" | NONE | null | null | null | ### Describe the bug
I've loaded a local image dataset with:
`ds = load_dataset("imagefolder", data_dir="path-to-data")`
I then defined a transform to process the data, following the Datasets docs.
However, I get a KeyError indicating there is no "image" key in my dataset. When I printed out the example_batch sent to the transformation function, it showed that only the labels are being passed to the function.
For some reason, the images are not in the example batches.
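For context, this is roughly the pattern I'm following (a reconstruction with placeholder names, since I haven't pasted my exact transform):

```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="path-to-data")

def transforms(example_batch):
    print(example_batch.keys())  # in my case this only shows 'label', no 'image'
    example_batch["pixel_values"] = [img.convert("RGB") for img in example_batch["image"]]
    return example_batch

ds = ds.with_transform(transforms)
```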
### Steps to reproduce the bug
I'm using the latest stable version of datasets
### Expected behavior
I expect the example_batches to contain both images and labels
### Environment info
I'm using the latest stable version of datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6069/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6069/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6068/comments | https://api.github.com/repos/huggingface/datasets/issues/6068/events | https://github.com/huggingface/datasets/pull/6068 | 1,820,106,952 | PR_kwDODunzps5WUkZi | 6,068 | fix tqdm lock deletion | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 5 | "2023-07-25T11:17:25Z" | "2023-07-25T15:29:39Z" | "2023-07-25T15:17:50Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6068.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6068",
"merged_at": "2023-07-25T15:17:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6068.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6068"
} | related to https://github.com/huggingface/datasets/issues/6066 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6068/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6068/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6067/comments | https://api.github.com/repos/huggingface/datasets/issues/6067/events | https://github.com/huggingface/datasets/pull/6067 | 1,819,919,025 | PR_kwDODunzps5WT7EQ | 6,067 | fix tqdm lock | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 3 | "2023-07-25T09:32:16Z" | "2023-07-25T10:02:43Z" | "2023-07-25T09:54:12Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6067.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6067",
"merged_at": "2023-07-25T09:54:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6067.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6067"
} | close https://github.com/huggingface/datasets/issues/6066 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6067/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6067/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6066/comments | https://api.github.com/repos/huggingface/datasets/issues/6066/events | https://github.com/huggingface/datasets/issues/6066 | 1,819,717,542 | I_kwDODunzps5sdq-m | 6,066 | AttributeError: '_tqdm_cls' object has no attribute '_lock' | {
"avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4",
"events_url": "https://api.github.com/users/codingl2k1/events{/privacy}",
"followers_url": "https://api.github.com/users/codingl2k1/followers",
"following_url": "https://api.github.com/users/codingl2k1/following{/other_user}",
"gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/codingl2k1",
"id": 138426806,
"login": "codingl2k1",
"node_id": "U_kgDOCEA5tg",
"organizations_url": "https://api.github.com/users/codingl2k1/orgs",
"received_events_url": "https://api.github.com/users/codingl2k1/received_events",
"repos_url": "https://api.github.com/users/codingl2k1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/codingl2k1"
} | [] | closed | false | null | [] | null | 7 | "2023-07-25T07:24:36Z" | "2023-07-26T10:56:25Z" | "2023-07-26T10:56:24Z" | NONE | null | null | null | ### Describe the bug
```python
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/load.py", line 1034, in get_module
data_files = DataFilesDict.from_patterns(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 671, in from_patterns
DataFilesList.from_patterns(
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 586, in from_patterns
origin_metadata = _get_origin_metadata(data_files, download_config=download_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 502, in _get_origin_metadata
return thread_map(
^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 70, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 48, in _executor_map
with ensure_lock(tqdm_class, lock_name=lock_name) as lk:
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/contextlib.py", line 144, in __exit__
next(self.gen)
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 25, in ensure_lock
del tqdm_class._lock
^^^^^^^^^^^^^^^^
AttributeError: '_tqdm_cls' object has no attribute '_lock'
```
### Steps to reproduce the bug
This happens occasionally.
### Expected behavior
I added a print inside tqdm's `ensure_lock()` and got `ensure_lock <datasets.utils.logging._tqdm_cls object at 0x16dddead0>`, confirming that the `tqdm_class` being locked is the `_tqdm_cls` wrapper.
According to the code in https://github.com/tqdm/tqdm/blob/master/tqdm/contrib/concurrent.py#L24
```python
@contextmanager
def ensure_lock(tqdm_class, lock_name=""):
"""get (create if necessary) and then restore `tqdm_class`'s lock"""
print("ensure_lock", tqdm_class, lock_name)
old_lock = getattr(tqdm_class, '_lock', None) # don't create a new lock
lock = old_lock or tqdm_class.get_lock() # maybe create a new lock
lock = getattr(lock, lock_name, lock) # maybe subtype
tqdm_class.set_lock(lock)
yield lock
if old_lock is None:
del tqdm_class._lock # <-- It tries to del the `_lock` attribute from tqdm_class.
else:
tqdm_class.set_lock(old_lock)
```
But, huggingface datasets `datasets.utils.logging._tqdm_cls` does not have the field `_lock`: https://github.com/huggingface/datasets/blob/main/src/datasets/utils/logging.py#L205
```python
class _tqdm_cls:
    def __call__(self, *args, disable=False, **kwargs):
        if _tqdm_active and not disable:
            return tqdm_lib.tqdm(*args, **kwargs)
        else:
            return EmptyTqdm(*args, **kwargs)

    def set_lock(self, *args, **kwargs):
        self._lock = None
        if _tqdm_active:
            return tqdm_lib.tqdm.set_lock(*args, **kwargs)

    def get_lock(self):
        if _tqdm_active:
            return tqdm_lib.tqdm.get_lock()
```
### Environment info
Python 3.11.4
tqdm '4.65.0'
datasets master | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6066/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6066/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6065/comments | https://api.github.com/repos/huggingface/datasets/issues/6065/events | https://github.com/huggingface/datasets/pull/6065 | 1,819,334,932 | PR_kwDODunzps5WR8jI | 6,065 | Add column type guessing from map return function | {
"avatar_url": "https://avatars.githubusercontent.com/u/1712066?v=4",
"events_url": "https://api.github.com/users/piercefreeman/events{/privacy}",
"followers_url": "https://api.github.com/users/piercefreeman/followers",
"following_url": "https://api.github.com/users/piercefreeman/following{/other_user}",
"gists_url": "https://api.github.com/users/piercefreeman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/piercefreeman",
"id": 1712066,
"login": "piercefreeman",
"node_id": "MDQ6VXNlcjE3MTIwNjY=",
"organizations_url": "https://api.github.com/users/piercefreeman/orgs",
"received_events_url": "https://api.github.com/users/piercefreeman/received_events",
"repos_url": "https://api.github.com/users/piercefreeman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/piercefreeman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piercefreeman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/piercefreeman"
} | [] | closed | false | null | [] | null | 5 | "2023-07-25T00:34:17Z" | "2023-07-26T15:13:45Z" | "2023-07-26T15:13:44Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6065.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6065",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6065.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6065"
} | As discussed [here](https://github.com/huggingface/datasets/issues/5965), there are some cases where datasets is unable to automatically promote columns during mapping. The fix is to explicitly provide a `features` definition so pyarrow can configure itself with the right column types from the outset.
This PR provides an alternative approach that is functionally equivalent to specifying features but a bit cleaner within a larger mapping pipeline. It allows clients to type-hint the return value of the mapper function: if such a type annotation is present and no explicit features have been passed in, we try to convert it into a `Features` map. If the map function runs and the cast fails, a `DatasetTransformationNotAllowedError` is raised indicating that the type hint may be to blame. It works for batched and non-batched mapping functions.
Currently supported column types:
- builtins primitives: string, int, float, bool
- dictionaries, lists (nested and one-deep)
- Optional types and None-Unions (synonymous with optional types)
It's used like:
```python
class DatasetTyped(TypedDict):
texts: list[str]
def dataset_typed_map(batch) -> DatasetTyped:
return {"texts": [text.split() for text in batch["raw_text"]]}
dataset = {"raw_text": ["", "This is a test", "This is another test"]}
with Dataset.from_dict(dataset) as dset:
new_dataset = dset.map(
dataset_typed_map,
batched=True,
batch_size=1,
num_proc=1,
)
```
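For comparison, a hedged sketch of today's explicit-`features` equivalent (reusing `dataset_typed_map` from the example above; the `remove_columns` call is an assumption about how one would wire it up so that `features` fully describes the output, not part of the PR):
```python
from datasets import Dataset, Features, Sequence, Value

features = Features({"texts": Sequence(Value("string"))})

dataset = {"raw_text": ["", "This is a test", "This is another test"]}
with Dataset.from_dict(dataset) as dset:
    new_dataset = dset.map(
        dataset_typed_map,            # same mapper as above
        batched=True,
        remove_columns=["raw_text"],  # so `features` fully describes the output
        features=features,
    )
```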
Open questions:
- Should logging indicate we have automatically guessed these types? Or proceed quietly until we hit an error (as is the current implementation). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6065/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6065/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6064/comments | https://api.github.com/repos/huggingface/datasets/issues/6064/events | https://github.com/huggingface/datasets/pull/6064 | 1,818,703,725 | PR_kwDODunzps5WPzAv | 6,064 | set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 3 | "2023-07-24T15:56:00Z" | "2023-07-24T16:05:19Z" | "2023-07-24T15:56:10Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6064.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6064",
"merged_at": "2023-07-24T15:56:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6064.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6064"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6064/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6064/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6063/comments | https://api.github.com/repos/huggingface/datasets/issues/6063/events | https://github.com/huggingface/datasets/pull/6063 | 1,818,679,485 | PR_kwDODunzps5WPtxi | 6,063 | Release: 2.14.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 4 | "2023-07-24T15:41:19Z" | "2023-07-24T16:05:16Z" | "2023-07-24T15:47:51Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6063.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6063",
"merged_at": "2023-07-24T15:47:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6063.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6063"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6063/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6063/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6062 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6062/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6062/comments | https://api.github.com/repos/huggingface/datasets/issues/6062/events | https://github.com/huggingface/datasets/pull/6062 | 1,818,341,584 | PR_kwDODunzps5WOj62 | 6,062 | Improve `Dataset.from_list` docstring | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 4 | "2023-07-24T12:36:38Z" | "2023-07-24T14:43:48Z" | "2023-07-24T14:34:43Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6062.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6062",
"merged_at": "2023-07-24T14:34:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6062.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6062"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6062/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6062/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6061/comments | https://api.github.com/repos/huggingface/datasets/issues/6061/events | https://github.com/huggingface/datasets/pull/6061 | 1,818,337,136 | PR_kwDODunzps5WOi79 | 6,061 | Dill 3.7 support | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 5 | "2023-07-24T12:33:58Z" | "2023-07-24T14:13:20Z" | "2023-07-24T14:04:36Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6061.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6061",
"merged_at": "2023-07-24T14:04:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6061.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6061"
} | Adds support for dill 3.7. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6061/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6061/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6060/comments | https://api.github.com/repos/huggingface/datasets/issues/6060/events | https://github.com/huggingface/datasets/issues/6060 | 1,816,614,120 | I_kwDODunzps5sR1To | 6,060 | Dataset.map() execute twice when in PyTorch DDP mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/39429965?v=4",
"events_url": "https://api.github.com/users/wanghaoyucn/events{/privacy}",
"followers_url": "https://api.github.com/users/wanghaoyucn/followers",
"following_url": "https://api.github.com/users/wanghaoyucn/following{/other_user}",
"gists_url": "https://api.github.com/users/wanghaoyucn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wanghaoyucn",
"id": 39429965,
"login": "wanghaoyucn",
"node_id": "MDQ6VXNlcjM5NDI5OTY1",
"organizations_url": "https://api.github.com/users/wanghaoyucn/orgs",
"received_events_url": "https://api.github.com/users/wanghaoyucn/received_events",
"repos_url": "https://api.github.com/users/wanghaoyucn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wanghaoyucn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wanghaoyucn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wanghaoyucn"
} | [] | closed | false | null | [] | null | 4 | "2023-07-22T05:06:43Z" | "2024-01-22T18:35:12Z" | "2024-01-22T18:35:12Z" | NONE | null | null | null | ### Describe the bug
I use `torchrun --standalone --nproc_per_node=2 train.py` to start training, and I wrote the code following the [docs](https://huggingface.co/docs/datasets/process#distributed-usage). The trick of using `torch.distributed.barrier()` so that only the main process executes `map` doesn't always work. When I am training a model, the mapping runs twice; when I run a test of just the dataset and dataloader (printing the batches), it works. The dataset-loading code is the same in both cases.
On another server with 30 CPU cores and 2 GPUs it doesn't work either.
I have tried checking with both `rank` and `local_rank`; neither made a difference.
### Steps to reproduce the bug
use `torchrun --standalone --nproc_per_node=2 train.py` or `torchrun --standalone train.py` to run
This is my code:
```python
if args.distributed and world_size > 1:
    if args.local_rank > 0:
        print(f"Rank {args.rank}: Gpu {args.gpu} waiting for main process to perform the mapping", force=True)
        torch.distributed.barrier()

    print("Mapping dataset")
    dataset = dataset.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=True), num_proc=8, desc="cut_reorder_keys")
    dataset = dataset.map(lambda x: random_shift(x, shift_range=(-160, 0), feature_scale=16), num_proc=8, desc="random_shift")
    dataset_test = dataset_test.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=False), num_proc=8, desc="cut_reorder_keys")

    if args.local_rank == 0:
        print("Mapping finished, loading results from main process")
        torch.distributed.barrier()
```
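One thing worth ruling out (a hedged sketch, not a confirmed fix): `datasets` fingerprints the mapped function to decide whether the cache can be reused, and inline lambdas that close over other objects can hash differently across processes, which would force a second mapping. A named, module-level function combined with `functools.partial` keeps the fingerprint stable (`cut_reorder_keys` and `args` are the reporter's names from the snippet above):
```python
from functools import partial

def cut_reorder_train(x, num_stations_list):
    # module-level function: it is hashed identically on every rank
    return cut_reorder_keys(x, num_stations_list=num_stations_list, is_pad=True, is_train=True)

map_fn = partial(cut_reorder_train, num_stations_list=args.num_stations_list)
dataset = dataset.map(map_fn, num_proc=8, desc="cut_reorder_keys")
```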
### Expected behavior
Only the main process should execute `map`, while the other processes load the result from the cache on disk.
### Environment info
server with 64 CPU cores (AMD Ryzen Threadripper PRO 5995WX 64-Cores) and 2 RTX 4090
- `python==3.9.16`
- `datasets==2.13.1`
- `torch==2.0.1+cu117`
- `22.04.1-Ubuntu`
server with 30 CPU cores (Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz) and 2 RTX 4090
- `python==3.9.0`
- `datasets==2.13.1`
- `torch==2.0.1+cu117`
- `Ubuntu 20.04` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6060/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6060/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6059/comments | https://api.github.com/repos/huggingface/datasets/issues/6059/events | https://github.com/huggingface/datasets/issues/6059 | 1,816,537,176 | I_kwDODunzps5sRihY | 6,059 | Provide ability to load label mappings from file | {
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/david-waterworth",
"id": 5028974,
"login": "david-waterworth",
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/david-waterworth"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 0 | "2023-07-22T02:04:19Z" | "2023-07-22T02:04:19Z" | null | NONE | null | null | null | ### Feature request
My task is classification over a dataset with a large label set that includes a hierarchy. Even ignoring the hierarchy, I'm not able to find an example using `datasets` where the label names aren't hard-coded. This works fine for classification with a handful of labels, but ideally there would be a way of loading the name/id mappings required for `datasets.features.ClassLabel` from a file.
It is possible to pass a file to `ClassLabel`, but I cannot see an easy way of using this with `GeneratorBasedBuilder`, since `self._info` is called before the `dl_manager` is constructed. So even if my dataset contains, say, `label_mappings.json`, there's no way of loading it in order to construct the `datasets.DatasetInfo`.
I can see other uses for accessing the `download_manager` from `self._info` - e.g. if the files contain a schema (i.e. `arrow` or `parquet` files), the `datasets.DatasetInfo` could be inferred.
The workaround that was suggested in the forum is to generate a `.py` file from the `label_mappings.json` and import it.
```python
import csv

import datasets
from datasets.tasks import TextClassification


class TestDatasetBuilder(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "text": datasets.Value("string"),
                    "label": datasets.features.ClassLabel(names=["label_1", "label_2"]),
                }
            ),
            task_templates=[TextClassification(text_column="text", label_column="label")],
        )

    def _split_generators(self, dl_manager):
        train_path = dl_manager.download_and_extract(_TRAIN_DOWNLOAD_URL)
        test_path = dl_manager.download_and_extract(_TEST_DOWNLOAD_URL)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test_path}),
        ]

    def _generate_examples(self, filepath):
        """Generate AG News examples."""
        with open(filepath, encoding="utf-8") as csv_file:
            csv_reader = csv.DictReader(csv_file)
            for id_, row in enumerate(csv_reader):
                yield id_, row
```
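A hedged sketch of a simpler variant of that workaround: reading `label_mappings.json` directly when the dataset script is imported, instead of generating a `.py` file (the file name and layout are assumptions):
```python
import json
from pathlib import Path

# read the label names once at import time, so they are available
# before _info() is called (and before any dl_manager exists)
_LABEL_NAMES = json.loads(
    (Path(__file__).parent / "label_mappings.json").read_text(encoding="utf-8")
)

# then, inside _info():
#     "label": datasets.features.ClassLabel(names=_LABEL_NAMES)
```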
### Motivation
Allow `datasets.DatasetInfo` to be generated based on the contents of the dataset.
### Your contribution
I'm willing to work on a PR with guidance. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6059/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6059/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6058/comments | https://api.github.com/repos/huggingface/datasets/issues/6058/events | https://github.com/huggingface/datasets/issues/6058 | 1,815,131,397 | I_kwDODunzps5sMLUF | 6,058 | laion-coco download error | {
"avatar_url": "https://avatars.githubusercontent.com/u/54424110?v=4",
"events_url": "https://api.github.com/users/yangyijune/events{/privacy}",
"followers_url": "https://api.github.com/users/yangyijune/followers",
"following_url": "https://api.github.com/users/yangyijune/following{/other_user}",
"gists_url": "https://api.github.com/users/yangyijune/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yangyijune",
"id": 54424110,
"login": "yangyijune",
"node_id": "MDQ6VXNlcjU0NDI0MTEw",
"organizations_url": "https://api.github.com/users/yangyijune/orgs",
"received_events_url": "https://api.github.com/users/yangyijune/received_events",
"repos_url": "https://api.github.com/users/yangyijune/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yangyijune/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangyijune/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yangyijune"
} | [] | closed | false | null | [] | null | 1 | "2023-07-21T04:24:15Z" | "2023-07-22T01:42:06Z" | "2023-07-22T01:42:06Z" | NONE | null | null | null | ### Describe the bug
The full trace:
```
/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py:1744: FutureWarning: 'ignore_verifications' was deprecated in favor of 'verification_mode' in version 2.9.1 and will be removed in 3.0.0.
You can remove this warning by passing 'verification_mode=no_checks' instead.
  warnings.warn(
Downloading and preparing dataset parquet/laion--laion-coco to /home/bian/.cache/huggingface/datasets/laion___parquet/laion--laion-coco-cb4205d7f1863066/0.0.0/bcacc8bdaa0614a5d73d0344c813275e590940c6ea8bc569da462847103a1afd...
Downloading data: 100%|█| 1.89G/1.89G [04:57<00:00,
Downloading data files: 100%|█| 1/1 [04:59<00:00, 2
Extracting data files: 100%|█| 1/1 [00:00<00:00, 13
Generating train split: 0 examples [00:00, ? examples/s]<_io.BufferedReader name='/home/bian/.cache/huggingface/datasets/downloads/26d7a016d25bbd9443115cfa3092136e8eb2f1f5bcd41540cb9234572927f04c'>
Traceback (most recent call last):
  File "/home/bian/data/ZOC/download_laion_coco.py", line 4, in <module>
    dataset = load_dataset("laion/laion-coco", ignore_verifications=True)
  File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
    self._download_and_prepare(
  File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1842, in _prepare_split_single
    generator = self._generate_tables(**gen_kwargs)
  File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in _generate_tables
    parquet_file = pq.ParquetFile(f)
  File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/pyarrow/parquet/core.py", line 323, in __init__
    self.reader.open(
  File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open
  File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
I have carefully followed the instructions in #5264 but still get the same error.
Other helpful information:
```
ds = load_dataset("parquet", data_files="https://huggingface.co/datasets/laion/laion-coco/resolve/d22869de3ccd39dfec1507f7ded32e4a518dad24/part-00000-2256f782-126f-4dc6-b9c6-e6757637749d-c000.snappy.parquet")
Found cached dataset parquet (/home/bian/.cache/huggingface/datasets/parquet/default-a02eea00aeb08b0e/0.0.0/bb8ccf89d9ee38581ff5e51506d721a9b37f14df8090dc9b2d8fb4a40957833f)
100%|██████████████| 1/1 [00:00<00:00, 4.55it/s]
```
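A quick, hedged sanity check on the cached download (the path is taken from the traceback above): a valid parquet file starts and ends with the magic bytes `PAR1`, whereas a download that actually saved an HTML error page would not, which would also produce the "magic bytes not found" error:
```python
path = "/home/bian/.cache/huggingface/datasets/downloads/26d7a016d25bbd9443115cfa3092136e8eb2f1f5bcd41540cb9234572927f04c"
with open(path, "rb") as f:
    head = f.read(4)
    f.seek(-4, 2)  # seek to 4 bytes before the end of the file
    tail = f.read(4)
print(head, tail)  # both should be b"PAR1" for a valid parquet file
```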
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("laion/laion-coco", ignore_verifications=True)  # same error with ignore_verifications=False
```
### Expected behavior
Properly load Laion-coco dataset
### Environment info
datasets==2.11.0 torch==1.12.1 python 3.10 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6058/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6058/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6057/comments | https://api.github.com/repos/huggingface/datasets/issues/6057/events | https://github.com/huggingface/datasets/issues/6057 | 1,815,100,151 | I_kwDODunzps5sMDr3 | 6,057 | Why is the speed difference of gen example so big? | {
"avatar_url": "https://avatars.githubusercontent.com/u/46072190?v=4",
"events_url": "https://api.github.com/users/pixeli99/events{/privacy}",
"followers_url": "https://api.github.com/users/pixeli99/followers",
"following_url": "https://api.github.com/users/pixeli99/following{/other_user}",
"gists_url": "https://api.github.com/users/pixeli99/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pixeli99",
"id": 46072190,
"login": "pixeli99",
"node_id": "MDQ6VXNlcjQ2MDcyMTkw",
"organizations_url": "https://api.github.com/users/pixeli99/orgs",
"received_events_url": "https://api.github.com/users/pixeli99/received_events",
"repos_url": "https://api.github.com/users/pixeli99/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pixeli99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pixeli99/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pixeli99"
} | [] | closed | false | null | [] | null | 1 | "2023-07-21T03:34:49Z" | "2023-10-04T18:06:16Z" | "2023-10-04T18:06:15Z" | NONE | null | null | null | ```python
def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):
    # Load the list of {image_path, text_content} records once.
    with open(metadata_path, 'r') as file:
        metadata = json.load(file)

    for idx, item in enumerate(metadata):
        image_path = item.get('image_path')
        text_content = item.get('text_content')

        # Read the raw image bytes from disk for every example.
        image_data = open(image_path, "rb").read()

        yield idx, {
            "text": text_content,
            "image": {
                "path": image_path,
                "bytes": image_data,
            },
            # The same file doubles as the conditioning image.
            "conditioning_image": {
                "path": image_path,
                "bytes": image_data,
            },
        }
```
Hello,
I use the function above to process my local dataset, but I am very surprised by how much the example-generation speed varies. When I start a training run, it is **sometimes 1000 examples/s and sometimes only 10 examples/s.**

The speed doesn't fluctuate within a single run; it differs between runs, which forces me to restart training over and over until example generation reaches normal speed.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6057/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6057/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6056/comments | https://api.github.com/repos/huggingface/datasets/issues/6056/events | https://github.com/huggingface/datasets/pull/6056 | 1,815,086,963 | PR_kwDODunzps5WD4RY | 6,056 | Implement proper checkpointing for dataset uploading with resume function that does not require remapping shards that have already been uploaded | {
"avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4",
"events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}",
"followers_url": "https://api.github.com/users/AntreasAntoniou/followers",
"following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}",
"gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AntreasAntoniou",
"id": 10792502,
"login": "AntreasAntoniou",
"node_id": "MDQ6VXNlcjEwNzkyNTAy",
"organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs",
"received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events",
"repos_url": "https://api.github.com/users/AntreasAntoniou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AntreasAntoniou"
} | [] | open | false | null | [] | null | 6 | "2023-07-21T03:13:21Z" | "2023-08-17T08:26:53Z" | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6056.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6056",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6056.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6056"
} | Context: issue #5990
In order to implement the checkpointing, I introduce a metadata folder that keeps one YAML file for each split being uploaded. This YAML keeps track of which shards have already been uploaded and the index of the latest one. Using this information, `push_to_hub` can retrieve the upload history on demand and continue mapping and uploading from where it left off.
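A hypothetical sketch of what such a checkpoint file could contain (the field names and layout are illustrative, not the PR's actual schema):
```python
import yaml

# illustrative per-split checkpoint state
state = {
    "latest_uploaded_shard_index": 41,
    "uploaded_shards": [
        "data/train-00040-of-01000.parquet",
        "data/train-00041-of-01000.parquet",
    ],
}

with open("metadata/train.yaml", "w") as f:
    yaml.safe_dump(state, f)
```
 | {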
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6056/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6056/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6055/comments | https://api.github.com/repos/huggingface/datasets/issues/6055/events | https://github.com/huggingface/datasets/issues/6055 | 1,813,524,145 | I_kwDODunzps5sGC6x | 6,055 | Fix host URL in The Pile datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/7540752?v=4",
"events_url": "https://api.github.com/users/nickovchinnikov/events{/privacy}",
"followers_url": "https://api.github.com/users/nickovchinnikov/followers",
"following_url": "https://api.github.com/users/nickovchinnikov/following{/other_user}",
"gists_url": "https://api.github.com/users/nickovchinnikov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nickovchinnikov",
"id": 7540752,
"login": "nickovchinnikov",
"node_id": "MDQ6VXNlcjc1NDA3NTI=",
"organizations_url": "https://api.github.com/users/nickovchinnikov/orgs",
"received_events_url": "https://api.github.com/users/nickovchinnikov/received_events",
"repos_url": "https://api.github.com/users/nickovchinnikov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nickovchinnikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nickovchinnikov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nickovchinnikov"
} | [] | open | false | null | [] | null | 0 | "2023-07-20T09:08:52Z" | "2023-07-20T09:09:37Z" | null | NONE | null | null | null | ### Describe the bug
In #3627 and #5543, you tried to fix the host URL in The Pile datasets, but neither URL works now:
`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`
And
`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`
### Steps to reproduce the bug
```
from datasets import load_dataset
# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://mystic.the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```
Result:
`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`
And
```
from datasets import load_dataset
# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```
Result:
`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`
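A quick, hedged way to probe both hosts without going through `datasets` (plain `requests`; the timeout value is arbitrary):
```python
import requests

urls = [
    "https://mystic.the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst",
    "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst",
]
for url in urls:
    try:
        r = requests.head(url, timeout=10, allow_redirects=True)
        print(url, r.status_code)
    except requests.RequestException as e:
        print(url, type(e).__name__)
```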
### Expected behavior
Downloading as normal.
### Environment info
`datasets` version: 2.9.0
Platform: Windows
Python version: 3.9.13
| {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6055/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6055/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6054/comments | https://api.github.com/repos/huggingface/datasets/issues/6054/events | https://github.com/huggingface/datasets/issues/6054 | 1,813,271,304 | I_kwDODunzps5sFFMI | 6,054 | Multi-processed `Dataset.map` slows down a lot when `import torch` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47121592?v=4",
"events_url": "https://api.github.com/users/ShinoharaHare/events{/privacy}",
"followers_url": "https://api.github.com/users/ShinoharaHare/followers",
"following_url": "https://api.github.com/users/ShinoharaHare/following{/other_user}",
"gists_url": "https://api.github.com/users/ShinoharaHare/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ShinoharaHare",
"id": 47121592,
"login": "ShinoharaHare",
"node_id": "MDQ6VXNlcjQ3MTIxNTky",
"organizations_url": "https://api.github.com/users/ShinoharaHare/orgs",
"received_events_url": "https://api.github.com/users/ShinoharaHare/received_events",
"repos_url": "https://api.github.com/users/ShinoharaHare/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ShinoharaHare/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShinoharaHare/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ShinoharaHare"
} | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | [] | null | 1 | "2023-07-20T06:36:14Z" | "2023-07-21T15:19:37Z" | "2023-07-21T15:19:37Z" | NONE | null | null | null | ### Describe the bug
When using `Dataset.map` with `num_proc > 1`, the speed drops a lot if I add `import torch` at the start of the script, even though I don't use it.
I'm not sure whether this is specific to `torch` or whether any other "large" package causes the same result.
BTW, `import lightning` also slows it down.
Below are the progress bars of `Dataset.map`; the only difference between them is whether `import torch` is present, yet the speed differs by 6-7x.
- without `import torch` 
- with `import torch` 
### Steps to reproduce the bug
Below is the code I used, but I don't think the dataset and the mapping function have much to do with the phenomenon.
```python3
from datasets import load_from_disk, disable_caching
from transformers import AutoTokenizer

# import torch
# import lightning


def rearrange_datapoints(
    batch,
    tokenizer,
    sequence_length,
):
    datapoints = []

    input_ids = []
    for x in batch['input_ids']:
        input_ids += x

        while len(input_ids) >= sequence_length:
            datapoint = input_ids[:sequence_length]
            datapoints.append(datapoint)
            input_ids[:sequence_length] = []

    if input_ids:
        paddings = [-1] * (sequence_length - len(input_ids))
        datapoint = paddings + input_ids if tokenizer.padding_side == 'left' else input_ids + paddings
        datapoints.append(datapoint)

    batch['input_ids'] = datapoints
    return batch


if __name__ == '__main__':
    disable_caching()

    tokenizer = AutoTokenizer.from_pretrained('...', use_fast=False)
    dataset = load_from_disk('...')

    dataset = dataset.map(
        rearrange_datapoints,
        fn_kwargs=dict(
            tokenizer=tokenizer,
            sequence_length=2048,
        ),
        batched=True,
        num_proc=8,
    )
```
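A hedged mitigation worth trying (an educated guess, not a confirmed diagnosis): importing `torch` initializes OpenMP/MKL thread pools, and with 8 forked workers those threads can oversubscribe the CPU; capping the per-process thread counts often restores throughput in similar situations:
```python
import os

# set these before torch (or any OpenMP-using library) is imported
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"

import torch

torch.set_num_threads(1)  # limit intra-op threads in each process
```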
### Expected behavior
Multi-processed `Dataset.map` should run at the same speed whether or not `torch` is imported.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6054/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6054/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6053/comments | https://api.github.com/repos/huggingface/datasets/issues/6053/events | https://github.com/huggingface/datasets/issues/6053 | 1,812,635,902 | I_kwDODunzps5sCqD- | 6,053 | Change package name from "datasets" to something less generic | {
"avatar_url": "https://avatars.githubusercontent.com/u/2124157?v=4",
"events_url": "https://api.github.com/users/geajack/events{/privacy}",
"followers_url": "https://api.github.com/users/geajack/followers",
"following_url": "https://api.github.com/users/geajack/following{/other_user}",
"gists_url": "https://api.github.com/users/geajack/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/geajack",
"id": 2124157,
"login": "geajack",
"node_id": "MDQ6VXNlcjIxMjQxNTc=",
"organizations_url": "https://api.github.com/users/geajack/orgs",
"received_events_url": "https://api.github.com/users/geajack/received_events",
"repos_url": "https://api.github.com/users/geajack/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/geajack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/geajack/subscriptions",
"type": "User",
"url": "https://api.github.com/users/geajack"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 1 | "2023-07-19T19:53:28Z" | "2023-10-03T16:04:09Z" | "2023-10-03T16:04:09Z" | NONE | null | null | null | ### Feature request
I'm repeatedly finding myself in situations where I want to have a module called `datasets.py` or `evaluate.py` in my code and can't because those names are taken by Hugging Face packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have nice terse library names, ultimately a library hogging simple names like this is something I find short-sighted, impractical and, at my most irritable, frankly rude.
My preference would be a pattern like what you get with all the other big libraries like numpy or pandas:
```
import huggingface as hf
# hf.transformers, hf.datasets, hf.evaluate
```
or things like
```
import huggingface.transformers as tf
# tf.load_model(), etc
```
If this isn't possible for some technical reason, at least just call the packages something like `hf_transformers` and so on.
I realize this is a very big change that's probably been discussed internally already, but I'm making this issue and sister issues on each huggingface project just to start the conversation and begin tracking community feeling on the matter, since I suspect I'm not the only one who feels like this.
Sorry if this has been requested already on this issue tracker, I couldn't find anything looking for terms like "package name".
Sister issues:
- [transformers](https://github.com/huggingface/transformers/issues/24934)
- **datasets**
- [evaluate](https://github.com/huggingface/evaluate/issues/476)
### Motivation
Not taking up package names the user is likely to want to use.
### Your contribution
No - more a matter of internal discussion among core library authors. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6053/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6053/timeline | null | not_planned | false |
https://api.github.com/repos/huggingface/datasets/issues/6052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6052/comments | https://api.github.com/repos/huggingface/datasets/issues/6052/events | https://github.com/huggingface/datasets/pull/6052 | 1,812,145,100 | PR_kwDODunzps5V5yOi | 6,052 | Remove `HfFileSystem` and deprecate `S3FileSystem` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 10 | "2023-07-19T15:00:01Z" | "2023-07-19T17:39:11Z" | "2023-07-19T17:27:17Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6052.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6052",
"merged_at": "2023-07-19T17:27:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6052.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6052"
} | Remove the legacy `HfFileSystem` and deprecate `S3FileSystem`
cc @philschmid for the SageMaker scripts/notebooks that still use `datasets`' `S3FileSystem` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6052/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6052/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6051/comments | https://api.github.com/repos/huggingface/datasets/issues/6051/events | https://github.com/huggingface/datasets/issues/6051 | 1,811,549,650 | I_kwDODunzps5r-g3S | 6,051 | Skipping shard in the remote repo and resume upload | {
"avatar_url": "https://avatars.githubusercontent.com/u/9029817?v=4",
"events_url": "https://api.github.com/users/rs9000/events{/privacy}",
"followers_url": "https://api.github.com/users/rs9000/followers",
"following_url": "https://api.github.com/users/rs9000/following{/other_user}",
"gists_url": "https://api.github.com/users/rs9000/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rs9000",
"id": 9029817,
"login": "rs9000",
"node_id": "MDQ6VXNlcjkwMjk4MTc=",
"organizations_url": "https://api.github.com/users/rs9000/orgs",
"received_events_url": "https://api.github.com/users/rs9000/received_events",
"repos_url": "https://api.github.com/users/rs9000/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rs9000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rs9000/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rs9000"
} | [] | closed | false | null | [] | null | 2 | "2023-07-19T09:25:26Z" | "2023-07-20T18:16:01Z" | "2023-07-20T18:16:00Z" | NONE | null | null | null | ### Describe the bug
For some reason, when I try to resume the upload of my dataset, it takes a very long time to reach the index of the shard from which to resume uploading.
From my understanding, the problem is in this part of the code:
arrow_dataset.py
```python
for index, shard in logging.tqdm(
    enumerate(itertools.chain([first_shard], shards_iter)),
    desc="Pushing dataset shards to the dataset hub",
    total=num_shards,
    disable=not logging.is_progress_bar_enabled(),
):
    shard_path_in_repo = path_in_repo(index, shard)
    # Upload a shard only if it doesn't already exist in the repository
    if shard_path_in_repo not in data_files:
```
In particular, iterating the generator is slow during the call:
```python
self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
```
I wonder if it is possible to avoid calling this function for shards that are already uploaded and just start from the correct shard index.
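What I have in mind is roughly this (just a sketch, not the actual implementation: it assumes the repo path can be computed from the shard index alone, so `path_in_repo(index)` here is a hypothetical index-only variant):
```python
for index in logging.tqdm(range(num_shards), desc="Pushing dataset shards to the dataset hub"):
    shard_path_in_repo = path_in_repo(index)  # hypothetical: does not need the shard itself
    # Skip already-uploaded shards without materializing them
    if shard_path_in_repo in data_files:
        continue
    # Only now pay the cost of _select_contiguous, via Dataset.shard
    shard = self.shard(num_shards=num_shards, index=index, contiguous=True)
    # ... upload the shard as before
```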
### Steps to reproduce the bug
1. Start the upload
```python
dataset = load_dataset("imagefolder", data_dir=DATA_DIR, split="train", drop_labels=True)
dataset.push_to_hub("repo/name")
```
2. Stop and restart the upload after hundreds of shards
### Expected behavior
Already-uploaded shards should be skipped faster.
### Environment info
- `datasets` version: 2.5.1
- Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6051/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6051/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6049/comments | https://api.github.com/repos/huggingface/datasets/issues/6049/events | https://github.com/huggingface/datasets/pull/6049 | 1,810,378,706 | PR_kwDODunzps5Vz1pd | 6,049 | Update `ruff` version in pre-commit config | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | closed | false | null | [] | null | 2 | "2023-07-18T17:13:50Z" | "2023-12-01T14:26:19Z" | "2023-12-01T14:26:19Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6049.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6049",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6049.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6049"
} | so that it corresponds to the one that is being run in CI | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6049/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6049/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6048/comments | https://api.github.com/repos/huggingface/datasets/issues/6048/events | https://github.com/huggingface/datasets/issues/6048 | 1,809,629,346 | I_kwDODunzps5r3MCi | 6,048 | when i use datasets.load_dataset, i encounter the http connect error! | {
"avatar_url": "https://avatars.githubusercontent.com/u/137855591?v=4",
"events_url": "https://api.github.com/users/yangy1992/events{/privacy}",
"followers_url": "https://api.github.com/users/yangy1992/followers",
"following_url": "https://api.github.com/users/yangy1992/following{/other_user}",
"gists_url": "https://api.github.com/users/yangy1992/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yangy1992",
"id": 137855591,
"login": "yangy1992",
"node_id": "U_kgDOCDeCZw",
"organizations_url": "https://api.github.com/users/yangy1992/orgs",
"received_events_url": "https://api.github.com/users/yangy1992/received_events",
"repos_url": "https://api.github.com/users/yangy1992/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yangy1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangy1992/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yangy1992"
} | [] | closed | false | null | [] | null | 1 | "2023-07-18T10:16:34Z" | "2023-07-18T16:18:39Z" | "2023-07-18T16:18:39Z" | NONE | null | null | null | ### Describe the bug
`common_voice_test = load_dataset("audiofolder", data_dir="./dataset/",cache_dir="./cache",split=datasets.Split.TEST)`
When I run the code above, I get the error below:
--------------------------------------------
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f299ed082e0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))")))
--------------------------------------------------
All my data is on the local machine, so why does it need to connect to the internet? And how can I fix this? My machine cannot connect to the internet.
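The traceback suggests that the installed `datasets` (2.3.2, per the URL) does not ship the `audiofolder` loader and therefore tries to download its script from GitHub. A possible fix (a sketch; `your_script.py` is a placeholder) is to upgrade `datasets`, where `audiofolder` is a packaged local module, and to force offline mode so only the local cache and packaged modules are used:
```shell
pip install -U datasets
HF_DATASETS_OFFLINE=1 python your_script.py
```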
### Steps to reproduce the bug
1
### Expected behavior
No error when I use the `load_dataset` function.
### Environment info
python=3.8.15 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6048/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6048/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6047/comments | https://api.github.com/repos/huggingface/datasets/issues/6047/events | https://github.com/huggingface/datasets/pull/6047 | 1,809,627,947 | PR_kwDODunzps5VxRLA | 6,047 | Bump dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 3 | "2023-07-18T10:15:39Z" | "2023-07-18T10:28:01Z" | "2023-07-18T10:15:52Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6047.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6047",
"merged_at": "2023-07-18T10:15:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6047.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6047"
} | workaround to fix an issue with transformers CI
https://github.com/huggingface/transformers/pull/24867#discussion_r1266519626 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6047/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6047/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6046/comments | https://api.github.com/repos/huggingface/datasets/issues/6046/events | https://github.com/huggingface/datasets/issues/6046 | 1,808,154,414 | I_kwDODunzps5rxj8u | 6,046 | Support proxy and user-agent in fsspec calls | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/95092167?v=4",
"events_url": "https://api.github.com/users/zutarich/events{/privacy}",
"followers_url": "https://api.github.com/users/zutarich/followers",
"following_url": "https://api.github.com/users/zutarich/following{/other_user}",
"gists_url": "https://api.github.com/users/zutarich/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zutarich",
"id": 95092167,
"login": "zutarich",
"node_id": "U_kgDOBar9xw",
"organizations_url": "https://api.github.com/users/zutarich/orgs",
"received_events_url": "https://api.github.com/users/zutarich/received_events",
"repos_url": "https://api.github.com/users/zutarich/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zutarich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zutarich/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zutarich"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/95092167?v=4",
"events_url": "https://api.github.com/users/zutarich/events{/privacy}",
"followers_url": "https://api.github.com/users/zutarich/followers",
"following_url": "https://api.github.com/users/zutarich/following{/other_user}",
"gists_url": "https://api.github.com/users/zutarich/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zutarich",
"id": 95092167,
"login": "zutarich",
"node_id": "U_kgDOBar9xw",
"organizations_url": "https://api.github.com/users/zutarich/orgs",
"received_events_url": "https://api.github.com/users/zutarich/received_events",
"repos_url": "https://api.github.com/users/zutarich/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zutarich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zutarich/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zutarich"
}
] | null | 8 | "2023-07-17T16:39:26Z" | "2023-10-09T13:49:14Z" | null | MEMBER | null | null | null | Since we switched to the new HfFileSystem we no longer apply user's proxy and user-agent.
Using the HTTP_PROXY and HTTPS_PROXY environment variables works, though, since we use aiohttp to call the HF Hub.
This can be implemented in `_prepare_single_hop_path_and_storage_options`.
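A rough sketch of what that could look like (illustrative names, not the actual implementation; aiohttp has no session-level `proxies` argument, so one option is `trust_env=True`, which makes it honor `HTTP_PROXY`/`HTTPS_PROXY`):
```python
def _add_download_config_to_storage_options(download_config, storage_options: dict) -> dict:
    # Illustrative only: thread the user's settings into the fsspec HTTP storage options
    if download_config.proxies:
        storage_options.setdefault("client_kwargs", {})["trust_env"] = True
    if download_config.user_agent:
        storage_options.setdefault("headers", {})["user-agent"] = download_config.user_agent
    return storage_options
```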
Though ideally the `HfFileSystem` could support passing at least the proxies. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6046/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6046/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6045/comments | https://api.github.com/repos/huggingface/datasets/issues/6045/events | https://github.com/huggingface/datasets/pull/6045 | 1,808,072,270 | PR_kwDODunzps5Vr-r1 | 6,045 | Check if column names match in Parquet loader only when config `features` are specified | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 8 | "2023-07-17T15:50:15Z" | "2023-07-24T14:45:56Z" | "2023-07-24T14:35:03Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6045.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6045",
"merged_at": "2023-07-24T14:35:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6045.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6045"
} | Fix #6039 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6045/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6045/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6044 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6044/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6044/comments | https://api.github.com/repos/huggingface/datasets/issues/6044/events | https://github.com/huggingface/datasets/pull/6044 | 1,808,057,906 | PR_kwDODunzps5Vr7jr | 6,044 | Rename "pattern" to "path" in YAML data_files configs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 10 | "2023-07-17T15:41:16Z" | "2023-07-19T16:59:55Z" | "2023-07-19T16:48:06Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6044.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6044",
"merged_at": "2023-07-19T16:48:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6044.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6044"
} | To make it easier for users to understand.
They can use "path" to specify a single path, <s>or "paths" to use a list of paths.</s>
Glob patterns are still supported, though.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6044/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6044/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6043/comments | https://api.github.com/repos/huggingface/datasets/issues/6043/events | https://github.com/huggingface/datasets/issues/6043 | 1,807,771,750 | I_kwDODunzps5rwGhm | 6,043 | Compression kwargs have no effect when saving datasets as csv | {
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"events_url": "https://api.github.com/users/exs-avianello/events{/privacy}",
"followers_url": "https://api.github.com/users/exs-avianello/followers",
"following_url": "https://api.github.com/users/exs-avianello/following{/other_user}",
"gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/exs-avianello",
"id": 128361578,
"login": "exs-avianello",
"node_id": "U_kgDOB6akag",
"organizations_url": "https://api.github.com/users/exs-avianello/orgs",
"received_events_url": "https://api.github.com/users/exs-avianello/received_events",
"repos_url": "https://api.github.com/users/exs-avianello/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions",
"type": "User",
"url": "https://api.github.com/users/exs-avianello"
} | [] | open | false | null | [] | null | 3 | "2023-07-17T13:19:21Z" | "2023-07-22T17:34:18Z" | null | NONE | null | null | null | ### Describe the bug
When attempting to save a dataset as a compressed csv file, the compression kwargs provided to `.to_csv()`, which get piped to pandas' `pandas.DataFrame.to_csv`, have no effect, resulting in the dataset not being compressed.
A warning is raised if explicitly providing a `compression` kwarg, but no warnings are raised if relying on the defaults. This can lead to datasets silently not getting compressed for users expecting the behaviour to match pandas' `.to_csv()`, where the compression format is automatically inferred from the destination path suffix.
### Steps to reproduce the bug
```python
# dataset is not compressed (but at least a warning is emitted)
import datasets
dataset = datasets.load_dataset("rotten_tomatoes", split="train")
dataset.to_csv("uncompressed.csv")
print(os.path.getsize("uncompressed.csv")) # 1008607
dataset.to_csv("compressed.csv.gz", compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1})
print(os.path.getsize("compressed.csv.gz")) # 1008607
```
```shell
>>>
RuntimeWarning: compression has no effect when passing a non-binary object as input.
csv_str = batch.to_pandas().to_csv(
```
```python
# dataset is not compressed and no warnings are emitted
dataset.to_csv("compressed.csv.gz")
print(os.path.getsize("compressed.csv.gz")) # 1008607
# compare with
dataset.to_pandas().to_csv("pandas.csv.gz")
print(os.path.getsize("pandas.csv.gz")) # 418561
```
---
I think that this is because behind the scenes `pandas.DataFrame.to_csv` is always called with a buf-like `path_or_buf`, but users who provide a path-like to `datasets.Dataset.to_csv` are likely not to expect or know that, leading to a mismatch in their understanding of the expected behaviour of the `compression` kwarg.
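Until this is fixed, one workaround (just a sketch) is to write the csv uncompressed and gzip the file afterwards:
```python
import gzip
import shutil

dataset.to_csv("uncompressed.csv")
# Compress the written file manually with the desired compression level
with open("uncompressed.csv", "rb") as src, gzip.open("compressed.csv.gz", "wb", compresslevel=1) as dst:
    shutil.copyfileobj(src, dst)
```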
### Expected behavior
The dataset to be saved as a compressed csv file when providing a `compression` kwarg, or when relying on the default `compression='infer'`
### Environment info
`datasets == 2.13.1`
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6043/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6043/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6042 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6042/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6042/comments | https://api.github.com/repos/huggingface/datasets/issues/6042/events | https://github.com/huggingface/datasets/pull/6042 | 1,807,516,762 | PR_kwDODunzps5VqEyb | 6,042 | Fix unused DatasetInfosDict code in push_to_hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 3 | "2023-07-17T11:03:09Z" | "2023-07-18T16:17:52Z" | "2023-07-18T16:08:42Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6042.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6042",
"merged_at": "2023-07-18T16:08:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6042.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6042"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6042/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6042/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6041/comments | https://api.github.com/repos/huggingface/datasets/issues/6041/events | https://github.com/huggingface/datasets/pull/6041 | 1,807,441,055 | PR_kwDODunzps5Vp0GX | 6,041 | Flatten repository_structure docs on yaml | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 3 | "2023-07-17T10:15:10Z" | "2023-07-17T10:24:51Z" | "2023-07-17T10:16:22Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6041.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6041",
"merged_at": "2023-07-17T10:16:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6041.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6041"
} | To have Splits, Configurations and Builder parameters at the same doc level | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6041/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6041/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6040 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6040/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6040/comments | https://api.github.com/repos/huggingface/datasets/issues/6040/events | https://github.com/huggingface/datasets/pull/6040 | 1,807,410,238 | PR_kwDODunzps5VptVf | 6,040 | Fix legacy_dataset_infos | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 3 | "2023-07-17T09:56:21Z" | "2023-07-17T10:24:34Z" | "2023-07-17T10:16:03Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6040.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6040",
"merged_at": "2023-07-17T10:16:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6040.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6040"
} | was causing transformers CI to fail
https://circleci.com/gh/huggingface/transformers/855105 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6040/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6040/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6039/comments | https://api.github.com/repos/huggingface/datasets/issues/6039/events | https://github.com/huggingface/datasets/issues/6039 | 1,806,508,451 | I_kwDODunzps5rrSGj | 6,039 | Loading column subset from parquet file produces error since version 2.13 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4",
"events_url": "https://api.github.com/users/kklemon/events{/privacy}",
"followers_url": "https://api.github.com/users/kklemon/followers",
"following_url": "https://api.github.com/users/kklemon/following{/other_user}",
"gists_url": "https://api.github.com/users/kklemon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kklemon",
"id": 1430243,
"login": "kklemon",
"node_id": "MDQ6VXNlcjE0MzAyNDM=",
"organizations_url": "https://api.github.com/users/kklemon/orgs",
"received_events_url": "https://api.github.com/users/kklemon/received_events",
"repos_url": "https://api.github.com/users/kklemon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kklemon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kklemon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kklemon"
} | [] | closed | false | null | [] | null | 0 | "2023-07-16T09:13:07Z" | "2023-07-24T14:35:04Z" | "2023-07-24T14:35:04Z" | NONE | null | null | null | ### Describe the bug
`load_dataset` allows loading a subset of columns from a parquet file with the `columns` argument. Since version 2.13, this produces the following error:
```
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/datasets/builder.py", line 1879, in _prepare_split_single
for _, table in generator:
File "/usr/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 68, in _generate_tables
raise ValueError(
ValueError: Tried to load parquet data with columns '['sepal_length']' with mismatching features '{'sepal_length': Value(dtype='float64', id=None), 'sepal_width': Value(dtype='float64', id=None), 'petal_length': Value(dtype='float64', id=None), 'petal_width': Value(dtype='float64', id=None), 'species': Value(dtype='string', id=None)}'
```
This seems to occur because `datasets` is checking whether the columns in the schema exactly match the provided list of columns, instead of whether they are a subset.
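As a workaround in the meantime (a sketch), the column subset can be read with pyarrow directly and wrapped into a `Dataset`:
```python
import pyarrow.parquet as pq
from datasets import Dataset

# Read only the desired columns, then wrap the in-memory Arrow table
table = pq.read_table("iris.parquet", columns=["sepal_length"])
dataset = Dataset(table)
```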
### Steps to reproduce the bug
```python
# Prepare some sample data
import pandas as pd
iris = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv')
iris.to_parquet('iris.parquet')
# ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
print(iris.columns)
# Load data with datasets
from datasets import load_dataset
# Load full parquet file
dataset = load_dataset('parquet', data_files='iris.parquet')
# Load column subset; throws error for datasets>=2.13
dataset = load_dataset('parquet', data_files='iris.parquet', columns=['sepal_length'])
```
### Expected behavior
No error should be thrown and the given column subset should be loaded.
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.9
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6039/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6039/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6038/comments | https://api.github.com/repos/huggingface/datasets/issues/6038/events | https://github.com/huggingface/datasets/issues/6038 | 1,805,960,244 | I_kwDODunzps5rpMQ0 | 6,038 | File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare if str(split_generator.split_info.name).lower() == "all": AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'? | {
"avatar_url": "https://avatars.githubusercontent.com/u/53547009?v=4",
"events_url": "https://api.github.com/users/BaiMeiyingxue/events{/privacy}",
"followers_url": "https://api.github.com/users/BaiMeiyingxue/followers",
"following_url": "https://api.github.com/users/BaiMeiyingxue/following{/other_user}",
"gists_url": "https://api.github.com/users/BaiMeiyingxue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BaiMeiyingxue",
"id": 53547009,
"login": "BaiMeiyingxue",
"node_id": "MDQ6VXNlcjUzNTQ3MDA5",
"organizations_url": "https://api.github.com/users/BaiMeiyingxue/orgs",
"received_events_url": "https://api.github.com/users/BaiMeiyingxue/received_events",
"repos_url": "https://api.github.com/users/BaiMeiyingxue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BaiMeiyingxue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BaiMeiyingxue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BaiMeiyingxue"
} | [] | closed | false | null | [] | null | 1 | "2023-07-15T07:58:08Z" | "2023-07-24T11:54:15Z" | "2023-07-24T11:54:15Z" | NONE | null | null | null | Hi, I use the code below to load a local file:
```
def _split_generators(self, dl_manager):
# TODO: This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
# If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
# dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLS
# It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
# By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
# urls = _URLS[self.config.name]
data_dir = dl_manager.download_and_extract(_URLs)
print(data_dir)
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
# These kwargs will be passed to _generate_examples
gen_kwargs={
"filepath": os.path.join(data_dir["train"]),
"split": "train",
},
),
datasets.SplitGenerator(
name=datasets.Split.VALIDATION,
# These kwargs will be passed to _generate_examples
gen_kwargs={
"filepath": os.path.join(data_dir["dev"]),
"split": "dev",
},
),
]
```
and this error occurred:
```
Traceback (most recent call last):
File "/home/zhizhou/data1/zhanghao/huggingface/FineTuning_Transformer/load_local_dataset.py", line 2, in <module>
dataset = load_dataset("./QA_script.py",data_files='/home/zhizhou/.cache/huggingface/datasets/conversatiom_corps/part_file.json')
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/load.py", line 1809, in load_dataset
builder_instance.download_and_prepare(
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 1670, in _download_and_prepare
super()._download_and_prepare(
File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare
if str(split_generator.split_info.name).lower() == "all":
AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'?
```
Could you help me? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6038/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6038/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6037/comments | https://api.github.com/repos/huggingface/datasets/issues/6037/events | https://github.com/huggingface/datasets/issues/6037 | 1,805,887,184 | I_kwDODunzps5ro6bQ | 6,037 | Documentation links to examples are broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/david-waterworth",
"id": 5028974,
"login": "david-waterworth",
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/david-waterworth"
} | [] | closed | false | null | [] | null | 2 | "2023-07-15T04:54:50Z" | "2023-07-17T22:35:14Z" | "2023-07-17T15:10:32Z" | NONE | null | null | null | ### Describe the bug
The links at the bottom of [add_dataset](https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html) to examples of specific datasets are all broken, for example
- text classification: [ag_news](https://github.com/huggingface/datasets/blob/master/datasets/ag_news/ag_news.py) (original data are in csv files)
### Steps to reproduce the bug
Click on links to examples from latest documentation
### Expected behavior
Links should be up to date - it might be more stable to link to https://huggingface.co/datasets/ag_news/blob/main/ag_news.py
### Environment info
dataset v1.2.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6037/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6037/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6036/comments | https://api.github.com/repos/huggingface/datasets/issues/6036/events | https://github.com/huggingface/datasets/pull/6036 | 1,805,138,898 | PR_kwDODunzps5ViKc4 | 6,036 | Deprecate search API | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | 9 | "2023-07-14T16:22:09Z" | "2023-09-07T16:44:32Z" | null | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6036.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6036",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6036.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6036"
} | The Search API only supports Faiss and ElasticSearch as vector stores, is somewhat difficult to maintain (e.g., it still doesn't support ElasticSearch 8.0, testing is difficult, ...), does not have the best design (it adds a bunch of methods to the `Dataset` class that are only useful after creating an index), its usage doesn't seem to be significant, and it is not integrated with the Hub. Since we have no plans/bandwidth to improve it and better alternatives such as `langchain` and `docarray` exist, I think it should be deprecated (and eventually removed).
If we decide to deprecate/remove it, the following usage instances need to be addressed:
* [Course](https://github.com/huggingface/course/blob/0018bb434204d9750a03592cb0d4e846093218d8/chapters/en/chapter5/6.mdx#L342 ) and [Blog](https://github.com/huggingface/blog/blob/4897c6f73d4492a0955ade503281711d01840e09/image-search-datasets.md?plain=1#L252) - calling the FAISS API directly should be OK in these instances as it's pretty simple to use for basic scenarios (see the minimal sketch after this list). Alternatively, we can use `langchain`, but this adds an extra dependency
* [Transformers](https://github.com/huggingface/transformers/blob/50726f9ea7afc6113da617f8f4ca1ab264a5e28a/src/transformers/models/rag/retrieval_rag.py#L183) - we can use the FAISS API directly and store the index as a separate attribute (and instead of building the `wiki_dpr` index each time the dataset is generated, we can generate it once, push it to the Hub repo, and then read it from there)
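For reference, the direct FAISS usage this implies for the basic scenario is quite small (a minimal sketch with toy data):
```python
import faiss
import numpy as np

embeddings = np.random.rand(1000, 768).astype("float32")  # toy embeddings
index = faiss.IndexFlatL2(embeddings.shape[1])  # exact L2 index over 768-dim vectors
index.add(embeddings)

# Retrieve the 5 nearest neighbors of the first vector
distances, indices = index.search(embeddings[:1], 5)
```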
cc @huggingface/datasets @LysandreJik for the opinion | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6036/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6036/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6035/comments | https://api.github.com/repos/huggingface/datasets/issues/6035/events | https://github.com/huggingface/datasets/pull/6035 | 1,805,087,687 | PR_kwDODunzps5Vh_QR | 6,035 | Dataset representation | {
"avatar_url": "https://avatars.githubusercontent.com/u/63643948?v=4",
"events_url": "https://api.github.com/users/Ganryuu/events{/privacy}",
"followers_url": "https://api.github.com/users/Ganryuu/followers",
"following_url": "https://api.github.com/users/Ganryuu/following{/other_user}",
"gists_url": "https://api.github.com/users/Ganryuu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ganryuu",
"id": 63643948,
"login": "Ganryuu",
"node_id": "MDQ6VXNlcjYzNjQzOTQ4",
"organizations_url": "https://api.github.com/users/Ganryuu/orgs",
"received_events_url": "https://api.github.com/users/Ganryuu/received_events",
"repos_url": "https://api.github.com/users/Ganryuu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ganryuu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ganryuu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ganryuu"
} | [] | open | false | null | [] | null | 1 | "2023-07-14T15:42:37Z" | "2023-07-19T19:41:35Z" | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6035.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6035",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6035.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6035"
} | Both `__repr__` and `_repr_html_` are now similar to those of Polars. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6035/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6035/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6034/comments | https://api.github.com/repos/huggingface/datasets/issues/6034/events | https://github.com/huggingface/datasets/issues/6034 | 1,804,501,361 | I_kwDODunzps5rjoFx | 6,034 | load_dataset hangs on WSL | {
"avatar_url": "https://avatars.githubusercontent.com/u/20140522?v=4",
"events_url": "https://api.github.com/users/Andy-Zhou2/events{/privacy}",
"followers_url": "https://api.github.com/users/Andy-Zhou2/followers",
"following_url": "https://api.github.com/users/Andy-Zhou2/following{/other_user}",
"gists_url": "https://api.github.com/users/Andy-Zhou2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Andy-Zhou2",
"id": 20140522,
"login": "Andy-Zhou2",
"node_id": "MDQ6VXNlcjIwMTQwNTIy",
"organizations_url": "https://api.github.com/users/Andy-Zhou2/orgs",
"received_events_url": "https://api.github.com/users/Andy-Zhou2/received_events",
"repos_url": "https://api.github.com/users/Andy-Zhou2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Andy-Zhou2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Andy-Zhou2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Andy-Zhou2"
} | [] | closed | false | null | [] | null | 3 | "2023-07-14T09:03:10Z" | "2023-07-14T14:48:29Z" | "2023-07-14T14:48:29Z" | NONE | null | null | null | ### Describe the bug
load_dataset simply hangs. It happens once every ~5 times, and interestingly it hangs for a multiple of 5 minutes (5/10/15 minutes). Using the profiler in PyCharm shows that the time is spent at <method 'connect' of '_socket.socket' objects>. However, a local cache is available, so I am not sure why a socket connection is needed. ([profiler result](https://ibb.co/0Btbbp8))
It only happens on WSL for me. It works for native Windows and my MacBook. (cache quickly recognized and loaded within a second).
### Steps to reproduce the bug
I am using Ubuntu 22.04.2 LTS (GNU/Linux 5.15.90.1-microsoft-standard-WSL2 x86_64)
Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] on linux
>>> import datasets
>>> datasets.load_dataset('ai2_arc', 'ARC-Challenge') # hangs for 5/10/15 minutes
### Expected behavior
cache quickly recognized and loaded within a second
### Environment info
Please let me know if I should provide more environment information. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6034/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6034/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6033/comments | https://api.github.com/repos/huggingface/datasets/issues/6033/events | https://github.com/huggingface/datasets/issues/6033 | 1,804,482,051 | I_kwDODunzps5rjjYD | 6,033 | `map` function doesn't fully utilize `input_columns`. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8953934?v=4",
"events_url": "https://api.github.com/users/kwonmha/events{/privacy}",
"followers_url": "https://api.github.com/users/kwonmha/followers",
"following_url": "https://api.github.com/users/kwonmha/following{/other_user}",
"gists_url": "https://api.github.com/users/kwonmha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kwonmha",
"id": 8953934,
"login": "kwonmha",
"node_id": "MDQ6VXNlcjg5NTM5MzQ=",
"organizations_url": "https://api.github.com/users/kwonmha/orgs",
"received_events_url": "https://api.github.com/users/kwonmha/received_events",
"repos_url": "https://api.github.com/users/kwonmha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kwonmha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kwonmha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kwonmha"
} | [] | closed | false | null | [] | null | 0 | "2023-07-14T08:49:28Z" | "2023-07-14T09:16:04Z" | "2023-07-14T09:16:04Z" | NONE | null | null | null | ### Describe the bug
I wanted to select only some columns of data.
And I thought that's why the argument `input_columns` exists.
What I expected is like this:
If there are ["a", "b", "c", "d"] columns, and if I set `input_columns=["a", "d"]`, the data will have only ["a", "d"] columns.
But it doesn't select columns.
It preserves existing columns.
The main cause is the `update` call on the dictionary `transformed_batch`.
https://github.com/huggingface/datasets/blob/682d21e94ab1e64c11b583de39dc4c93f0101c5a/src/datasets/iterable_dataset.py#L687-L691
`transformed_batch` gets all the columns by `transformed_batch = dict(batch)`.
Even though `function_args` selects `input_columns`, `update` preserves the columns that are not in `input_columns`.
I think it should build a new dictionary containing only the columns in `input_columns`, like this:
```
# transformed_batch = dict(batch)
# transformed_batch.update(self.function(*function_args, **self.fn_kwargs))
# This is what I think correct.
transformed_batch = self.function(*function_args, **self.fn_kwargs)
```
Please let me know how `input_columns` is meant to be used.
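In the meantime, a workaround that gives the result I expected (a sketch; `my_function` stands for the mapped function) is to drop the unwanted columns explicitly:
```python
# With columns ["a", "b", "c", "d"], keep only the function's output plus ["a", "d"]
dataset = dataset.map(my_function, batched=True, remove_columns=["b", "c"])
```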
### Steps to reproduce the bug
Described all above.
### Expected behavior
Described all above.
### Environment info
datasets: 2.12
python: 3.8 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6033/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6033/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6032 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6032/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6032/comments | https://api.github.com/repos/huggingface/datasets/issues/6032/events | https://github.com/huggingface/datasets/issues/6032 | 1,804,358,679 | I_kwDODunzps5rjFQX | 6,032 | DownloadConfig.proxies not work when load_dataset_builder calling HfApi.dataset_info | {
"avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4",
"events_url": "https://api.github.com/users/codingl2k1/events{/privacy}",
"followers_url": "https://api.github.com/users/codingl2k1/followers",
"following_url": "https://api.github.com/users/codingl2k1/following{/other_user}",
"gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/codingl2k1",
"id": 138426806,
"login": "codingl2k1",
"node_id": "U_kgDOCEA5tg",
"organizations_url": "https://api.github.com/users/codingl2k1/orgs",
"received_events_url": "https://api.github.com/users/codingl2k1/received_events",
"repos_url": "https://api.github.com/users/codingl2k1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/codingl2k1"
} | [] | open | false | null | [] | null | 5 | "2023-07-14T07:22:55Z" | "2023-09-11T13:50:41Z" | null | NONE | null | null | null | ### Describe the bug
```python
download_config = DownloadConfig(proxies={'https': '<my proxy>'})
builder = load_dataset_builder(..., download_config=download_config)
```
But when getting the dataset info from `HfApi`, the HTTP requests do not use the proxies.
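As a workaround for now (a sketch), the standard proxy environment variables should work for this call, since `HfApi` uses `requests`, which honors them:
```python
import os

os.environ["HTTPS_PROXY"] = "<my proxy>"  # picked up by requests, and thus by HfApi

from datasets import load_dataset_builder

builder = load_dataset_builder(...)
```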
### Steps to reproduce the bug
1. Setup proxies in DownloadConfig.
2. Call `load_dataset_builder` with the download_config.
3. Inspect the call stack in HfApi.dataset_info.

### Expected behavior
DownloadConfig.proxies should also be applied when getting the dataset info.
### Environment info
https://github.com/huggingface/datasets/commit/406b2212263c0d33f267e35b917f410ff6b3bc00
Python 3.11.4 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6032/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6032/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6031/comments | https://api.github.com/repos/huggingface/datasets/issues/6031/events | https://github.com/huggingface/datasets/issues/6031 | 1,804,183,858 | I_kwDODunzps5riaky | 6,031 | Argument type for map function changes when using `input_columns` for `IterableDataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8953934?v=4",
"events_url": "https://api.github.com/users/kwonmha/events{/privacy}",
"followers_url": "https://api.github.com/users/kwonmha/followers",
"following_url": "https://api.github.com/users/kwonmha/following{/other_user}",
"gists_url": "https://api.github.com/users/kwonmha/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kwonmha",
"id": 8953934,
"login": "kwonmha",
"node_id": "MDQ6VXNlcjg5NTM5MzQ=",
"organizations_url": "https://api.github.com/users/kwonmha/orgs",
"received_events_url": "https://api.github.com/users/kwonmha/received_events",
"repos_url": "https://api.github.com/users/kwonmha/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kwonmha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kwonmha/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kwonmha"
} | [] | closed | false | null | [] | null | 1 | "2023-07-14T05:11:14Z" | "2023-07-14T14:44:15Z" | "2023-07-14T14:44:15Z" | NONE | null | null | null | ### Describe the bug
I wrote a `tokenize(examples)` function to pass as an argument to the `map` function of an `IterableDataset`.
It processes a dictionary-typed `examples` parameter.
It is used in `train_dataset = train_dataset.map(tokenize, batched=True)`
No error is raised.
Then I found some unnecessary keys and values in `examples`, so I added the `input_columns` argument to the `map` function to select keys and values.
It gives me an error saying
```
TypeError: tokenize() takes 1 positional argument but 3 were given.
```
The code below matters.
https://github.com/huggingface/datasets/blob/406b2212263c0d33f267e35b917f410ff6b3bc00/src/datasets/iterable_dataset.py#L687
For example, `inputs = {"a":1, "b":2, "c":3}`.
If `self.input_columns` is `None`,
`inputs` is a dictionary and `function_args` becomes a `list` containing that single `dict`.
`function_args` becomes `[{"a":1, "b":2, "c":3}]`
Otherwise, let's say `self.input_columns = ["a", "c"]`.
`[inputs[col] for col in self.input_columns]` results in `[1, 3]`.
I think it should be `[{"a":1, "c":3}]`.
I want to ask if the resulting format is intended.
Maybe I can modify `tokenize()` to have 2 parameters in this case instead of having 1 dictionary.
But this is confusing to me.
Or it should be fixed as `[{col:inputs[col] for col in self.input_columns}]`
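For reference, with the current behavior the mapped function has to accept one positional argument per entry of `input_columns` (a sketch; the column values are passed positionally, in the order of `input_columns`):
```python
def tokenize(a, c):
    # with batched=True, a and c are lists of column values
    return {"a_plus_c": [x + y for x, y in zip(a, c)]}

train_dataset = train_dataset.map(tokenize, batched=True, input_columns=["a", "c"])
```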
### Steps to reproduce the bug
Run `map` function of `IterableDataset` with `input_columns` argument.
### Expected behavior
It would be better for `function_args` to keep the same format.
I think it should be `[{"a":1, "c":3}]`.
### Environment info
dataset version: 2.12
python: 3.8 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6031/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6031/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6030 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6030/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6030/comments | https://api.github.com/repos/huggingface/datasets/issues/6030/events | https://github.com/huggingface/datasets/pull/6030 | 1,803,864,744 | PR_kwDODunzps5Vd0ZG | 6,030 | fixed typo in comment | {
"avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4",
"events_url": "https://api.github.com/users/NightMachinery/events{/privacy}",
"followers_url": "https://api.github.com/users/NightMachinery/followers",
"following_url": "https://api.github.com/users/NightMachinery/following{/other_user}",
"gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NightMachinery",
"id": 36224762,
"login": "NightMachinery",
"node_id": "MDQ6VXNlcjM2MjI0NzYy",
"organizations_url": "https://api.github.com/users/NightMachinery/orgs",
"received_events_url": "https://api.github.com/users/NightMachinery/received_events",
"repos_url": "https://api.github.com/users/NightMachinery/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NightMachinery"
} | [] | closed | false | null | [] | null | 2 | "2023-07-13T22:49:57Z" | "2023-07-14T14:21:58Z" | "2023-07-14T14:13:38Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6030.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6030",
"merged_at": "2023-07-14T14:13:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6030.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6030"
} | This mistake was a bit confusing, so I thought it was worth sending a PR over. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6030/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6030/timeline | null | null | true |