url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | is_pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2079/comments | https://api.github.com/repos/huggingface/datasets/issues/2079/events | https://github.com/huggingface/datasets/pull/2079 | 834,920,493 | MDExOlB1bGxSZXF1ZXN0NTk1NjU2MDQ5 | 2,079 | Refactorize Metric.compute signature to force keyword arguments only | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,616,079,950,000 | 1,616,513,504,000 | 1,616,513,504,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2079",
"html_url": "https://github.com/huggingface/datasets/pull/2079",
"diff_url": "https://github.com/huggingface/datasets/pull/2079.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2079.patch",
"merged_at": 1616513504000
} | Minor refactoring of Metric.compute signature to force the use of keyword arguments, by using the single star syntax. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2079/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2079/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2078/comments | https://api.github.com/repos/huggingface/datasets/issues/2078/events | https://github.com/huggingface/datasets/issues/2078 | 834,694,819 | MDU6SXNzdWU4MzQ2OTQ4MTk= | 2,078 | MemoryError when computing WER metric | {
"login": "diego-fustes",
"id": 5707233,
"node_id": "MDQ6VXNlcjU3MDcyMzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5707233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/diego-fustes",
"html_url": "https://github.com/diego-fustes",
"followers_url": "https://api.github.com/users/diego-fustes/followers",
"following_url": "https://api.github.com/users/diego-fustes/following{/other_user}",
"gists_url": "https://api.github.com/users/diego-fustes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/diego-fustes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diego-fustes/subscriptions",
"organizations_url": "https://api.github.com/users/diego-fustes/orgs",
"repos_url": "https://api.github.com/users/diego-fustes/repos",
"events_url": "https://api.github.com/users/diego-fustes/events{/privacy}",
"received_events_url": "https://api.github.com/users/diego-fustes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067393914,
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug",
"name": "metric bug",
"color": "25b21e",
"default": false,
"description": "A bug in a metric script"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,616,067,005,000 | 1,619,857,909,000 | 1,617,693,643,000 | NONE | null | null | null | Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation:
```
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
However, I receive the following exception:
`Traceback (most recent call last):
  File "/home/diego/IpGlobal/wav2vec/test_wav2vec.py", line 51, in <module>
    print(wer.compute(predictions=result["predicted"], references=result["target"]))
  File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/datasets/metric.py", line 403, in compute
    output = self._compute(predictions=predictions, references=references, **kwargs)
  File "/home/diego/.cache/huggingface/modules/datasets_modules/metrics/wer/73b2d32b723b7fb8f204d785c00980ae4d937f12a65466f8fdf78706e2951281/wer.py", line 94, in _compute
    return wer(references, predictions)
  File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 81, in wer
    truth, hypothesis, truth_transform, hypothesis_transform, **kwargs
  File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 192, in compute_measures
    H, S, D, I = _get_operation_counts(truth, hypothesis)
  File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 273, in _get_operation_counts
    editops = Levenshtein.editops(source_string, destination_string)
MemoryError`
My system has more than 10GB of available RAM. Looking at the code, I think it could be related to the way jiwer does the calculation, as it pastes all the sentences into a single string before calling the Levenshtein `editops` function.
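If that is the cause, a possible workaround (a sketch of what I would try, not an official fix) is to score one sentence pair at a time with `jiwer` and aggregate the counts, so the strings handed to `Levenshtein.editops` stay small:
```python
import jiwer

# Hypothetical chunk-wise aggregation: compute_measures is the same jiwer
# function shown in the traceback above, called here on one pair at a time.
# `result` is the dict from the snippet earlier in this issue.
total_errors, total_words = 0, 0
for ref, pred in zip(result["target"], result["predicted"]):
    m = jiwer.compute_measures(ref, pred)
    total_errors += m["substitutions"] + m["deletions"] + m["insertions"]
    total_words += m["substitutions"] + m["deletions"] + m["hits"]

print(total_errors / total_words)  # corpus-level WER
```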
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2078/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2077/comments | https://api.github.com/repos/huggingface/datasets/issues/2077/events | https://github.com/huggingface/datasets/pull/2077 | 834,649,536 | MDExOlB1bGxSZXF1ZXN0NTk1NDI0MTYw | 2,077 | Bump huggingface_hub version | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,616,064,874,000 | 1,616,067,206,000 | 1,616,067,206,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2077",
"html_url": "https://github.com/huggingface/datasets/pull/2077",
"diff_url": "https://github.com/huggingface/datasets/pull/2077.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2077.patch",
"merged_at": 1616067206000
} | `0.0.2 => 0.0.6` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2077/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2076/comments | https://api.github.com/repos/huggingface/datasets/issues/2076/events | https://github.com/huggingface/datasets/issues/2076 | 834,445,296 | MDU6SXNzdWU4MzQ0NDUyOTY= | 2,076 | Issue: Dataset download error | {
"login": "XuhuiZhou",
"id": 20436061,
"node_id": "MDQ6VXNlcjIwNDM2MDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/20436061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XuhuiZhou",
"html_url": "https://github.com/XuhuiZhou",
"followers_url": "https://api.github.com/users/XuhuiZhou/followers",
"following_url": "https://api.github.com/users/XuhuiZhou/following{/other_user}",
"gists_url": "https://api.github.com/users/XuhuiZhou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XuhuiZhou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XuhuiZhou/subscriptions",
"organizations_url": "https://api.github.com/users/XuhuiZhou/orgs",
"repos_url": "https://api.github.com/users/XuhuiZhou/repos",
"events_url": "https://api.github.com/users/XuhuiZhou/events{/privacy}",
"received_events_url": "https://api.github.com/users/XuhuiZhou/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,616,049,366,000 | 1,616,413,951,000 | null | NONE | null | null | null | The download link in `iwslt2017.py` file does not seem to work anymore.
For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz`
Would be nice if we could modify the script and use the new downloadable link? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2076/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2075/comments | https://api.github.com/repos/huggingface/datasets/issues/2075/events | https://github.com/huggingface/datasets/issues/2075 | 834,301,246 | MDU6SXNzdWU4MzQzMDEyNDY= | 2,075 | ConnectionError: Couldn't reach common_voice.py | {
"login": "LifaSun",
"id": 6188893,
"node_id": "MDQ6VXNlcjYxODg4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6188893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LifaSun",
"html_url": "https://github.com/LifaSun",
"followers_url": "https://api.github.com/users/LifaSun/followers",
"following_url": "https://api.github.com/users/LifaSun/following{/other_user}",
"gists_url": "https://api.github.com/users/LifaSun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LifaSun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LifaSun/subscriptions",
"organizations_url": "https://api.github.com/users/LifaSun/orgs",
"repos_url": "https://api.github.com/users/LifaSun/repos",
"events_url": "https://api.github.com/users/LifaSun/events{/privacy}",
"received_events_url": "https://api.github.com/users/LifaSun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,616,030,346,000 | 1,616,236,181,000 | 1,616,236,181,000 | NONE | null | null | null | When I run:
from datasets import load_dataset, load_metric
common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation")
common_voice_test = load_dataset("common_voice", "zh-CN", split="test")
Got:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/common_voice/common_voice.py
Version: 1.4.1
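(A quick connectivity check one could run — illustrative code, not from the original report — since `load_dataset` fetches the loading script from raw.githubusercontent.com:)
```python
import requests

# A non-200 status (or a timeout) here points to a proxy/DNS/firewall issue
# blocking the script download that load_dataset performs under the hood.
url = "https://raw.githubusercontent.com/huggingface/datasets/master/datasets/common_voice/common_voice.py"
print(requests.head(url, timeout=10).status_code)
```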
Thanks! @lhoestq @LysandreJik @thomwolf | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2075/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2074/comments | https://api.github.com/repos/huggingface/datasets/issues/2074/events | https://github.com/huggingface/datasets/pull/2074 | 834,268,463 | MDExOlB1bGxSZXF1ZXN0NTk1MTIzMjYw | 2,074 | Fix size categories in YAML Tags | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,616,025,756,000 | 1,616,519,470,000 | 1,616,519,470,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2074",
"html_url": "https://github.com/huggingface/datasets/pull/2074",
"diff_url": "https://github.com/huggingface/datasets/pull/2074.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2074.patch",
"merged_at": 1616519469000
} | This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
    if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
        infos = {}
        stats = {}
        st = ''
        with open(f'datasets/{dataset}/README.md') as f:
            d = f.read()
        start_dash = d.find('---') + 3
        end_dash = d[start_dash:].find('---') + 3
        rest_text = d[end_dash + 3:]
        try:
            full_yaml = OmegaConf.create(d[start_dash:end_dash])
            readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
        except Exception as e:
            print(e)
            continue
        try:
            with open(f'datasets/{dataset}/dataset_infos.json') as f:
                data = json.load(f)
        except Exception as e:
            print(e)
            continue  # Skip those without infos.
        done_set = set([])
        num_keys = len(data.keys())
        for keys in data:
            # dataset = load_dataset('opus100', f'{dirs}')
            total = 0
            for split in data[keys]['splits']:
                total = total + data[keys]['splits'][split]['num_examples']
            if total < 1000:
                st += "- n<1K" + '\n'
                infos[keys] = ["n<1K"]
            elif total >= 1000 and total < 10000:
                infos[keys] = ["1K<n<10K"]
            elif total >= 10000 and total < 100000:
                infos[keys] = ["10K<n<100K"]
            elif total >= 100000 and total < 1000000:
                infos[keys] = ["100K<n<1M"]
            elif total >= 1000000 and total < 10000000:
                infos[keys] = ["1M<n<10M"]
            elif total >= 10000000 and total < 100000000:
                infos[keys] = ["10M<n<100M"]
            elif total >= 100000000 and total < 1000000000:
                infos[keys] = ["100M<n<1B"]
            elif total >= 1000000000 and total < 10000000000:
                infos[keys] = ["1B<n<10B"]
            elif total >= 10000000000 and total < 100000000000:
                infos[keys] = ["10B<n<100B"]
            elif total >= 100000000000 and total < 1000000000000:
                infos[keys] = ["100B<n<1T"]
            else:
                infos[keys] = ["n>1T"]
            done_set = done_set.union(infos[keys])
        if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
            print('-' * 30)
            print(done_set)
            print(f"Changing Full YAML for {dataset}")
            print(OmegaConf.to_yaml(full_yaml))
            if len(done_set) == 1:
                full_yaml['size_categories'] = list(done_set)
            else:
                full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
            full_yaml_string = OmegaConf.to_yaml(full_yaml)
            print('-' * 30)
            print(full_yaml_string)
            inp = input('Do you wish to continue?(Y/N)')
            if inp == 'Y':
                with open(f'./datasets/{dataset}/README.md', 'w') as f:
                    f.write('---\n')
                    f.write(full_yaml_string)
                    f.write('---')
                    f.write(rest_text)
            else:
                break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
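As an aside, the threshold ladder above could be collapsed into a single lookup table — a hypothetical refactor, not part of this PR:
```python
# Same inclusive lower bounds as the if/elif chain in the script above.
BOUNDS = [
    (1_000, "n<1K"), (10_000, "1K<n<10K"), (100_000, "10K<n<100K"),
    (1_000_000, "100K<n<1M"), (10_000_000, "1M<n<10M"),
    (100_000_000, "10M<n<100M"), (1_000_000_000, "100M<n<1B"),
    (10_000_000_000, "1B<n<10B"), (100_000_000_000, "10B<n<100B"),
    (1_000_000_000_000, "100B<n<1T"),
]

def size_category(total: int) -> str:
    for upper, label in BOUNDS:
        if total < upper:
            return label
    return "n>1T"
```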
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multilingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code; if there are more such datasets, then I'll ignore them too. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2074/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2073/comments | https://api.github.com/repos/huggingface/datasets/issues/2073/events | https://github.com/huggingface/datasets/pull/2073 | 834,192,501 | MDExOlB1bGxSZXF1ZXN0NTk1MDYyMzQ2 | 2,073 | Fixes check of TF_AVAILABLE and TORCH_AVAILABLE | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,616,016,533,000 | 1,616,058,565,000 | 1,616,058,564,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2073",
"html_url": "https://github.com/huggingface/datasets/pull/2073",
"diff_url": "https://github.com/huggingface/datasets/pull/2073.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2073.patch",
"merged_at": 1616058564000
} | # What is this PR doing
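For context, the `transformers`-style availability check described below looks roughly like this — an illustrative sketch, where the names and env-var handling are assumptions rather than the exact PR code:
```python
import importlib.util
import os

# Respect an explicit opt-out, then probe for the package without importing it.
USE_TORCH = os.environ.get("USE_TORCH", "AUTO").upper()
TORCH_AVAILABLE = (
    USE_TORCH in ("1", "ON", "YES", "AUTO")
    and importlib.util.find_spec("torch") is not None
)
```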
This PR implements checks for whether `Tensorflow` and `Pytorch` are available, in the same way as `transformers` does. I added additional checks for the different `Tensorflow` and `torch` versions. #2068 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2073/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2072/comments | https://api.github.com/repos/huggingface/datasets/issues/2072/events | https://github.com/huggingface/datasets/pull/2072 | 834,054,837 | MDExOlB1bGxSZXF1ZXN0NTk0OTQ5NjA4 | 2,072 | Fix docstring issues | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,616,004,824,000 | 1,616,574,057,000 | 1,616,071,281,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2072",
"html_url": "https://github.com/huggingface/datasets/pull/2072",
"diff_url": "https://github.com/huggingface/datasets/pull/2072.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2072.patch",
"merged_at": 1616071281000
} | Fix docstring issues. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2072/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2071/comments | https://api.github.com/repos/huggingface/datasets/issues/2071/events | https://github.com/huggingface/datasets/issues/2071 | 833,950,824 | MDU6SXNzdWU4MzM5NTA4MjQ= | 2,071 | Multiprocessing is slower than single process | {
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,997,338,000 | 1,616,058,623,000 | 1,616,058,623,000 | CONTRIBUTOR | null | null | null | ```python
# benchmark_filter.py
import logging
import sys
import time

from datasets import load_dataset, set_caching_enabled

if __name__ == "__main__":
    set_caching_enabled(False)
    logging.basicConfig(level=logging.DEBUG)

    bc = load_dataset("bookcorpus")

    now = time.time()
    try:
        bc["train"].filter(lambda x: len(x["text"]) < 64, num_proc=int(sys.argv[1]))
    except Exception as e:
        print(f"cancelled: {e}")
    elapsed = time.time() - now

    print(elapsed)
```
Running `python benchmark_filter.py 1` (20min+) is faster than `python benchmark_filter.py 2` (2hrs+) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2071/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2070/comments | https://api.github.com/repos/huggingface/datasets/issues/2070/events | https://github.com/huggingface/datasets/issues/2070 | 833,799,035 | MDU6SXNzdWU4MzM3OTkwMzU= | 2,070 | ArrowInvalid issue for squad v2 dataset | {
"login": "MichaelYxWang",
"id": 29818977,
"node_id": "MDQ6VXNlcjI5ODE4OTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/29818977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichaelYxWang",
"html_url": "https://github.com/MichaelYxWang",
"followers_url": "https://api.github.com/users/MichaelYxWang/followers",
"following_url": "https://api.github.com/users/MichaelYxWang/following{/other_user}",
"gists_url": "https://api.github.com/users/MichaelYxWang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichaelYxWang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichaelYxWang/subscriptions",
"organizations_url": "https://api.github.com/users/MichaelYxWang/orgs",
"repos_url": "https://api.github.com/users/MichaelYxWang/repos",
"events_url": "https://api.github.com/users/MichaelYxWang/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichaelYxWang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,989,109,000 | 1,628,099,836,000 | 1,628,099,836,000 | NONE | null | null | null | Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb).
In the prepare_validation_features function, I made some modifications to tokenize a new set of questions with the original contexts and save them in three different lists called candidate_input_ids, candidate_attention_mask and candidate_token_type_ids. When I try to run the next cell for dataset.map, I get the following error:
`ArrowInvalid: Column 1 named candidate_attention_mask expected length 1180 but got length 1178`
My code is as follows:
```
def generate_candidate_questions(examples):
    val_questions = examples["question"]
    candidate_questions = random.sample(datasets["train"]["question"], len(val_questions))
    candidate_questions = [x[:max_length] for x in candidate_questions]
    return candidate_questions


def prepare_validation_features(examples, use_mixing=False):
    pad_on_right = tokenizer.padding_side == "right"
    tokenized_examples = tokenizer(
        examples["question" if pad_on_right else "context"],
        examples["context" if pad_on_right else "question"],
        truncation="only_second" if pad_on_right else "only_first",
        max_length=max_length,
        stride=doc_stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )
    if use_mixing:
        candidate_questions = generate_candidate_questions(examples)
        tokenized_candidates = tokenizer(
            candidate_questions if pad_on_right else examples["context"],
            examples["context"] if pad_on_right else candidate_questions,
            truncation="only_second" if pad_on_right else "only_first",
            max_length=max_length,
            stride=doc_stride,
            return_overflowing_tokens=True,
            return_offsets_mapping=True,
            padding="max_length",
        )
    sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
    tokenized_examples["example_id"] = []
    if use_mixing:
        tokenized_examples["candidate_input_ids"] = tokenized_candidates["input_ids"]
        tokenized_examples["candidate_attention_mask"] = tokenized_candidates["attention_mask"]
        tokenized_examples["candidate_token_type_ids"] = tokenized_candidates["token_type_ids"]
    for i in range(len(tokenized_examples["input_ids"])):
        sequence_ids = tokenized_examples.sequence_ids(i)
        context_index = 1 if pad_on_right else 0
        sample_index = sample_mapping[i]
        tokenized_examples["example_id"].append(examples["id"][sample_index])
        tokenized_examples["offset_mapping"][i] = [
            (o if sequence_ids[k] == context_index else None)
            for k, o in enumerate(tokenized_examples["offset_mapping"][i])
        ]
    return tokenized_examples


validation_features = datasets["validation"].map(
    lambda xs: prepare_validation_features(xs, True),
    batched=True,
    remove_columns=datasets["validation"].column_names
)
```
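A minimal illustration of why this error appears — hypothetical toy code, not from the notebook: in a batched `map`, every returned column must have the same length, and overflowing tokenization can produce a different number of features for the candidates than for the original questions:
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})

def bad_fn(batch):
    # 4 rows in one column, 3 in the other -> pyarrow raises ArrowInvalid,
    # just like "expected length 1180 but got length 1178" above.
    return {"a": [0] * 4, "b": [0] * 3}

# ds.map(bad_fn, batched=True, remove_columns=["x"])  # would fail
```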
I guess this might happen because of batched=True. I see similar issues in this repo related to Arrow table length mismatch errors, but in their cases the numbers vary a lot. In my case, this error always happens when the expected and actual lengths are very close. Thanks for the help! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2070/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2069/comments | https://api.github.com/repos/huggingface/datasets/issues/2069/events | https://github.com/huggingface/datasets/pull/2069 | 833,768,926 | MDExOlB1bGxSZXF1ZXN0NTk0NzA5ODYw | 2,069 | Add and fix docstring for NamedSplit | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,987,168,000 | 1,616,063,260,000 | 1,616,063,260,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2069",
"html_url": "https://github.com/huggingface/datasets/pull/2069",
"diff_url": "https://github.com/huggingface/datasets/pull/2069.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2069.patch",
"merged_at": 1616063260000
} | Add and fix docstring for `NamedSplit`, which was missing. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2069/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2068/comments | https://api.github.com/repos/huggingface/datasets/issues/2068/events | https://github.com/huggingface/datasets/issues/2068 | 833,602,832 | MDU6SXNzdWU4MzM2MDI4MzI= | 2,068 | PyTorch not available error on SageMaker GPU docker though it is installed | {
"login": "sivakhno",
"id": 1651457,
"node_id": "MDQ6VXNlcjE2NTE0NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1651457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sivakhno",
"html_url": "https://github.com/sivakhno",
"followers_url": "https://api.github.com/users/sivakhno/followers",
"following_url": "https://api.github.com/users/sivakhno/following{/other_user}",
"gists_url": "https://api.github.com/users/sivakhno/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sivakhno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sivakhno/subscriptions",
"organizations_url": "https://api.github.com/users/sivakhno/orgs",
"repos_url": "https://api.github.com/users/sivakhno/repos",
"events_url": "https://api.github.com/users/sivakhno/events{/privacy}",
"received_events_url": "https://api.github.com/users/sivakhno/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,975,467,000 | 1,623,646,050,000 | 1,623,646,050,000 | NONE | null | null | null | I get an error when running data loading using the SageMaker SDK
```
File "main.py", line 34, in <module>
run_training()
File "main.py", line 25, in run_training
dm.setup('fit')
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
return fn(*args, **kwargs)
File "/opt/ml/code/data_module.py", line 103, in setup
self.dataset[split].set_format(type="torch", columns=self.columns)
File "/opt/conda/lib/python3.6/site-packages/datasets/fingerprint.py", line 337, in wrapper
out = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 995, in set_format
_ = get_formatter(type, **format_kwargs)
File "/opt/conda/lib/python3.6/site-packages/datasets/formatting/__init__.py", line 114, in get_formatter
raise _FORMAT_TYPES_ALIASES_UNAVAILABLE[format_type]
ValueError: PyTorch needs to be installed to be able to return PyTorch tensors.
```
when trying to execute dataset loading using this notebook https://github.com/PyTorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb, specifically lines
```
self.columns = [c for c in self.dataset[split].column_names if c in self.loader_columns]
self.dataset[split].set_format(type="torch", columns=self.columns)
```
The SageMaker docker image used is 763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.4.0-gpu-py3.
By running the container interactively, I have checked that torch loading completes successfully by executing `https://github.com/huggingface/datasets/blob/master/src/datasets/config.py#L39`.
Also, as the first lines in the data loading module, I have
```
import os
os.environ["USE_TF"] = "0"
os.environ["USE_TORCH"] = "1"
```
But unfortunately the error still persists. Any suggestions would be appreciated as I am stuck.
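(One diagnostic worth running inside the job container — an illustrative sketch, not from the original report:)
```python
import importlib.util

import datasets.config

# set_format("torch") keys off datasets' availability flag, so check both the
# raw import machinery and what datasets concluded at import time.
print(importlib.util.find_spec("torch"))  # None here would explain the error
print(datasets.config.TORCH_AVAILABLE)    # the flag the formatter checks
```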
Many Thanks!
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2068/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2067/comments | https://api.github.com/repos/huggingface/datasets/issues/2067/events | https://github.com/huggingface/datasets/issues/2067 | 833,559,940 | MDU6SXNzdWU4MzM1NTk5NDA= | 2,067 | Multiprocessing windows error | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,972,348,000 | 1,628,099,948,000 | 1,628,099,948,000 | NONE | null | null | null | As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop.
For example at the map_to_array part.
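(As an aside — on Windows, multiprocessing uses the "spawn" start method, so any `map(..., num_proc=...)` call has to live under a main-module guard; the sketch below is illustrative, not the blog's exact code:)
```python
from datasets import load_dataset

def map_to_array(batch):
    # placeholder body; the blog post decodes audio files here
    batch["n_chars"] = len(batch["sentence"])
    return batch

if __name__ == "__main__":  # required on Windows for num_proc > 1
    ds = load_dataset("common_voice", "de", split="train")
    ds = ds.map(map_to_array, num_proc=4)
```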
An error occurs because the cache file already exists and Windows throws an error. After this, the log gets stuck in a loop. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2067/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2066/comments | https://api.github.com/repos/huggingface/datasets/issues/2066/events | https://github.com/huggingface/datasets/pull/2066 | 833,480,551 | MDExOlB1bGxSZXF1ZXN0NTk0NDcwMjEz | 2,066 | Fix docstring rendering of Dataset/DatasetDict.from_csv args | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,965,790,000 | 1,615,972,881,000 | 1,615,972,881,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2066",
"html_url": "https://github.com/huggingface/datasets/pull/2066",
"diff_url": "https://github.com/huggingface/datasets/pull/2066.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2066.patch",
"merged_at": 1615972881000
} | Fix the docstring rendering of Dataset/DatasetDict.from_csv args. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2066/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2065/comments | https://api.github.com/repos/huggingface/datasets/issues/2065/events | https://github.com/huggingface/datasets/issues/2065 | 833,291,432 | MDU6SXNzdWU4MzMyOTE0MzI= | 2,065 | Only user permission of saved cache files, not group | {
"login": "lorr1",
"id": 57237365,
"node_id": "MDQ6VXNlcjU3MjM3MzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/57237365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lorr1",
"html_url": "https://github.com/lorr1",
"followers_url": "https://api.github.com/users/lorr1/followers",
"following_url": "https://api.github.com/users/lorr1/following{/other_user}",
"gists_url": "https://api.github.com/users/lorr1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lorr1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorr1/subscriptions",
"organizations_url": "https://api.github.com/users/lorr1/orgs",
"repos_url": "https://api.github.com/users/lorr1/repos",
"events_url": "https://api.github.com/users/lorr1/events{/privacy}",
"received_events_url": "https://api.github.com/users/lorr1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,940,422,000 | 1,620,629,129,000 | 1,620,629,129,000 | NONE | null | null | null | Hello,
It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user's permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue, as we have to continually reset the permissions of the files. Do you know any way around this, or a way to correctly set the permissions? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2065/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2064/comments | https://api.github.com/repos/huggingface/datasets/issues/2064/events | https://github.com/huggingface/datasets/pull/2064 | 833,002,360 | MDExOlB1bGxSZXF1ZXN0NTk0MDczOTQ1 | 2,064 | Fix ted_talks_iwslt version error | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,913,025,000 | 1,615,917,608,000 | 1,615,917,608,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2064",
"html_url": "https://github.com/huggingface/datasets/pull/2064",
"diff_url": "https://github.com/huggingface/datasets/pull/2064.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2064.patch",
"merged_at": 1615917607000
} | This PR fixes the bug where the version argument would be passed twice if the dataset configuration was created on the fly.
Fixes #2059 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2064/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2063/comments | https://api.github.com/repos/huggingface/datasets/issues/2063/events | https://github.com/huggingface/datasets/pull/2063 | 832,993,705 | MDExOlB1bGxSZXF1ZXN0NTk0MDY2NzI5 | 2,063 | [Common Voice] Adapt dataset script so that no manual data download is actually needed | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,912,424,000 | 1,615,974,172,000 | 1,615,974,157,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2063",
"html_url": "https://github.com/huggingface/datasets/pull/2063",
"diff_url": "https://github.com/huggingface/datasets/pull/2063.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2063.patch",
"merged_at": 1615974157000
} | This PR changes the dataset script so that no manual data dir is needed anymore. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2063/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2062 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2062/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2062/comments | https://api.github.com/repos/huggingface/datasets/issues/2062/events | https://github.com/huggingface/datasets/pull/2062 | 832,625,483 | MDExOlB1bGxSZXF1ZXN0NTkzNzUyNTMz | 2,062 | docs: fix missing quotation | {
"login": "neal2018",
"id": 46561493,
"node_id": "MDQ6VXNlcjQ2NTYxNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/46561493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neal2018",
"html_url": "https://github.com/neal2018",
"followers_url": "https://api.github.com/users/neal2018/followers",
"following_url": "https://api.github.com/users/neal2018/following{/other_user}",
"gists_url": "https://api.github.com/users/neal2018/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neal2018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neal2018/subscriptions",
"organizations_url": "https://api.github.com/users/neal2018/orgs",
"repos_url": "https://api.github.com/users/neal2018/repos",
"events_url": "https://api.github.com/users/neal2018/events{/privacy}",
"received_events_url": "https://api.github.com/users/neal2018/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,889,274,000 | 1,615,972,917,000 | 1,615,972,917,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2062",
"html_url": "https://github.com/huggingface/datasets/pull/2062",
"diff_url": "https://github.com/huggingface/datasets/pull/2062.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2062.patch",
"merged_at": 1615972916000
} | The JSON code snippet in the docs is missing a quotation mark. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2062/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2061/comments | https://api.github.com/repos/huggingface/datasets/issues/2061/events | https://github.com/huggingface/datasets/issues/2061 | 832,596,228 | MDU6SXNzdWU4MzI1OTYyMjg= | 2,061 | Cannot load udpos subsets from xtreme dataset using load_dataset() | {
"login": "adzcodez",
"id": 55791365,
"node_id": "MDQ6VXNlcjU1NzkxMzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/55791365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adzcodez",
"html_url": "https://github.com/adzcodez",
"followers_url": "https://api.github.com/users/adzcodez/followers",
"following_url": "https://api.github.com/users/adzcodez/following{/other_user}",
"gists_url": "https://api.github.com/users/adzcodez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adzcodez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adzcodez/subscriptions",
"organizations_url": "https://api.github.com/users/adzcodez/orgs",
"repos_url": "https://api.github.com/users/adzcodez/repos",
"events_url": "https://api.github.com/users/adzcodez/events{/privacy}",
"received_events_url": "https://api.github.com/users/adzcodez/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,887,133,000 | 1,624,017,251,000 | 1,624,017,250,000 | NONE | null | null | null | Hello,
I am trying to load the udpos English subset of the xtreme dataset, but it fails during loading. I am using datasets v1.4.1, installed via pip. Other udpos languages fail in the same way, though loading a different subset altogether (such as XNLI) works without issue. I have also tried on Colab and hit the same error.
A minimal reproducible example:
```python
from datasets import load_dataset
dataset = load_dataset('xtreme', 'udpos.English')
```
The error is:
`KeyError: '_'`
The full traceback is:
```
KeyError Traceback (most recent call last)
<ipython-input-5-7181359ea09d> in <module>
1 from datasets import load_dataset
----> 2 dataset = load_dataset('xtreme', 'udpos.English')
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
738
739 # Download and prepare data
--> 740 builder_instance.download_and_prepare(
741 download_config=download_config,
742 download_mode=download_mode,
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
576 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
577 if not downloaded_from_gcs:
--> 578 self._download_and_prepare(
579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
580 )
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
654 try:
655 # Prepare split will record examples associated to the split
--> 656 self._prepare_split(split_generator, **prepare_split_kwargs)
657 except OSError as e:
658 raise OSError(
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _prepare_split(self, split_generator)
977 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose
978 ):
--> 979 example = self.info.features.encode_example(record)
980 writer.write(example)
981 finally:
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example)
946 def encode_example(self, example):
947 example = cast_to_python_objects(example)
--> 948 return encode_nested_example(self, example)
949
950 def encode_batch(self, batch):
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj)
840 # Nested structures: we allow dict, list/tuples, sequences
841 if isinstance(schema, dict):
--> 842 return {
843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
844 }
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in <dictcomp>(.0)
841 if isinstance(schema, dict):
842 return {
--> 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
844 }
845 elif isinstance(schema, (list, tuple)):
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj)
868 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
869 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)):
--> 870 return schema.encode_example(obj)
871 # Other object should be directly convertible to a native Arrow type (like Translation and Translation)
872 return obj
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example_data)
647 # If a string is given, convert to associated integer
648 if isinstance(example_data, str):
--> 649 example_data = self.str2int(example_data)
650
651 # Allowing -1 to mean no label.
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in str2int(self, values)
605 if value not in self._str2int:
606 value = value.strip()
--> 607 output.append(self._str2int[str(value)])
608 else:
609 # No names provided, try to integerize
KeyError: '_'
```
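The last frames point at `ClassLabel.str2int`: the lookup fails because the tag `_` is not among the configured label names. A minimal, hedged illustration of the same failure mode (the shortened label list below is purely illustrative, not the actual udpos tagset):
```python
from datasets import ClassLabel

# an illustrative tag set that, like the udpos config, lacks the "_" tag
pos_tags = ClassLabel(names=["ADJ", "ADP", "ADV", "NOUN", "VERB"])
pos_tags.str2int("_")  # fails with a lookup error, mirroring KeyError: '_'
```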
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2061/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2060/comments | https://api.github.com/repos/huggingface/datasets/issues/2060/events | https://github.com/huggingface/datasets/pull/2060 | 832,588,591 | MDExOlB1bGxSZXF1ZXN0NTkzNzIxNzcx | 2,060 | Filtering refactor | {
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,886,610,000 | 1,634,116,144,000 | 1,634,116,143,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2060",
"html_url": "https://github.com/huggingface/datasets/pull/2060",
"diff_url": "https://github.com/huggingface/datasets/pull/2060.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2060.patch",
"merged_at": null
} | fix https://github.com/huggingface/datasets/issues/2032
Benchmarking is somewhat inconclusive so far; currently running on `bookcorpus` with:
```python
import time
from datasets import load_dataset

bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
This branch does it in 233 seconds; master takes 1409 seconds. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2060/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2060/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2059/comments | https://api.github.com/repos/huggingface/datasets/issues/2059/events | https://github.com/huggingface/datasets/issues/2059 | 832,579,156 | MDU6SXNzdWU4MzI1NzkxNTY= | 2,059 | Error while following docs to load the `ted_talks_iwslt` dataset | {
"login": "ekdnam",
"id": 40426312,
"node_id": "MDQ6VXNlcjQwNDI2MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/40426312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekdnam",
"html_url": "https://github.com/ekdnam",
"followers_url": "https://api.github.com/users/ekdnam/followers",
"following_url": "https://api.github.com/users/ekdnam/following{/other_user}",
"gists_url": "https://api.github.com/users/ekdnam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekdnam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekdnam/subscriptions",
"organizations_url": "https://api.github.com/users/ekdnam/orgs",
"repos_url": "https://api.github.com/users/ekdnam/repos",
"events_url": "https://api.github.com/users/ekdnam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekdnam/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,885,939,000 | 1,615,917,631,000 | 1,615,917,607,000 | NONE | null | null | null | I am currently trying to load the `ted_talks_iwslt` dataset into google colab.
The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so.
```python
dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
```
Executing it results in the error attached below.
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-7dcc67154ef9> in <module>()
----> 1 dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
4 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
730 hash=hash,
731 features=features,
--> 732 **config_kwargs,
733 )
734
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, writer_batch_size, *args, **kwargs)
927
928 def __init__(self, *args, writer_batch_size=None, **kwargs):
--> 929 super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)
930 # Batch size used by the ArrowWriter
931 # It defines the number of samples that are kept in memory before writing them
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)
241 name,
242 custom_features=features,
--> 243 **config_kwargs,
244 )
245
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
337 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION:
338 config_kwargs["version"] = self.VERSION
--> 339 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs)
340
341 # otherwise use the config_kwargs to overwrite the attributes
/root/.cache/huggingface/modules/datasets_modules/datasets/ted_talks_iwslt/024d06b1376b361e59245c5878ab8acf9a7576d765f2d0077f61751158e60914/ted_talks_iwslt.py in __init__(self, language_pair, year, **kwargs)
219 description=description,
220 version=datasets.Version("1.1.0", ""),
--> 221 **kwargs,
222 )
223
TypeError: __init__() got multiple values for keyword argument 'version'
```
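The last two frames show the clash: `_create_builder_config` injects `version` into `config_kwargs`, while the script's config `__init__` also passes `version=datasets.Version("1.1.0", "")` explicitly. A minimal, hedged illustration of the same failure mode (a toy function, not the library code):
```python
def make_config(**kwargs):
    # passes `version` explicitly while also forwarding caller kwargs
    return dict(version="1.1.0", **kwargs)

make_config(version="2014")
# TypeError: dict() got multiple values for keyword argument 'version'
```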
How to resolve this?
PS: Thanks a lot @huggingface team for creating this great library! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2059/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2058/comments | https://api.github.com/repos/huggingface/datasets/issues/2058/events | https://github.com/huggingface/datasets/issues/2058 | 832,159,844 | MDU6SXNzdWU4MzIxNTk4NDQ= | 2,058 | Is it possible to convert a `tfds` to HuggingFace `dataset`? | {
"login": "abarbosa94",
"id": 6608232,
"node_id": "MDQ6VXNlcjY2MDgyMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abarbosa94",
"html_url": "https://github.com/abarbosa94",
"followers_url": "https://api.github.com/users/abarbosa94/followers",
"following_url": "https://api.github.com/users/abarbosa94/following{/other_user}",
"gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions",
"organizations_url": "https://api.github.com/users/abarbosa94/orgs",
"repos_url": "https://api.github.com/users/abarbosa94/repos",
"events_url": "https://api.github.com/users/abarbosa94/events{/privacy}",
"received_events_url": "https://api.github.com/users/abarbosa94/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,839,527,000 | 1,615,839,527,000 | null | CONTRIBUTOR | null | null | null | I was having some weird bugs with the `C4` dataset version of HuggingFace, so I decided to try to download `C4` from `tfds`. I would like to know if it is possible to convert a tfds dataset to HuggingFace dataset format :)
I can also open a new issue reporting the bug I'm receiving with `datasets.load_dataset('c4','en')` in the future if you think that it would be useful.
Thanks!
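For reference, one possible conversion path, as a hedged sketch rather than an official API. The `c4/en` config string and the 1000-example slice are illustrative assumptions, and note that `c4` itself needs a beam-based preparation step in tfds, so a lighter dataset may be easier to test with:
```python
import tensorflow_datasets as tfds
from datasets import Dataset

# materialize a small slice of the tfds split as a pandas DataFrame
tf_ds = tfds.load("c4/en", split="train")
df = tfds.as_dataframe(tf_ds.take(1000))

# wrap the DataFrame as a HuggingFace Dataset
hf_ds = Dataset.from_pandas(df)
```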
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2058/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2057/comments | https://api.github.com/repos/huggingface/datasets/issues/2057/events | https://github.com/huggingface/datasets/pull/2057 | 832,120,522 | MDExOlB1bGxSZXF1ZXN0NTkzMzMzMjM0 | 2,057 | update link to ZEST dataset | {
"login": "matt-peters",
"id": 619844,
"node_id": "MDQ6VXNlcjYxOTg0NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/619844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matt-peters",
"html_url": "https://github.com/matt-peters",
"followers_url": "https://api.github.com/users/matt-peters/followers",
"following_url": "https://api.github.com/users/matt-peters/following{/other_user}",
"gists_url": "https://api.github.com/users/matt-peters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matt-peters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matt-peters/subscriptions",
"organizations_url": "https://api.github.com/users/matt-peters/orgs",
"repos_url": "https://api.github.com/users/matt-peters/repos",
"events_url": "https://api.github.com/users/matt-peters/events{/privacy}",
"received_events_url": "https://api.github.com/users/matt-peters/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,836,177,000 | 1,615,914,388,000 | 1,615,914,388,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2057",
"html_url": "https://github.com/huggingface/datasets/pull/2057",
"diff_url": "https://github.com/huggingface/datasets/pull/2057.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2057.patch",
"merged_at": 1615914388000
} | Updating the link as the original one is no longer working. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2057/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2057/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2056/comments | https://api.github.com/repos/huggingface/datasets/issues/2056/events | https://github.com/huggingface/datasets/issues/2056 | 831,718,397 | MDU6SXNzdWU4MzE3MTgzOTc= | 2,056 | issue with opus100/en-fr dataset | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,807,962,000 | 1,615,909,740,000 | 1,615,909,739,000 | NONE | null | null | null | Hi
I am running the run_mlm.py script from the huggingface repo on the opus100/fr-en pair, and I am getting the error below. Note that this error occurs only for this pair and not for the other pairs. Any idea why this is occurring, and how I can solve it?
Thanks a lot @lhoestq for your help in advance.
```
thread '<unnamed>' panicked at 'index out of bounds: the len is 617 but the index is 617', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/normalizer.rs:382:21
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
63%|██████████████████████████████████████████████████████████▊ | 626/1000 [00:27<00:16, 22.69ba/s]
Traceback (most recent call last):
File "run_mlm.py", line 550, in <module>
main()
File "run_mlm.py", line 412, in main
in zip(data_args.dataset_name, data_args.dataset_config_name)]
File "run_mlm.py", line 411, in <listcomp>
logger) for dataset_name, dataset_config_name\
File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 96, in get_tokenized_dataset
load_from_cache_file=not data_args.overwrite_cache,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in map
for k, dataset in self.items()
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in <dictcomp>
for k, dataset in self.items()
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1309, in map
update_data=update_data,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 204, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/fingerprint.py", line 337, in wrapper
out = func(self, *args, **kwargs)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1574, in _map_single
batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1490, in apply_function_on_filtered_inputs
function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 89, in tokenize_function
return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2347, in __call__
**kwargs,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2532, in batch_encode_plus
**kwargs,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 384, in _batch_encode_plus
is_pretokenized=is_split_into_words,
pyo3_runtime.PanicException: index out of bounds: the len is 617 but the index is 617
```
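One way to narrow this down is a hedged debugging sketch that reuses the names from the traceback (`tokenizer`, `text_column_name`) and assumes the raw data is loaded as `raw_datasets`; it tokenizes one example at a time to isolate the row that triggers the panic:
```python
for i, example in enumerate(raw_datasets["train"]):
    try:
        tokenizer(example[text_column_name], return_special_tokens_mask=True)
    except BaseException:  # pyo3 panics surface as a BaseException subclass
        print(i, repr(example[text_column_name])[:200])
        break
```
 | {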
"url": "https://api.github.com/repos/huggingface/datasets/issues/2056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2056/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2055/comments | https://api.github.com/repos/huggingface/datasets/issues/2055/events | https://github.com/huggingface/datasets/issues/2055 | 831,684,312 | MDU6SXNzdWU4MzE2ODQzMTI= | 2,055 | is there a way to override a dataset object saved with save_to_disk? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,805,453,000 | 1,616,385,977,000 | 1,616,385,977,000 | NONE | null | null | null | At the moment, when I use `save_to_disk`, it uses an arbitrary name for the Arrow file. Is there a way to override such an object?
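A possible workaround, as a hedged sketch with `my_dataset_dir` and `updated_ds` as illustrative names: `save_to_disk` writes a whole directory, so removing the old directory and re-saving to the same path effectively overrides the stored dataset:
```python
import shutil
from datasets import load_from_disk

shutil.rmtree("my_dataset_dir", ignore_errors=True)  # drop the old copy
updated_ds.save_to_disk("my_dataset_dir")            # write the new one
reloaded = load_from_disk("my_dataset_dir")
```
 | {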
"url": "https://api.github.com/repos/huggingface/datasets/issues/2055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2055/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2054/comments | https://api.github.com/repos/huggingface/datasets/issues/2054/events | https://github.com/huggingface/datasets/issues/2054 | 831,597,665 | MDU6SXNzdWU4MzE1OTc2NjU= | 2,054 | Could not find file for ZEST dataset | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,799,518,000 | 1,620,034,224,000 | 1,620,034,224,000 | CONTRIBUTOR | null | null | null | I am trying to use the ZEST dataset from AllenAI with the code below in Colab:
```
!pip install -q datasets
from datasets import load_dataset
dataset = load_dataset("zest")
```
I am getting the following error:
```
Using custom data configuration default
Downloading and preparing dataset zest/default (download: 5.53 MiB, generated: 19.96 MiB, post-processed: Unknown size, total: 25.48 MiB) to /root/.cache/huggingface/datasets/zest/default/0.0.0/1f7a230fbfc964d979bbca0f0130fbab3259fce547ee758ad8aa4f9c9bec6cca...
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-6-18dbbc1a4b8a> in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("zest")
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
612 )
613 elif response is not None and response.status_code == 404:
--> 614 raise FileNotFoundError("Couldn't find file at {}".format(url))
615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
616 raise ConnectionError("Couldn't reach {}".format(url))
FileNotFoundError: Couldn't find file at https://ai2-datasets.s3-us-west-2.amazonaws.com/zest/zest.zip
```
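The hosted zip has moved; see PR #2057, which updates the link. Until a release ships the fix, a hedged workaround sketch is to load the dataset script from the repository's master branch via the `script_version` parameter of `load_dataset` (available in the 1.x release line):
```python
from datasets import load_dataset

dataset = load_dataset("zest", script_version="master")
```
 | {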
"url": "https://api.github.com/repos/huggingface/datasets/issues/2054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2054/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2053/comments | https://api.github.com/repos/huggingface/datasets/issues/2053/events | https://github.com/huggingface/datasets/pull/2053 | 831,151,728 | MDExOlB1bGxSZXF1ZXN0NTkyNTM4ODY2 | 2,053 | Add bAbI QA tasks | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,727,079,000 | 1,617,021,708,000 | 1,617,021,708,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2053",
"html_url": "https://github.com/huggingface/datasets/pull/2053",
"diff_url": "https://github.com/huggingface/datasets/pull/2053.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2053.patch",
"merged_at": 1617021708000
} | - **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.*
- **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf)
- **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/)
- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.
**Note**: I have currently added all 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KB, together they come to around 1.3 MB across all configurations, which may be too heavy for the repository. Let me know what should be done.
Thanks :)
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2053/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2053/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2052/comments | https://api.github.com/repos/huggingface/datasets/issues/2052/events | https://github.com/huggingface/datasets/issues/2052 | 831,135,704 | MDU6SXNzdWU4MzExMzU3MDQ= | 2,052 | Timit_asr dataset repeats examples | {
"login": "fermaat",
"id": 7583522,
"node_id": "MDQ6VXNlcjc1ODM1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7583522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fermaat",
"html_url": "https://github.com/fermaat",
"followers_url": "https://api.github.com/users/fermaat/followers",
"following_url": "https://api.github.com/users/fermaat/following{/other_user}",
"gists_url": "https://api.github.com/users/fermaat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fermaat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fermaat/subscriptions",
"organizations_url": "https://api.github.com/users/fermaat/orgs",
"repos_url": "https://api.github.com/users/fermaat/repos",
"events_url": "https://api.github.com/users/fermaat/events{/privacy}",
"received_events_url": "https://api.github.com/users/fermaat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,722,223,000 | 1,615,804,636,000 | 1,615,804,636,000 | NONE | null | null | null | **Summary**
When loading the timit_asr dataset on datasets 1.4+, every row in the dataset is the same.
**Steps to reproduce**
As an example, this snippet prints the transcriptions from the training split:
```
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
timit['train']['text']
#['Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
```
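A quick way to confirm the duplication, as a hedged sketch reusing `timit` from the snippet above (the expected unique-sentence count is approximate):
```python
texts = timit["train"]["text"]
# TIMIT's train split should contain thousands of distinct sentences;
# on the affected versions this prints a single unique value
print(len(texts), len(set(texts)))
```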
The same behavior happens for other columns
**Expected behavior:**
Each row should hold a different example, as in the actual timit_asr dataset.
**Actual behavior:**
When loading the timit_asr dataset on datasets 1.4+, every row in the dataset is the same. I've checked datasets 1.3 and the rows are different there.
**Debug info**
Streamlit version: (get it with $ streamlit version)
Python version: Python 3.6.12
Using Conda? PipEnv? PyEnv? Pex? Using pip
OS version: Centos-release-7-9.2009.1.el7.centos.x86_64
**Additional information**
You can check the same behavior on https://huggingface.co/datasets/viewer/?dataset=timit_asr | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2052/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2052/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2051/comments | https://api.github.com/repos/huggingface/datasets/issues/2051/events | https://github.com/huggingface/datasets/pull/2051 | 831,027,021 | MDExOlB1bGxSZXF1ZXN0NTkyNDQ2MDU1 | 2,051 | Add MDD Dataset | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,680,065,000 | 1,616,152,544,000 | 1,616,149,919,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2051",
"html_url": "https://github.com/huggingface/datasets/pull/2051",
"diff_url": "https://github.com/huggingface/datasets/pull/2051.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2051.patch",
"merged_at": 1616149919000
} | - **Name:** *MDD Dataset*
- **Description:** The Movie Dialog dataset (MDD) is designed to measure how well models can perform at goal and non-goal orientated dialog centered around the topic of movies (question answering, recommendation and discussion), from various movie review sources such as MovieLens and OMDb.
- **Paper:** [arXiv](https://arxiv.org/pdf/1511.06931.pdf)
- **Data:** https://research.fb.com/downloads/babi/
- **Motivation:** This is one of the popular dialog datasets, a part of Facebook Research's "bAbI project".
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
**Note**: I haven't included the following from the data files: `entities` (the file containing list of all entities in the first three subtasks), `dictionary`(the dictionary of words they use in their models), `movie_kb`(contains the knowledge base of information about the movies, actors and other entities that are mentioned in the dialogs). Please let me know if those are needed, and if yes, should I make separate configurations for them? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2051/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2050 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2050/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2050/comments | https://api.github.com/repos/huggingface/datasets/issues/2050/events | https://github.com/huggingface/datasets/issues/2050 | 831,006,551 | MDU6SXNzdWU4MzEwMDY1NTE= | 2,050 | Build custom dataset to fine-tune Wav2Vec2 | {
"login": "Omarnabk",
"id": 72882909,
"node_id": "MDQ6VXNlcjcyODgyOTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/72882909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Omarnabk",
"html_url": "https://github.com/Omarnabk",
"followers_url": "https://api.github.com/users/Omarnabk/followers",
"following_url": "https://api.github.com/users/Omarnabk/following{/other_user}",
"gists_url": "https://api.github.com/users/Omarnabk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Omarnabk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Omarnabk/subscriptions",
"organizations_url": "https://api.github.com/users/Omarnabk/orgs",
"repos_url": "https://api.github.com/users/Omarnabk/repos",
"events_url": "https://api.github.com/users/Omarnabk/events{/privacy}",
"received_events_url": "https://api.github.com/users/Omarnabk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,672,870,000 | 1,615,800,448,000 | 1,615,800,448,000 | NONE | null | null | null | Thank you for your recent tutorial on how to finetune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcript and their audio files) in a JSON file.
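For reference, a hedged sketch of one way to do this; the file name `manifest.jsonl` and the `path`/`text` field names are assumptions about the manifest layout, not something from the tutorial:
```python
import soundfile as sf
from datasets import load_dataset

# each JSON line is expected to look like {"path": "clip.wav", "text": "..."}
ds = load_dataset("json", data_files={"train": "manifest.jsonl"})["train"]

def add_audio(batch):
    speech, sampling_rate = sf.read(batch["path"])
    batch["speech"] = speech
    batch["sampling_rate"] = sampling_rate
    return batch

ds = ds.map(add_audio)
```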
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2050/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2049/comments | https://api.github.com/repos/huggingface/datasets/issues/2049/events | https://github.com/huggingface/datasets/pull/2049 | 830,978,687 | MDExOlB1bGxSZXF1ZXN0NTkyNDE2MzQ0 | 2,049 | Fix text-classification tags | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,665,102,000 | 1,615,909,666,000 | 1,615,909,666,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2049",
"html_url": "https://github.com/huggingface/datasets/pull/2049",
"diff_url": "https://github.com/huggingface/datasets/pull/2049.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2049.patch",
"merged_at": 1615909666000
} | There are different tags for text classification right now: `text-classification` and `text_classification`.
This PR fixes it.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2049/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2048/comments | https://api.github.com/repos/huggingface/datasets/issues/2048/events | https://github.com/huggingface/datasets/issues/2048 | 830,953,431 | MDU6SXNzdWU4MzA5NTM0MzE= | 2,048 | github is not always available - probably need a back up | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,658,612,000 | 1,648,826,830,000 | 1,648,826,830,000 | MEMBER | null | null | null | Yesterday morning GitHub wasn't working:
```
:/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
--2021-03-12 18:35:59--  https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2021-03-12 18:36:11 ERROR 500: Internal Server Error.
```
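In the meantime, a client-side retry with exponential backoff can paper over transient 5xx errors. A hedged sketch, not the library's actual download logic:
```python
import time

import requests

def fetch_with_retry(url, retries=5, backoff=2.0):
    for attempt in range(retries):
        response = requests.get(url)
        if response.status_code == 200:
            return response.content
        time.sleep(backoff * 2 ** attempt)  # back off on transient errors
    response.raise_for_status()
```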
Suggestion: have a failover system that replicates the data on another host, and fall back to that replica if GitHub isn't reachable? Perhaps GitHub can be the master and the replica a slave, so there is only one true source. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2048/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2047/comments | https://api.github.com/repos/huggingface/datasets/issues/2047/events | https://github.com/huggingface/datasets/pull/2047 | 830,626,430 | MDExOlB1bGxSZXF1ZXN0NTkyMTI2NzQ3 | 2,047 | Multilingual dIalogAct benchMark (miam) | {
"login": "eusip",
"id": 1551356,
"node_id": "MDQ6VXNlcjE1NTEzNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eusip",
"html_url": "https://github.com/eusip",
"followers_url": "https://api.github.com/users/eusip/followers",
"following_url": "https://api.github.com/users/eusip/following{/other_user}",
"gists_url": "https://api.github.com/users/eusip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eusip/subscriptions",
"organizations_url": "https://api.github.com/users/eusip/orgs",
"repos_url": "https://api.github.com/users/eusip/repos",
"events_url": "https://api.github.com/users/eusip/events{/privacy}",
"received_events_url": "https://api.github.com/users/eusip/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,590,175,000 | 1,616,495,794,000 | 1,616,150,833,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2047",
"html_url": "https://github.com/huggingface/datasets/pull/2047",
"diff_url": "https://github.com/huggingface/datasets/pull/2047.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2047.patch",
"merged_at": 1616150833000
} | My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is associated with a publication currently under review. We will update the dataset with full citations once the review period is over. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2047/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2047/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2046/comments | https://api.github.com/repos/huggingface/datasets/issues/2046/events | https://github.com/huggingface/datasets/issues/2046 | 830,423,033 | MDU6SXNzdWU4MzA0MjMwMzM= | 2,046 | add_faiss_index gets very slow when doing it iteratively | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,580,838,000 | 1,616,624,951,000 | 1,616,624,951,000 | NONE | null | null | null | As the code below suggests, I want to run add_faiss_index on every nth iteration of the training loop. I have 7.2 million documents. Usually it takes 2.5 hours (if I run it as a separate process, similar to the script given in rag/use_own_knowledge_dataset.py). Now this usually takes 5 hrs. Is this normal? Is there any way to make this process faster?
@lhoestq
```
def training_step(self, batch, batch_idx) -> Dict:
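        # Re-encode the knowledge base and rebuild the FAISS index every 5 training batches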
if (not batch_idx==0) and (batch_idx%5==0):
print("******************************************************")
ctx_encoder=self.trainer.model.module.module.model.rag.ctx_encoder
            model_copy = type(ctx_encoder)(self.config_dpr)  # get a new instance; this will be loaded on the CPU
model_copy.load_state_dict(ctx_encoder.state_dict()) # copy weights and stuff
list_of_gpus = ['cuda:2','cuda:3']
c_dir='/custom/cache/dir'
kb_dataset = load_dataset("csv", data_files=[self.custom_config.csv_path], split="train", delimiter="\t", column_names=["title", "text"],cache_dir=c_dir)
print(kb_dataset)
            n = len(list_of_gpus)  # number of dedicated GPUs
kb_list=[kb_dataset.shard(n, i, contiguous=True) for i in range(n)]
#kb_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/haha-dir')
print(self.trainer.global_rank)
dataset_shards = self.re_encode_kb(model_copy.to(device=list_of_gpus[self.trainer.global_rank]),kb_list[self.trainer.global_rank])
output = [None for _ in list_of_gpus]
#self.trainer.accelerator_connector.accelerator.barrier("embedding_process")
dist.all_gather_object(output, dataset_shards)
            # this creates and re-initializes the new index
if (self.trainer.global_rank==0): #saving will be done in the main process
combined_dataset = concatenate_datasets(output)
passages_path =self.config.passages_path
logger.info("saving the dataset with ")
#combined_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/MY-Passage')
combined_dataset.save_to_disk(passages_path)
logger.info("Add faiss index to the dataset that consist of embeddings")
embedding_dataset=combined_dataset
index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT)
embedding_dataset.add_faiss_index("embeddings", custom_index=index)
            embedding_dataset.get_index("embeddings").save(self.config.index_path)
```
 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2046/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2045/comments | https://api.github.com/repos/huggingface/datasets/issues/2045/events | https://github.com/huggingface/datasets/pull/2045 | 830,351,527 | MDExOlB1bGxSZXF1ZXN0NTkxODc2Mjcz | 2,045 | Preserve column ordering in Dataset.rename_column | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,573,607,000 | 1,615,906,085,000 | 1,615,905,305,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2045",
"html_url": "https://github.com/huggingface/datasets/pull/2045",
"diff_url": "https://github.com/huggingface/datasets/pull/2045.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2045.patch",
"merged_at": 1615905305000
} | Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns:
```python
>>> from datasets import Dataset
>>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]})
>>> d
Dataset({
features: ['sentences', 'label'],
num_rows: 2
})
>>> d.rename_column('sentences', 'text')
Dataset({
features: ['label', 'text'],
num_rows: 2
})
```
This PR fixes this. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2045/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2044 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2044/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2044/comments | https://api.github.com/repos/huggingface/datasets/issues/2044/events | https://github.com/huggingface/datasets/pull/2044 | 830,339,905 | MDExOlB1bGxSZXF1ZXN0NTkxODY2NzM1 | 2,044 | Add CBT dataset | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,572,259,000 | 1,616,152,213,000 | 1,616,149,755,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2044",
"html_url": "https://github.com/huggingface/datasets/pull/2044",
"diff_url": "https://github.com/huggingface/datasets/pull/2044.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2044.patch",
"merged_at": 1616149755000
} | This PR adds the [CBT Dataset](https://arxiv.org/abs/1511.02301).
Note that I have also added the `raw` dataset as a separate configuration. I couldn't find a suitable "task" for it in the YAML tags.
The dummy files have one example each, as the examples are slightly big. For the `raw` configuration, I just used the top few lines, because the examples are entire books and would take up a lot of space.
Let me know in case of any issues. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2044/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2043/comments | https://api.github.com/repos/huggingface/datasets/issues/2043/events | https://github.com/huggingface/datasets/pull/2043 | 830,279,098 | MDExOlB1bGxSZXF1ZXN0NTkxODE1ODAz | 2,043 | Support pickle protocol for dataset splits defined as ReadInstruction | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,566,911,000 | 1,615,904,738,000 | 1,615,903,505,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2043",
"html_url": "https://github.com/huggingface/datasets/pull/2043",
"diff_url": "https://github.com/huggingface/datasets/pull/2043.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2043.patch",
"merged_at": 1615903505000
} | Fixes #2022 (+ some style fixes) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2043/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2042 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2042/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2042/comments | https://api.github.com/repos/huggingface/datasets/issues/2042/events | https://github.com/huggingface/datasets/pull/2042 | 830,190,276 | MDExOlB1bGxSZXF1ZXN0NTkxNzQwNzQ3 | 2,042 | Fix arrow memory checks issue in tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,560,592,000 | 1,615,561,463,000 | 1,615,561,462,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2042",
"html_url": "https://github.com/huggingface/datasets/pull/2042",
"diff_url": "https://github.com/huggingface/datasets/pull/2042.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2042.patch",
"merged_at": 1615561462000
} | The tests currently fail on `master` because the arrow memory verification doesn't return the expected memory evolution when loading an arrow table in memory.
From my experiments, the tests fail only when the full test suite is run.
This made me think that some Arrow objects from other tests were not freeing their memory in time, causing the memory verifications in later tests to fail.
Running the garbage collector before checking the Arrow memory usage seems to fix this issue; a rough sketch of the idea follows.
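This is a simplified sketch, assuming `pyarrow.total_allocated_bytes()` as the counter; the helper merged in this PR may differ in details:
```python
import gc
from contextlib import contextmanager

import pyarrow as pa


@contextmanager
def assert_arrow_memory_increases():
    gc.collect()  # free dangling Arrow objects left over from previous tests
    previous_allocated = pa.total_allocated_bytes()
    yield
    assert pa.total_allocated_bytes() > previous_allocated
```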
I added a context manager `assert_arrow_memory_increases` that we can use in tests and that deals with the gc. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2042/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2042/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2041/comments | https://api.github.com/repos/huggingface/datasets/issues/2041/events | https://github.com/huggingface/datasets/pull/2041 | 830,180,803 | MDExOlB1bGxSZXF1ZXN0NTkxNzMyNzMw | 2,041 | Doc2dial update data_infos and data_loaders | {
"login": "songfeng",
"id": 2062185,
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songfeng",
"html_url": "https://github.com/songfeng",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"repos_url": "https://api.github.com/users/songfeng/repos",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,559,969,000 | 1,615,892,960,000 | 1,615,892,960,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2041",
"html_url": "https://github.com/huggingface/datasets/pull/2041",
"diff_url": "https://github.com/huggingface/datasets/pull/2041.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2041.patch",
"merged_at": 1615892960000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2041/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2040 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2040/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2040/comments | https://api.github.com/repos/huggingface/datasets/issues/2040/events | https://github.com/huggingface/datasets/issues/2040 | 830,169,387 | MDU6SXNzdWU4MzAxNjkzODc= | 2,040 | ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk | {
"login": "simonschoe",
"id": 53626067,
"node_id": "MDQ6VXNlcjUzNjI2MDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonschoe",
"html_url": "https://github.com/simonschoe",
"followers_url": "https://api.github.com/users/simonschoe/followers",
"following_url": "https://api.github.com/users/simonschoe/following{/other_user}",
"gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions",
"organizations_url": "https://api.github.com/users/simonschoe/orgs",
"repos_url": "https://api.github.com/users/simonschoe/repos",
"events_url": "https://api.github.com/users/simonschoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonschoe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,559,220,000 | 1,628,100,043,000 | 1,628,100,043,000 | NONE | null | null | null | Hi there,
I am trying to concatenate two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DatasetDict` objects; `PATH_DATA_CLS_*` are `Path` objects):
```python
concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])
```
Yielding the following error:
```python
ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk.
However datasets' indices [1] come from memory and datasets' indices [0] come from disk.
```
I've been trying to solve this for quite some time now. Both `DatasetDict` objects were created by reading in a `csv` via `load_dataset` and subsequently processed using the various `datasets` methods (i.e. filter, map, remove column, rename column). I can't figure it out, though... A sketch of a possible workaround is shown below.
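A minimal sketch of a possible workaround, assuming the in-memory indices mapping comes from operations like `filter`/`select`/`shuffle` on one of the splits; `Dataset.flatten_indices()` materializes the mapping, which may make the two splits concatenable again (unverified for this exact case):
```python
from datasets import concatenate_datasets, load_from_disk

# flatten_indices() resolves any indices mapping into a plain arrow table
ds_a = load_from_disk(PATH_DATA_CLS_A)["train"].flatten_indices()
ds_b = load_from_disk(PATH_DATA_CLS_B)["train"].flatten_indices()
combined = concatenate_datasets([ds_a, ds_b])
```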
`load_from_disk(PATH_DATA_CLS_A)['train']` yields:
```python
Dataset({
features: ['labels', 'text'],
num_rows: 785
})
```
`load_from_disk(PATH_DATA_CLS_B)['train']` yields:
```python
Dataset({
features: ['labels', 'text'],
num_rows: 3341
})
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2040/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2039/comments | https://api.github.com/repos/huggingface/datasets/issues/2039/events | https://github.com/huggingface/datasets/pull/2039 | 830,047,652 | MDExOlB1bGxSZXF1ZXN0NTkxNjE3ODY3 | 2,039 | Doc2dial rc | {
"login": "songfeng",
"id": 2062185,
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songfeng",
"html_url": "https://github.com/songfeng",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"repos_url": "https://api.github.com/users/songfeng/repos",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,550,188,000 | 1,615,563,156,000 | 1,615,563,156,000 | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2039",
"html_url": "https://github.com/huggingface/datasets/pull/2039",
"diff_url": "https://github.com/huggingface/datasets/pull/2039.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2039.patch",
"merged_at": null
} | Added fix to handle the last turn that is a user turn. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2039/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2038/comments | https://api.github.com/repos/huggingface/datasets/issues/2038/events | https://github.com/huggingface/datasets/issues/2038 | 830,036,875 | MDU6SXNzdWU4MzAwMzY4NzU= | 2,038 | outdated dataset_infos.json might fail verifications | {
"login": "songfeng",
"id": 2062185,
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songfeng",
"html_url": "https://github.com/songfeng",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"repos_url": "https://api.github.com/users/songfeng/repos",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,549,314,000 | 1,615,912,060,000 | 1,615,912,060,000 | CONTRIBUTOR | null | null | null | The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It would fail data_loader when verifying download checksum etc..
Could you please update this file or point me how to update this file?
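As a temporary workaround (not a fix), the verification step can usually be skipped when loading, assuming the installed `datasets` version supports the `ignore_verifications` flag (the config name below is illustrative):
```python
from datasets import load_dataset

# skips the checksum/size verification against dataset_infos.json
dataset = load_dataset("doc2dial", "dialogue_domain", ignore_verifications=True)
```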
Thank you. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2038/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2037/comments | https://api.github.com/repos/huggingface/datasets/issues/2037/events | https://github.com/huggingface/datasets/pull/2037 | 829,919,685 | MDExOlB1bGxSZXF1ZXN0NTkxNTA4MTQz | 2,037 | Fix: Wikipedia - save memory by replacing root.clear with elem.clear | {
"login": "miyamonz",
"id": 6331508,
"node_id": "MDQ6VXNlcjYzMzE1MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/miyamonz",
"html_url": "https://github.com/miyamonz",
"followers_url": "https://api.github.com/users/miyamonz/followers",
"following_url": "https://api.github.com/users/miyamonz/following{/other_user}",
"gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions",
"organizations_url": "https://api.github.com/users/miyamonz/orgs",
"repos_url": "https://api.github.com/users/miyamonz/repos",
"events_url": "https://api.github.com/users/miyamonz/events{/privacy}",
"received_events_url": "https://api.github.com/users/miyamonz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,540,920,000 | 1,616,479,696,000 | 1,615,892,482,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2037",
"html_url": "https://github.com/huggingface/datasets/pull/2037",
"diff_url": "https://github.com/huggingface/datasets/pull/2037.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2037.patch",
"merged_at": 1615892482000
} | see: https://github.com/huggingface/datasets/issues/2031
What I did:
- replace root.clear with elem.clear
- remove lines to get root element
- $ make style
- $ make test
- some tests required additional pip packages, so I installed them.
Test results on origin/master and my branch are the same. I think the failure is not related to my modification, is it?
```
==================================================================================== short test summary info ====================================================================================
FAILED tests/test_arrow_writer.py::TypedSequenceTest::test_catch_overflow - AssertionError: OverflowError not raised
============================================================= 1 failed, 2332 passed, 5138 skipped, 70 warnings in 91.75s (0:01:31) ==============================================================
make: *** [Makefile:19: test] Error 1
```
Is there anything else I should do? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2037/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2036/comments | https://api.github.com/repos/huggingface/datasets/issues/2036/events | https://github.com/huggingface/datasets/issues/2036 | 829,909,258 | MDU6SXNzdWU4Mjk5MDkyNTg= | 2,036 | Cannot load wikitext | {
"login": "Gpwner",
"id": 19349207,
"node_id": "MDQ6VXNlcjE5MzQ5MjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gpwner",
"html_url": "https://github.com/Gpwner",
"followers_url": "https://api.github.com/users/Gpwner/followers",
"following_url": "https://api.github.com/users/Gpwner/following{/other_user}",
"gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions",
"organizations_url": "https://api.github.com/users/Gpwner/orgs",
"repos_url": "https://api.github.com/users/Gpwner/repos",
"events_url": "https://api.github.com/users/Gpwner/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gpwner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,540,179,000 | 1,615,797,902,000 | 1,615,797,884,000 | NONE | null | null | null | When I execute this code:
```
>>> from datasets import load_dataset
>>> test_dataset = load_dataset("wikitext")
```
I got an error; any help?
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 487, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/wikitext/wikitext.py
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2036/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2035/comments | https://api.github.com/repos/huggingface/datasets/issues/2035/events | https://github.com/huggingface/datasets/issues/2035 | 829,475,544 | MDU6SXNzdWU4Mjk0NzU1NDQ= | 2,035 | wiki40b/wikipedia for almost all languages cannot be downloaded | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,492,494,000 | 1,615,906,417,000 | null | NONE | null | null | null | Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting the error below. @lhoestq I would be grateful if you could assist me with this. I get this error for almost all languages except English.
I really need the majority of the languages in this dataset to be able to train my models for a deadline, and your great, scalable, super well-written library is my only hope for training the models at scale while being low on resources.
Thank you very much.
```
(fast) dara@vgne046:/user/dara/dev/codes/seq2seq$ python test_data.py
Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to temp/dara/cache_home_2/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...
Traceback (most recent call last):
File "test_data.py", line 3, in <module>
dataset = load_dataset("wiki40b", "cs")
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset
use_auth_token=use_auth_token,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 579, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 1105, in _download_and_prepare
import apache_beam as beam
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/__init__.py", line 96, in <module>
from apache_beam import io
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/__init__.py", line 23, in <module>
from apache_beam.io.avroio import *
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/avroio.py", line 55, in <module>
import avro
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 668, in _load_unlocked
File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 34, in <module>
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 30, in LoadResource
NotADirectoryError: [Errno 20] Not a directory: '/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/VERSION.txt'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2035/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2034/comments | https://api.github.com/repos/huggingface/datasets/issues/2034/events | https://github.com/huggingface/datasets/pull/2034 | 829,381,388 | MDExOlB1bGxSZXF1ZXN0NTkxMDU2MTEw | 2,034 | Fix typo | {
"login": "pcyin",
"id": 3413464,
"node_id": "MDQ6VXNlcjM0MTM0NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3413464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcyin",
"html_url": "https://github.com/pcyin",
"followers_url": "https://api.github.com/users/pcyin/followers",
"following_url": "https://api.github.com/users/pcyin/following{/other_user}",
"gists_url": "https://api.github.com/users/pcyin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcyin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcyin/subscriptions",
"organizations_url": "https://api.github.com/users/pcyin/orgs",
"repos_url": "https://api.github.com/users/pcyin/repos",
"events_url": "https://api.github.com/users/pcyin/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcyin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,484,773,000 | 1,615,485,985,000 | 1,615,485,985,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2034",
"html_url": "https://github.com/huggingface/datasets/pull/2034",
"diff_url": "https://github.com/huggingface/datasets/pull/2034.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2034.patch",
"merged_at": 1615485985000
} | Change `ENV_XDG_CACHE_HOME` to `XDG_CACHE_HOME`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2034/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2033/comments | https://api.github.com/repos/huggingface/datasets/issues/2033/events | https://github.com/huggingface/datasets/pull/2033 | 829,295,339 | MDExOlB1bGxSZXF1ZXN0NTkwOTgzMDAy | 2,033 | Raise an error for outdated sacrebleu versions | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,478,880,000 | 1,615,485,492,000 | 1,615,485,492,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2033",
"html_url": "https://github.com/huggingface/datasets/pull/2033",
"diff_url": "https://github.com/huggingface/datasets/pull/2033.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2033.patch",
"merged_at": 1615485492000
} | The `sacrebleu` metric seems to only work with sacrebleu>=1.4.12.
For example using sacrebleu==1.2.10, an error is raised (from metric/sacrebleu/sacrebleu.py):
```python
def _compute(
self,
predictions,
references,
smooth_method="exp",
smooth_value=None,
force=False,
lowercase=False,
tokenize=scb.DEFAULT_TOKENIZER,
use_effective_order=False,
):
references_per_prediction = len(references[0])
if any(len(refs) != references_per_prediction for refs in references):
raise ValueError("Sacrebleu requires the same number of references for each prediction")
transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)]
> output = scb.corpus_bleu(
sys_stream=predictions,
ref_streams=transformed_references,
smooth_method=smooth_method,
smooth_value=smooth_value,
force=force,
lowercase=lowercase,
tokenize=tokenize,
use_effective_order=use_effective_order,
)
E TypeError: corpus_bleu() got an unexpected keyword argument 'smooth_method'
/mnt/cache/modules/datasets_modules/metrics/sacrebleu/b390045b3d1dd4abf6a95c4a2a11ee3bcc2b7620b076204d0ddc353fa649fd86/sacrebleu.py:114: TypeError
```
I improved the error message when users have an outdated version of sacrebleu.
The new error message tells the user to update sacrebleu.
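For illustration, a hypothetical sketch of this kind of version guard (names and the exact message are illustrative, not necessarily the code in this PR):
```python
import sacrebleu as scb
from packaging import version

if version.parse(scb.__version__) < version.parse("1.4.12"):
    raise ImportError(
        "To use the `sacrebleu` metric, please update sacrebleu: "
        'pip install "sacrebleu>=1.4.12"'
    )
```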
cc @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2033/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2032 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2032/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2032/comments | https://api.github.com/repos/huggingface/datasets/issues/2032/events | https://github.com/huggingface/datasets/issues/2032 | 829,250,912 | MDU6SXNzdWU4MjkyNTA5MTI= | 2,032 | Use Arrow filtering instead of writing a new arrow file for Dataset.filter | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,475,930,000 | 1,615,483,257,000 | null | MEMBER | null | null | null | Currently the filter method reads the dataset batch by batch to write a new, filtered arrow file on disk. Therefore all the reading + writing can take some time.
Using a mask directly on the arrow table doesn't do any read or write operation, so it's significantly quicker.
I think there are two cases:
- if the dataset doesn't have an indices mapping, then one can simply use the arrow filtering on the main arrow table `dataset._data.filter(...)`
- if the dataset has an indices mapping, then the mask should be applied on the indices mapping table `dataset._indices.filter(...)`
The indices mapping is used to map between the idx at `dataset[idx]` in `__getitem__` and the idx in the actual arrow table.
The new filter method should therefore be faster, and allow users to pass either a filtering function (that returns a boolean given an example), or directly a mask.
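For illustration, a minimal sketch of the underlying Arrow operation on toy data; the real method would build the boolean mask from the user's predicate and apply it to `dataset._data` or `dataset._indices` as described above:
```python
import pyarrow as pa

table = pa.table({"text": ["a", "b", "c"], "label": [0, 1, 0]})
mask = pa.array([True, False, True])
filtered = table.filter(mask)  # in-memory; no new arrow file is written to disk
```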
Feel free to discuss this idea in this thread :)
One additional note: the refactor at #2025 would make all the pickle-related stuff work directly with the arrow filtering, so that we only need to change the Dataset.filter method without having to deal with pickle.
cc @theo-m @gchhablani
related issues: #1796 #1949 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2032/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2032/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2031/comments | https://api.github.com/repos/huggingface/datasets/issues/2031/events | https://github.com/huggingface/datasets/issues/2031 | 829,122,778 | MDU6SXNzdWU4MjkxMjI3Nzg= | 2,031 | wikipedia.py generator that extracts XML doesn't release memory | {
"login": "miyamonz",
"id": 6331508,
"node_id": "MDQ6VXNlcjYzMzE1MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/miyamonz",
"html_url": "https://github.com/miyamonz",
"followers_url": "https://api.github.com/users/miyamonz/followers",
"following_url": "https://api.github.com/users/miyamonz/following{/other_user}",
"gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions",
"organizations_url": "https://api.github.com/users/miyamonz/orgs",
"repos_url": "https://api.github.com/users/miyamonz/repos",
"events_url": "https://api.github.com/users/miyamonz/events{/privacy}",
"received_events_url": "https://api.github.com/users/miyamonz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,467,084,000 | 1,616,402,032,000 | 1,616,402,032,000 | CONTRIBUTOR | null | null | null | I tried downloading Japanese Wikipedia, but it always failed, probably because it ran out of memory.
I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop.
https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L464-L502
`root.clear()` is intended to clear memory, but it doesn't.
https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L490
https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L494
I replaced them with `elem.clear()`, then it seems to work correctly.
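For reference, the standard memory-friendly `iterparse` pattern looks roughly like this (simplified sketch; `xml_file` and `handle_page` are placeholders):
```python
import xml.etree.ElementTree as ET

for _event, elem in ET.iterparse(xml_file, events=("end",)):
    if elem.tag.endswith("page"):
        handle_page(elem)  # extract what you need from the subtree
        elem.clear()  # then drop it so the memory can be reclaimed
```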
Here is the notebook to reproduce it:
https://gist.github.com/miyamonz/dc06117302b6e85fa51cbf46dde6bb51#file-xtract_content-ipynb | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2031/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2030 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2030/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2030/comments | https://api.github.com/repos/huggingface/datasets/issues/2030/events | https://github.com/huggingface/datasets/pull/2030 | 829,110,803 | MDExOlB1bGxSZXF1ZXN0NTkwODI4NzQ4 | 2,030 | Implement Dataset from text | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,466,090,000 | 1,616,074,169,000 | 1,616,074,169,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2030",
"html_url": "https://github.com/huggingface/datasets/pull/2030",
"diff_url": "https://github.com/huggingface/datasets/pull/2030.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2030.patch",
"merged_at": 1616074169000
} | Implement `Dataset.from_text`.
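A minimal usage sketch of the new method (the file name is illustrative):
```python
from datasets import Dataset

ds = Dataset.from_text("my_corpus.txt")  # one example per line, in a "text" column
```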
Analogous to #1943 and #1946. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2030/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2029/comments | https://api.github.com/repos/huggingface/datasets/issues/2029/events | https://github.com/huggingface/datasets/issues/2029 | 829,097,290 | MDU6SXNzdWU4MjkwOTcyOTA= | 2,029 | Loading a faiss index KeyError | {
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,464,973,000 | 1,615,508,469,000 | 1,615,508,469,000 | NONE | null | null | null | I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a file
5. Create a new dataset (dataset2) with the same text and label information as dataset1
6. Try to load the faiss index from file to dataset2
7. Get `KeyError: "Column embeddings not in the dataset"`
I've made a colab notebook that should show exactly what I did. Please switch to GPU runtime; I didn't check on CPU.
https://colab.research.google.com/drive/1X0S9ZuZ8k0ybcoei4w7so6dS_WrABmIx?usp=sharing
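For reference, a condensed sketch of the save/load calls involved (column and file names are illustrative; see the notebook for the full code):
```python
# dataset1 already has an "embeddings" column of DPR vectors
dataset1.add_faiss_index(column="embeddings")
dataset1.save_faiss_index("embeddings", "my_index.faiss")

# dataset2 has the same text/label data but no "embeddings" column
dataset2.load_faiss_index("embeddings", "my_index.faiss")  # KeyError raised here
```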
Ubuntu Version
VERSION="18.04.5 LTS (Bionic Beaver)"
datasets==1.4.1
faiss==1.5.3
faiss-gpu==1.7.0
torch==1.8.0+cu101
transformers==4.3.3
NVIDIA-SMI 460.56
Driver Version: 460.32.03
CUDA Version: 11.2
Tesla K80
I was basically following the steps here: https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index
I included the exact code from the documentation at the end of the notebook to show that they don't work either.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2029/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2028 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2028/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2028/comments | https://api.github.com/repos/huggingface/datasets/issues/2028/events | https://github.com/huggingface/datasets/pull/2028 | 828,721,393 | MDExOlB1bGxSZXF1ZXN0NTkwNDk1NzEx | 2,028 | Adding PersiNLU reading-comprehension | {
"login": "danyaljj",
"id": 2441454,
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danyaljj",
"html_url": "https://github.com/danyaljj",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,437,673,000 | 1,615,801,197,000 | 1,615,801,197,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2028",
"html_url": "https://github.com/huggingface/datasets/pull/2028",
"diff_url": "https://github.com/huggingface/datasets/pull/2028.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2028.patch",
"merged_at": 1615801197000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2028/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2027 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2027/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2027/comments | https://api.github.com/repos/huggingface/datasets/issues/2027/events | https://github.com/huggingface/datasets/pull/2027 | 828,490,444 | MDExOlB1bGxSZXF1ZXN0NTkwMjkzNDA1 | 2,027 | Update format columns in Dataset.rename_columns | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,420,259,000 | 1,615,473,520,000 | 1,615,473,520,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2027",
"html_url": "https://github.com/huggingface/datasets/pull/2027",
"diff_url": "https://github.com/huggingface/datasets/pull/2027.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2027.patch",
"merged_at": 1615473520000
} | Fixes #2026 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2027/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2026 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2026/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2026/comments | https://api.github.com/repos/huggingface/datasets/issues/2026/events | https://github.com/huggingface/datasets/issues/2026 | 828,194,467 | MDU6SXNzdWU4MjgxOTQ0Njc= | 2,026 | KeyError on using map after renaming a column | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,402,457,000 | 1,615,473,574,000 | 1,615,473,520,000 | CONTRIBUTOR | null | null | null | Hi,
I'm trying to use the `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying the `prepare_train_features` function.
Here is what I try:
```python
transform = Compose([ToPILImage(),ToTensor(),Normalize([0.0,0.0,0.0],[1.0,1.0,1.0])])
def prepare_features(examples):
images = []
labels = []
print(examples)
for example_idx, example in enumerate(examples["image"]):
if transform is not None:
images.append(transform(examples["image"][example_idx].permute(2,0,1)))
else:
images.append(examples["image"][example_idx].permute(2,0,1))
labels.append(examples["label"][example_idx])
output = {"label":labels, "image":images}
return output
raw_dataset = load_dataset('cifar10')
raw_dataset.set_format('torch',columns=['img','label'])
raw_dataset = raw_dataset.rename_column('img','image')
features = datasets.Features({
"image": datasets.Array3D(shape=(3,32,32),dtype="float32"),
"label": datasets.features.ClassLabel(names=[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
]),
})
train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)
```
The error:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-54-bf29672c53ee> in <module>()
14 ]),
15 })
---> 16 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)
2 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1287 test_inputs = self[:2] if batched else self[0]
1288 test_indices = [0, 1] if batched else 0
-> 1289 update_data = does_function_return_dict(test_inputs, test_indices)
1290 logger.info("Testing finished, running the mapping function on the dataset")
1291
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices)
1258 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]
1259 processed_inputs = (
-> 1260 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1261 )
1262 does_return_dict = isinstance(processed_inputs, Mapping)
<ipython-input-52-b4dccbafb70d> in prepare_features(examples)
3 labels = []
4 print(examples)
----> 5 for example_idx, example in enumerate(examples["image"]):
6 if transform is not None:
7 images.append(transform(examples["image"][example_idx].permute(2,0,1)))
KeyError: 'image'
```
The print statement inside returns this:
```python
{'label': tensor([6, 9])}
```
Apparently, neither `img` nor `image` exists after renaming.
Note that this code works fine with `img` everywhere.
Notebook: https://colab.research.google.com/drive/1SzESAlz3BnVYrgQeJ838vbMp1OsukiA2?usp=sharing
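A possible workaround, assuming the format columns are simply not updated by the rename (which is what the linked fix #2027, "Update format columns in Dataset.rename_columns", suggests), is to re-apply the format after renaming so that `map` sees the new column:
```python
raw_dataset = raw_dataset.rename_column('img', 'image')
# Re-apply the torch format with the new column name:
raw_dataset.set_format('torch', columns=['image', 'label'])
```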
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2026/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2025 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2025/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2025/comments | https://api.github.com/repos/huggingface/datasets/issues/2025/events | https://github.com/huggingface/datasets/pull/2025 | 828,047,476 | MDExOlB1bGxSZXF1ZXN0NTg5ODk2NjMz | 2,025 | [Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,395,647,000 | 1,617,115,613,000 | 1,616,777,519,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2025",
"html_url": "https://github.com/huggingface/datasets/pull/2025",
"diff_url": "https://github.com/huggingface/datasets/pull/2025.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2025.patch",
"merged_at": 1616777518000
} | ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all form the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three tables classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2025/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2025/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2024/comments | https://api.github.com/repos/huggingface/datasets/issues/2024/events | https://github.com/huggingface/datasets/pull/2024 | 827,842,962 | MDExOlB1bGxSZXF1ZXN0NTg5NzEzNDAy | 2,024 | Remove print statement from mnist.py | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,387,198,000 | 1,615,485,832,000 | 1,615,485,831,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2024",
"html_url": "https://github.com/huggingface/datasets/pull/2024",
"diff_url": "https://github.com/huggingface/datasets/pull/2024.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2024.patch",
"merged_at": null
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2024/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2023 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2023/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2023/comments | https://api.github.com/repos/huggingface/datasets/issues/2023/events | https://github.com/huggingface/datasets/pull/2023 | 827,819,608 | MDExOlB1bGxSZXF1ZXN0NTg5NjkyNDU2 | 2,023 | Add Romanian to XQuAD | {
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,386,272,000 | 1,615,802,897,000 | 1,615,802,897,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2023",
"html_url": "https://github.com/huggingface/datasets/pull/2023",
"diff_url": "https://github.com/huggingface/datasets/pull/2023.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2023.patch",
"merged_at": 1615802897000
} | On Jan 18, XQuAD was updated with a new Romanian validation file ([xquad commit link](https://github.com/deepmind/xquad/commit/60cac411649156efb6aab9dd4c9cde787a2c0345))
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2023/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2022/comments | https://api.github.com/repos/huggingface/datasets/issues/2022/events | https://github.com/huggingface/datasets/issues/2022 | 827,435,033 | MDU6SXNzdWU4Mjc0MzUwMzM= | 2,022 | ValueError when rename_column on splitted dataset | {
"login": "simonschoe",
"id": 53626067,
"node_id": "MDQ6VXNlcjUzNjI2MDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonschoe",
"html_url": "https://github.com/simonschoe",
"followers_url": "https://api.github.com/users/simonschoe/followers",
"following_url": "https://api.github.com/users/simonschoe/following{/other_user}",
"gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions",
"organizations_url": "https://api.github.com/users/simonschoe/orgs",
"repos_url": "https://api.github.com/users/simonschoe/repos",
"events_url": "https://api.github.com/users/simonschoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonschoe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,369,238,000 | 1,615,903,568,000 | 1,615,903,505,000 | NONE | null | null | null | Hi there,
I am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so:
```python
split = {
'train': ReadInstruction('train', to=90, unit='%'),
'test': ReadInstruction('train', from_=-10, unit='%')
}
dataset = load_dataset(
path='csv', # use 'text' loading script to load from local txt-files
delimiter='\t', # xxx
data_files=text_files, # list of paths to local text files
split=split, # xxx
)
dataset
```
Part of output:
```python
DatasetDict({
train: Dataset({
features: ['sentence', 'sentiment'],
num_rows: 900
})
test: Dataset({
features: ['sentence', 'sentiment'],
num_rows: 100
})
})
```
Afterwards I'd like to rename the 'sentence' column to 'text' in order to be compatible with my modelin pipeline. If I run the following code I experience a `ValueError` however:
```python
dataset['train'].rename_column('sentence', 'text')
```
```python
/usr/local/lib/python3.7/dist-packages/datasets/splits.py in __init__(self, name)
353 for split_name in split_names_from_instruction:
354 if not re.match(_split_re, split_name):
--> 355 raise ValueError(f"Split name should match '{_split_re}'' but got '{split_name}'.")
356
357 def __str__(self):
ValueError: Split name should match '^\w+(\.\w+)*$'' but got 'ReadInstruction('.
```
In particular, this behavior does not arise if I use the deprecated `rename_column_` method. Any idea what causes the error? I would assume it's something in the way I defined the split.
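In case it helps, one way to sidestep the invalid split name, assuming the percent-based `ReadInstruction` split isn't essential, is to load a single split and use `train_test_split` instead (a sketch):
```python
dataset = load_dataset('csv', delimiter='\t', data_files=text_files, split='train')
dataset = dataset.train_test_split(test_size=0.1)  # yields proper 'train'/'test' split names
dataset['train'] = dataset['train'].rename_column('sentence', 'text')
```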
Thanks in advance! :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2022/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2021/comments | https://api.github.com/repos/huggingface/datasets/issues/2021/events | https://github.com/huggingface/datasets/issues/2021 | 826,988,016 | MDU6SXNzdWU4MjY5ODgwMTY= | 2,021 | Interactively doing save_to_disk and load_from_disk corrupts the datasets object? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,344,514,000 | 1,615,630,061,000 | 1,615,630,061,000 | NONE | null | null | null | dataset_info.json file saved after using save_to_disk gets corrupted as follows.

Is there a way to disable the cache that saves to `/tmp/huggingface/datasets`?
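For what it's worth, a couple of knobs exist for this; availability may depend on the installed `datasets` version, so treat this as a sketch:
```python
import datasets

# Turn off on-disk caching of transformed datasets:
datasets.set_caching_enabled(False)

# Or keep a dataset in memory when loading, so no cache files are written for it
# ("some_dataset" is a placeholder):
ds = datasets.load_dataset("some_dataset", keep_in_memory=True)
```
The cache location itself can also be redirected with the `HF_DATASETS_CACHE` environment variable instead of being disabled.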
I have a feeling there is a serious issue with caching. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2021/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2020/comments | https://api.github.com/repos/huggingface/datasets/issues/2020/events | https://github.com/huggingface/datasets/pull/2020 | 826,961,126 | MDExOlB1bGxSZXF1ZXN0NTg4OTE3MjYx | 2,020 | Remove unnecessary docstart check in conll-like datasets | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,342,816,000 | 1,615,469,617,000 | 1,615,469,617,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2020",
"html_url": "https://github.com/huggingface/datasets/pull/2020",
"diff_url": "https://github.com/huggingface/datasets/pull/2020.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2020.patch",
"merged_at": 1615469617000
} | Related to this PR: #1998
Additionally, this PR adds the docstart note to the conll2002 dataset card ([link](https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/ned.train) to the raw data with `DOCSTART` lines).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2020/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2019 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2019/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2019/comments | https://api.github.com/repos/huggingface/datasets/issues/2019/events | https://github.com/huggingface/datasets/pull/2019 | 826,625,706 | MDExOlB1bGxSZXF1ZXN0NTg4NjEyODgy | 2,019 | Replace print with logging in dataset scripts | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,323,574,000 | 1,615,543,741,000 | 1,615,479,259,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2019",
"html_url": "https://github.com/huggingface/datasets/pull/2019",
"diff_url": "https://github.com/huggingface/datasets/pull/2019.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2019.patch",
"merged_at": 1615479258000
} | Replaces `print(...)` in the dataset scripts with the library logger. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2019/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2018/comments | https://api.github.com/repos/huggingface/datasets/issues/2018/events | https://github.com/huggingface/datasets/pull/2018 | 826,473,764 | MDExOlB1bGxSZXF1ZXN0NTg4NDc0NTQz | 2,018 | Md gender card update | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,316,240,000 | 1,615,570,260,000 | 1,615,570,260,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2018",
"html_url": "https://github.com/huggingface/datasets/pull/2018",
"diff_url": "https://github.com/huggingface/datasets/pull/2018.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2018.patch",
"merged_at": 1615570260000
} | I updated the descriptions of the datasets as they appear in the HF repo and the descriptions of the source datasets according to what I could find from the paper and the references. I'm still a little unclear about some of the fields of the different configs, and there was little info on the word list and name list. I'll contact the authors to see if they have any additional information or suggested changes. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2018/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2017/comments | https://api.github.com/repos/huggingface/datasets/issues/2017/events | https://github.com/huggingface/datasets/pull/2017 | 826,428,578 | MDExOlB1bGxSZXF1ZXN0NTg4NDMyNDc2 | 2,017 | Add TF-based Features to handle different modes of data | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,314,592,000 | 1,615,984,328,000 | 1,615,984,327,000 | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2017",
"html_url": "https://github.com/huggingface/datasets/pull/2017",
"diff_url": "https://github.com/huggingface/datasets/pull/2017.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2017.patch",
"merged_at": null
} | Hi,
I am creating this draft PR to work on adding features similar to [TF datasets](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/core/features). I'll start with the `Tensor` and `FeatureConnector` classes and build upon them to add other features as well. This is a work in progress.
"url": "https://api.github.com/repos/huggingface/datasets/issues/2017/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2017/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2016/comments | https://api.github.com/repos/huggingface/datasets/issues/2016/events | https://github.com/huggingface/datasets/pull/2016 | 825,965,493 | MDExOlB1bGxSZXF1ZXN0NTg4MDA5NjEz | 2,016 | Not all languages have 2 digit codes. | {
"login": "asiddhant",
"id": 13891775,
"node_id": "MDQ6VXNlcjEzODkxNzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/13891775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asiddhant",
"html_url": "https://github.com/asiddhant",
"followers_url": "https://api.github.com/users/asiddhant/followers",
"following_url": "https://api.github.com/users/asiddhant/following{/other_user}",
"gists_url": "https://api.github.com/users/asiddhant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asiddhant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asiddhant/subscriptions",
"organizations_url": "https://api.github.com/users/asiddhant/orgs",
"repos_url": "https://api.github.com/users/asiddhant/repos",
"events_url": "https://api.github.com/users/asiddhant/events{/privacy}",
"received_events_url": "https://api.github.com/users/asiddhant/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,298,019,000 | 1,615,485,663,000 | 1,615,485,663,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2016",
"html_url": "https://github.com/huggingface/datasets/pull/2016",
"diff_url": "https://github.com/huggingface/datasets/pull/2016.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2016.patch",
"merged_at": 1615485663000
} | . | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2016/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2015/comments | https://api.github.com/repos/huggingface/datasets/issues/2015/events | https://github.com/huggingface/datasets/pull/2015 | 825,942,108 | MDExOlB1bGxSZXF1ZXN0NTg3OTg4NTQ0 | 2,015 | Fix ipython function creation in tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,297,019,000 | 1,615,298,764,000 | 1,615,298,763,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2015",
"html_url": "https://github.com/huggingface/datasets/pull/2015",
"diff_url": "https://github.com/huggingface/datasets/pull/2015.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2015.patch",
"merged_at": 1615298763000
} | The test at `tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function` was failing on Python 3.8 because the IPython function was not created properly.
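For context, this kind of failure typically comes from building a code object by hand: Python 3.8 added a `posonlyargcount` argument to `types.CodeType`, so passing arguments in the 3.7 order produces exactly `TypeError: an integer is required (got type bytes)`. A version-agnostic sketch (Python 3.8+) uses `CodeType.replace` instead:
```python
import types

def f():
    return 0

# Rebuilding the code object positionally breaks across 3.7/3.8 because the
# CodeType constructor signature changed; replace() avoids the issue:
new_code = f.__code__.replace(co_name="g")
g = types.FunctionType(new_code, globals(), "g")
```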
Fix #2010 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2015/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2014/comments | https://api.github.com/repos/huggingface/datasets/issues/2014/events | https://github.com/huggingface/datasets/pull/2014 | 825,916,531 | MDExOlB1bGxSZXF1ZXN0NTg3OTY1NDg3 | 2,014 | more explicit method parameters | {
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,295,909,000 | 1,615,370,917,000 | 1,615,370,916,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2014",
"html_url": "https://github.com/huggingface/datasets/pull/2014",
"diff_url": "https://github.com/huggingface/datasets/pull/2014.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2014.patch",
"merged_at": 1615370916000
} | re: #2009
Not super convinced this is better, and while I usually fight against kwargs, here it seems to me that it better conveys the relationship to the `_split_generators` method. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2014/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2013/comments | https://api.github.com/repos/huggingface/datasets/issues/2013/events | https://github.com/huggingface/datasets/pull/2013 | 825,694,305 | MDExOlB1bGxSZXF1ZXN0NTg3NzYzMTgx | 2,013 | Add Cryptonite dataset | {
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,285,931,000 | 1,615,318,027,000 | 1,615,318,026,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2013",
"html_url": "https://github.com/huggingface/datasets/pull/2013",
"diff_url": "https://github.com/huggingface/datasets/pull/2013.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2013.patch",
"merged_at": 1615318026000
} | cc @aviaefrat who's the original author of the dataset & paper, see https://github.com/aviaefrat/cryptonite | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2013/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2012 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2012/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2012/comments | https://api.github.com/repos/huggingface/datasets/issues/2012/events | https://github.com/huggingface/datasets/issues/2012 | 825,634,064 | MDU6SXNzdWU4MjU2MzQwNjQ= | 2,012 | No upstream branch | {
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,283,335,000 | 1,615,289,611,000 | 1,615,289,611,000 | CONTRIBUTOR | null | null | null | Feels like the documentation on adding a new dataset is outdated?
https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54
There is no `upstream` branch on the remote. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2012/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2011 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2011/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2011/comments | https://api.github.com/repos/huggingface/datasets/issues/2011/events | https://github.com/huggingface/datasets/pull/2011 | 825,621,952 | MDExOlB1bGxSZXF1ZXN0NTg3Njk4MTAx | 2,011 | Add RoSent Dataset | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,282,808,000 | 1,615,485,652,000 | 1,615,485,652,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2011",
"html_url": "https://github.com/huggingface/datasets/pull/2011",
"diff_url": "https://github.com/huggingface/datasets/pull/2011.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2011.patch",
"merged_at": 1615485652000
} | This PR adds a Romanian sentiment analysis dataset. It also closes the pending PR #1529.
I had to add an `original_id` feature because the dataset files have repeated IDs. I can remove it if needed. I have also added an `id` feature which is unique; see the sketch below.
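A sketch of the id scheme, assuming a csv-like file layout and hypothetical column names:
```python
import csv

def generate_examples(filepath):
    with open(filepath, encoding="utf-8") as f:
        for idx, row in enumerate(csv.DictReader(f)):
            # `idx` is unique by construction; `original_id` keeps the file's repeated id.
            yield idx, {"id": idx, "original_id": row["id"], "sentence": row["sentence"]}
```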
Let me know in case of any issues. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2011/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2010/comments | https://api.github.com/repos/huggingface/datasets/issues/2010/events | https://github.com/huggingface/datasets/issues/2010 | 825,567,635 | MDU6SXNzdWU4MjU1Njc2MzU= | 2,010 | Local testing fails | {
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,280,498,000 | 1,615,298,763,000 | 1,615,298,763,000 | CONTRIBUTOR | null | null | null | I'm following the CI setup as described in
https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19
in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4
and getting
```
FAILED tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function - TypeError: an integer is required (got type bytes)
1 failed, 2321 passed, 5109 skipped, 10 warnings in 124.32s (0:02:04)
```
Seems like a discrepancy with CI; perhaps a library version that isn't pinned?
Tried with `pyarrow=={1.0.0,0.17.1,2.0.0}` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2010/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2009 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2009/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2009/comments | https://api.github.com/repos/huggingface/datasets/issues/2009/events | https://github.com/huggingface/datasets/issues/2009 | 825,541,366 | MDU6SXNzdWU4MjU1NDEzNjY= | 2,009 | Ambiguous documentation | {
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | {
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,279,331,000 | 1,615,561,294,000 | 1,615,561,294,000 | CONTRIBUTOR | null | null | null | https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158
Looking at the template, I find this documentation line confusing: the method parameters don't include `gen_kwargs`, so I'm unclear where they're coming from.
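For readers hitting the same confusion, here is a minimal sketch of how `gen_kwargs` flows between the two builder methods; the URL and field names below are illustrative, not from the template:
```python
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        path = dl_manager.download_and_extract("https://example.com/data.txt")
        # Whatever is placed in gen_kwargs here ...
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": path})
        ]

    def _generate_examples(self, filepath):
        # ... arrives as the keyword arguments of _generate_examples.
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```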
Happy to push a PR with a clearer statement when I understand the meaning. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2009/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2008/comments | https://api.github.com/repos/huggingface/datasets/issues/2008/events | https://github.com/huggingface/datasets/pull/2008 | 825,153,804 | MDExOlB1bGxSZXF1ZXN0NTg3Mjc1Njk4 | 2,008 | Fix various typos/grammar in the docs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,253,968,000 | 1,615,833,769,000 | 1,615,285,292,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2008",
"html_url": "https://github.com/huggingface/datasets/pull/2008",
"diff_url": "https://github.com/huggingface/datasets/pull/2008.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2008.patch",
"merged_at": 1615285292000
} | This PR:
* fixes various typos/grammar issues I came across while reading the docs
* adds the "Install with conda" installation instructions
Closes #1959 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2008/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2007 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2007/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2007/comments | https://api.github.com/repos/huggingface/datasets/issues/2007/events | https://github.com/huggingface/datasets/issues/2007 | 824,518,158 | MDU6SXNzdWU4MjQ1MTgxNTg= | 2,007 | How to not load huggingface datasets into memory | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,206,926,000 | 1,628,100,145,000 | 1,628,100,145,000 | NONE | null | null | null | Hi
I am running this example from transformers library version 4.3.3:
(Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box)
USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --task translation_en_to_ro --output_dir /test/test_large --do_train --do_eval --predict_with_generate --max_train_samples 500 --max_val_samples 500 --max_source_length 128 --max_target_length 128 --sortish_sampler --per_device_train_batch_size 8 --val_max_target_length 128 --deepspeed ds_config.json --num_train_epochs 1 --eval_steps 25000 --warmup_steps 500 --overwrite_output_dir
(Here please find the script: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py)
If I do not pass max_train_samples in the above command (i.e., if I load the full dataset), I get a memory issue on a GPU with 24 GB of memory.
I need to train a large-scale mT5 model on large-scale Wikipedia datasets (several of them concatenated, or other multilingual datasets such as OPUS). Could you help me avoid loading the full data into memory, so that the scripts do not depend on the dataset size?
In the above example, I was hoping the script could work independently of the dataset size, so I can still train the model without subsampling the training set.
thank you so much @lhoestq for your great help in advance
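As a side note, here is a minimal sketch of dataset streaming, which avoids preparing the full dataset locally; this assumes a `datasets` version that supports `streaming=True` (it is not available in 1.2.1). Non-streaming `load_dataset` also memory-maps the Arrow files from disk rather than holding them in RAM:
```python
from datasets import load_dataset

# streaming=True iterates over the source data lazily instead of
# building the full Arrow table up front.
streamed = load_dataset("wmt16", "ro-en", split="train", streaming=True)

for example in streamed.take(3):  # inspect a few examples without loading everything
    print(example["translation"])
```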
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2007/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2006/comments | https://api.github.com/repos/huggingface/datasets/issues/2006/events | https://github.com/huggingface/datasets/pull/2006 | 824,457,794 | MDExOlB1bGxSZXF1ZXN0NTg2Njg5Nzk2 | 2,006 | Don't gitignore dvc.lock | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,201,988,000 | 1,615,202,915,000 | 1,615,202,914,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2006",
"html_url": "https://github.com/huggingface/datasets/pull/2006",
"diff_url": "https://github.com/huggingface/datasets/pull/2006.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2006.patch",
"merged_at": 1615202914000
} | The benchmark runs are [failing](https://github.com/huggingface/datasets/runs/2055534629?check_suite_focus=true) because of
```
ERROR: 'dvc.lock' is git-ignored.
```
I removed the dvc.lock file from the .gitignore to fix that. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2006/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2005/comments | https://api.github.com/repos/huggingface/datasets/issues/2005/events | https://github.com/huggingface/datasets/issues/2005 | 824,275,035 | MDU6SXNzdWU4MjQyNzUwMzU= | 2,005 | Setting to torch format not working with torchvision and MNIST | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,189,091,000 | 1,615,312,693,000 | 1,615,312,693,000 | CONTRIBUTOR | null | null | null | Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```python
def prepare_features(examples):
    images = []
    labels = []
    for example_idx, example in enumerate(examples["image"]):
        if transform is not None:
            images.append(transform(
                np.array(examples["image"][example_idx], dtype=np.uint8)
            ))
        else:
            images.append(torch.tensor(np.array(examples["image"][example_idx], dtype=np.uint8)))
        labels.append(torch.tensor(examples["label"][example_idx]))
    output = {"label": labels, "image": images}
    return output

raw_dataset = load_dataset('mnist')
train_dataset = raw_dataset.map(prepare_features, batched=True, batch_size=10000)
train_dataset.set_format("torch", columns=["image", "label"])
```
After this, I check the type of the following:
```python
print(type(train_dataset["train"]["label"]))
print(type(train_dataset["train"]["image"][0]))
```
This leads to the following output:
```python
<class 'torch.Tensor'>
<class 'list'>
```
I use `torch.utils.data.DataLoader` for batches; the type of `batch["train"]["image"]` is also `<class 'list'>`.
I don't understand why only the `label` is converted to a torch tensor; why does the image not get converted? How can I fix this issue?
Thanks,
Gunjan
EDIT:
I just checked the shapes and the types: `batch["image"]` is actually a list of lists of tensors. The shape is (1,28,2,28), where `batch_size` is 2. I don't understand why this is happening; ideally it should be a tensor of shape (2,1,28,28).
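(A hedged sketch of a possible workaround, assuming a `datasets` version that provides `set_transform` for on-the-fly conversion; the exact API may differ in older releases:)
```python
import numpy as np
import torch

def to_tensors(batch):
    # Convert each image to a tensor at access time instead of storing
    # nested Python lists in the Arrow table.
    images = [torch.tensor(np.array(img, dtype=np.uint8)) for img in batch["image"]]
    return {"image": torch.stack(images), "label": torch.tensor(batch["label"])}

raw_dataset["train"].set_transform(to_tensors)
```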
EDIT 2:
Inside `prepare_train_features`, the shape of `images[0]` is `torch.Size([1,28,28])`, so the conversion is working. However, the output of `map` is a list of lists of lists of lists. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2005/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2004/comments | https://api.github.com/repos/huggingface/datasets/issues/2004/events | https://github.com/huggingface/datasets/pull/2004 | 824,080,760 | MDExOlB1bGxSZXF1ZXN0NTg2MzcyODY1 | 2,004 | LaRoSeDa | {
"login": "MihaelaGaman",
"id": 6823177,
"node_id": "MDQ6VXNlcjY4MjMxNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6823177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MihaelaGaman",
"html_url": "https://github.com/MihaelaGaman",
"followers_url": "https://api.github.com/users/MihaelaGaman/followers",
"following_url": "https://api.github.com/users/MihaelaGaman/following{/other_user}",
"gists_url": "https://api.github.com/users/MihaelaGaman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MihaelaGaman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MihaelaGaman/subscriptions",
"organizations_url": "https://api.github.com/users/MihaelaGaman/orgs",
"repos_url": "https://api.github.com/users/MihaelaGaman/repos",
"events_url": "https://api.github.com/users/MihaelaGaman/events{/privacy}",
"received_events_url": "https://api.github.com/users/MihaelaGaman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,165,592,000 | 1,615,977,800,000 | 1,615,977,800,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2004",
"html_url": "https://github.com/huggingface/datasets/pull/2004",
"diff_url": "https://github.com/huggingface/datasets/pull/2004.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2004.patch",
"merged_at": 1615977800000
} | Add LaRoSeDa to huggingface datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2004/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2003 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2003/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2003/comments | https://api.github.com/repos/huggingface/datasets/issues/2003/events | https://github.com/huggingface/datasets/issues/2003 | 824,034,678 | MDU6SXNzdWU4MjQwMzQ2Nzg= | 2,003 | Messages are being printed to the `stdout` | {
"login": "mahnerak",
"id": 1367529,
"node_id": "MDQ6VXNlcjEzNjc1Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1367529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mahnerak",
"html_url": "https://github.com/mahnerak",
"followers_url": "https://api.github.com/users/mahnerak/followers",
"following_url": "https://api.github.com/users/mahnerak/following{/other_user}",
"gists_url": "https://api.github.com/users/mahnerak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mahnerak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mahnerak/subscriptions",
"organizations_url": "https://api.github.com/users/mahnerak/orgs",
"repos_url": "https://api.github.com/users/mahnerak/repos",
"events_url": "https://api.github.com/users/mahnerak/events{/privacy}",
"received_events_url": "https://api.github.com/users/mahnerak/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,154,974,000 | 1,615,830,467,000 | null | NONE | null | null | null | In this code segment, we can see some messages are being printed to the `stdout`.
https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554
According to the comment, this is done intentionally, but I don't really understand why we don't log it at a higher level or print it directly to `stderr`.
In my opinion, this kind of message should never be printed to stdout. At the very least, some configuration flag should be available to explicitly prevent the package from contaminating stdout.
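Until such a flag exists, a minimal user-side workaround (standard library only) could look like this:
```python
import contextlib
import io

from datasets import load_dataset

# Capture anything the builder prints to stdout so it cannot pollute
# the program's own output stream.
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    dataset = load_dataset("squad", split="train")
```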
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2003/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2002/comments | https://api.github.com/repos/huggingface/datasets/issues/2002/events | https://github.com/huggingface/datasets/pull/2002 | 823,955,744 | MDExOlB1bGxSZXF1ZXN0NTg2MjgwNzE3 | 2,002 | MOROCO | {
"login": "MihaelaGaman",
"id": 6823177,
"node_id": "MDQ6VXNlcjY4MjMxNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6823177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MihaelaGaman",
"html_url": "https://github.com/MihaelaGaman",
"followers_url": "https://api.github.com/users/MihaelaGaman/followers",
"following_url": "https://api.github.com/users/MihaelaGaman/following{/other_user}",
"gists_url": "https://api.github.com/users/MihaelaGaman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MihaelaGaman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MihaelaGaman/subscriptions",
"organizations_url": "https://api.github.com/users/MihaelaGaman/orgs",
"repos_url": "https://api.github.com/users/MihaelaGaman/repos",
"events_url": "https://api.github.com/users/MihaelaGaman/events{/privacy}",
"received_events_url": "https://api.github.com/users/MihaelaGaman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,134,137,000 | 1,616,147,526,000 | 1,616,147,526,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2002",
"html_url": "https://github.com/huggingface/datasets/pull/2002",
"diff_url": "https://github.com/huggingface/datasets/pull/2002.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2002.patch",
"merged_at": 1616147526000
} | Add MOROCO to huggingface datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2002/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2001/comments | https://api.github.com/repos/huggingface/datasets/issues/2001/events | https://github.com/huggingface/datasets/issues/2001 | 823,946,706 | MDU6SXNzdWU4MjM5NDY3MDY= | 2,001 | Empty evidence document ("provenance") in KILT ELI5 dataset | {
"login": "donggyukimc",
"id": 16605764,
"node_id": "MDQ6VXNlcjE2NjA1NzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donggyukimc",
"html_url": "https://github.com/donggyukimc",
"followers_url": "https://api.github.com/users/donggyukimc/followers",
"following_url": "https://api.github.com/users/donggyukimc/following{/other_user}",
"gists_url": "https://api.github.com/users/donggyukimc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donggyukimc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donggyukimc/subscriptions",
"organizations_url": "https://api.github.com/users/donggyukimc/orgs",
"repos_url": "https://api.github.com/users/donggyukimc/repos",
"events_url": "https://api.github.com/users/donggyukimc/events{/privacy}",
"received_events_url": "https://api.github.com/users/donggyukimc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,131,695,000 | 1,615,960,261,000 | 1,615,960,261,000 | NONE | null | null | null | In the original KILT benchmark(https://github.com/facebookresearch/KILT),
every sample has its evidence document (i.e., a Wikipedia page id) for prediction.
For example, a sample in the ELI5 dataset has the following format, including provenance (= evidence document):
`{"id": "1kiwfx", "input": "In Trading Places (1983, Akroyd/Murphy) how does the scheme at the end of the movie work? Why would buying a lot of OJ at a high price ruin the Duke Brothers?", "output": [{"answer": "I feel so old. People have been askinbg what happened at the end of this movie for what must be the last 15 years of my life. It never stops. Every year/month/fortnight, I see someone asking what happened, and someone explaining. Andf it will keep on happening, until I am 90yrs old, in a home, with nothing but the Internet and my bladder to keep me going. And there it will be: \"what happens at the end of Trading Places?\""}, {"provenance": [{"wikipedia_id": "242855", "title": "Futures contract", "section": "Section::::Abstract.", "start_paragraph_id": 1, "start_character": 14, "end_paragraph_id": 1, "end_character": 612, "bleu_score": 0.9232808519770748}]}], "meta": {"partial_evidence": [{"wikipedia_id": "520990", "title": "Trading Places", "section": "Section::::Plot.\n", "start_paragraph_id": 7, "end_paragraph_id": 7, "meta": {"evidence_span": ["On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts.", "On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts. Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice.", "Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice."]}}]}}`
However, the KILT ELI5 dataset from the huggingface datasets library only contains an empty list of provenance:
`{'id': '1oy5tc', 'input': 'in football whats the point of wasting the first two plays with a rush - up the middle - not regular rush plays i get those', 'meta': {'left_context': '', 'mention': '', 'obj_surface': [], 'partial_evidence': [], 'right_context': '', 'sub_surface': [], 'subj_aliases': [], 'template_questions': []}, 'output': [{'answer': 'In most cases the O-Line is supposed to make a hole for the running back to go through. If you run too many plays to the outside/throws the defense will catch on.\n\nAlso, 2 5 yard plays gets you a new set of downs.', 'meta': {'score': 2}, 'provenance': []}, {'answer': "I you don't like those type of plays, watch CFL. We only get 3 downs so you can't afford to waste one. Lots more passing.", 'meta': {'score': 2}, 'provenance': []}]}
`
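(For reference, a minimal sketch of the load call in question; the config name "eli5" is assumed from the KILT tasks naming:)
```python
from datasets import load_dataset

kilt_eli5 = load_dataset("kilt_tasks", "eli5", split="validation")
print(kilt_eli5[0]["output"][0]["provenance"])  # observed to be an empty list here
```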
Should I perform some other procedure to obtain the evidence documents? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2001/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2000/comments | https://api.github.com/repos/huggingface/datasets/issues/2000/events | https://github.com/huggingface/datasets/issues/2000 | 823,899,910 | MDU6SXNzdWU4MjM4OTk5MTA= | 2,000 | Windows Permission Error (most recent version of datasets) | {
"login": "itsLuisa",
"id": 73881148,
"node_id": "MDQ6VXNlcjczODgxMTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/73881148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itsLuisa",
"html_url": "https://github.com/itsLuisa",
"followers_url": "https://api.github.com/users/itsLuisa/followers",
"following_url": "https://api.github.com/users/itsLuisa/following{/other_user}",
"gists_url": "https://api.github.com/users/itsLuisa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itsLuisa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itsLuisa/subscriptions",
"organizations_url": "https://api.github.com/users/itsLuisa/orgs",
"repos_url": "https://api.github.com/users/itsLuisa/repos",
"events_url": "https://api.github.com/users/itsLuisa/events{/privacy}",
"received_events_url": "https://api.github.com/users/itsLuisa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,118,128,000 | 1,615,293,777,000 | 1,615,293,777,000 | NONE | null | null | null | Hi everyone,
Can anyone help me understand why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py, except that I want to load the data from three local three-column tsv files (id\ttokens\tpos_tags\n). I am using the most recent version of datasets. Thank you in advance!
Luisa
My script:
```
import datasets
import csv

logger = datasets.logging.get_logger(__name__)


class SampleConfig(datasets.BuilderConfig):
    def __init__(self, **kwargs):
        super(SampleConfig, self).__init__(**kwargs)


class Sample(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [
        SampleConfig(name="conll2003", version=datasets.Version("1.0.0"), description="Conll2003 dataset"),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description="Dataset with words and their POS-Tags",
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "tokens": datasets.Sequence(datasets.Value("string")),
                    "pos_tags": datasets.Sequence(
                        datasets.features.ClassLabel(
                            names=[
                                "''",
                                ",",
                                "-LRB-",
                                "-RRB-",
                                ".",
                                ":",
                                "CC",
                                "CD",
                                "DT",
                                "EX",
                                "FW",
                                "HYPH",
                                "IN",
                                "JJ",
                                "JJR",
                                "JJS",
                                "MD",
                                "NN",
                                "NNP",
                                "NNPS",
                                "NNS",
                                "PDT",
                                "POS",
                                "PRP",
                                "PRP$",
                                "RB",
                                "RBR",
                                "RBS",
                                "RP",
                                "TO",
                                "UH",
                                "VB",
                                "VBD",
                                "VBG",
                                "VBN",
                                "VBP",
                                "VBZ",
                                "WDT",
                                "WP",
                                "WRB",
                                "``"
                            ]
                        )
                    ),
                }
            ),
            supervised_keys=None,
            homepage="https://catalog.ldc.upenn.edu/LDC2011T03",
            citation="Weischedel, Ralph, et al. OntoNotes Release 4.0 LDC2011T03. Web Download. Philadelphia: Linguistic Data Consortium, 2011.",
        )

    def _split_generators(self, dl_manager):
        loaded_files = dl_manager.download_and_extract(self.config.data_files)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": loaded_files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": loaded_files["test"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": loaded_files["val"]})
        ]

    def _generate_examples(self, filepath):
        logger.info("generating examples from = %s", filepath)
        with open(filepath, encoding="cp1252") as f:
            data = csv.reader(f, delimiter="\t")
            ids = list()
            tokens = list()
            pos_tags = list()
            for id_, line in enumerate(data):
                # print(line)
                if len(line) == 1:
                    if tokens:
                        yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags}
                        ids = list()
                        tokens = list()
                        pos_tags = list()
                else:
                    ids.append(line[0])
                    tokens.append(line[1])
                    pos_tags.append(line[2])
            # last example
            yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags}


def main():
    dataset = datasets.load_dataset(
        "data_loading.py", data_files={
            "train": "train.tsv",
            "test": "test.tsv",
            "val": "val.tsv"
        }
    )
    # print(dataset)


if __name__ == "__main__":
    main()
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2000/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1999/comments | https://api.github.com/repos/huggingface/datasets/issues/1999/events | https://github.com/huggingface/datasets/pull/1999 | 823,753,591 | MDExOlB1bGxSZXF1ZXN0NTg2MTM5ODMy | 1,999 | Add FashionMNIST dataset | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,066,617,000 | 1,615,283,531,000 | 1,615,283,531,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1999",
"html_url": "https://github.com/huggingface/datasets/pull/1999",
"diff_url": "https://github.com/huggingface/datasets/pull/1999.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1999.patch",
"merged_at": 1615283531000
} | This PR adds the [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1999/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1998 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1998/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1998/comments | https://api.github.com/repos/huggingface/datasets/issues/1998/events | https://github.com/huggingface/datasets/pull/1998 | 823,723,960 | MDExOlB1bGxSZXF1ZXN0NTg2MTE4NTQ4 | 1,998 | Add -DOCSTART- note to dataset card of conll-like datasets | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,057,709,000 | 1,615,429,207,000 | 1,615,429,207,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1998",
"html_url": "https://github.com/huggingface/datasets/pull/1998",
"diff_url": "https://github.com/huggingface/datasets/pull/1998.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1998.patch",
"merged_at": null
} | Closes #1983 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1998/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1997/comments | https://api.github.com/repos/huggingface/datasets/issues/1997/events | https://github.com/huggingface/datasets/issues/1997 | 823,679,465 | MDU6SXNzdWU4MjM2Nzk0NjU= | 1,997 | from datasets import MoleculeDataset, GEOMDataset | {
"login": "futianfan",
"id": 5087210,
"node_id": "MDQ6VXNlcjUwODcyMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5087210?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/futianfan",
"html_url": "https://github.com/futianfan",
"followers_url": "https://api.github.com/users/futianfan/followers",
"following_url": "https://api.github.com/users/futianfan/following{/other_user}",
"gists_url": "https://api.github.com/users/futianfan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/futianfan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/futianfan/subscriptions",
"organizations_url": "https://api.github.com/users/futianfan/orgs",
"repos_url": "https://api.github.com/users/futianfan/repos",
"events_url": "https://api.github.com/users/futianfan/events{/privacy}",
"received_events_url": "https://api.github.com/users/futianfan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,045,819,000 | 1,615,047,206,000 | 1,615,047,206,000 | NONE | null | null | null | I encountered the ImportError: cannot import name 'MoleculeDataset' from 'datasets'. Has anyone encountered similar issues? Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1997/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1996/comments | https://api.github.com/repos/huggingface/datasets/issues/1996/events | https://github.com/huggingface/datasets/issues/1996 | 823,573,410 | MDU6SXNzdWU4MjM1NzM0MTA= | 1,996 | Error when exploring `arabic_speech_corpus` | {
"login": "elgeish",
"id": 6879673,
"node_id": "MDQ6VXNlcjY4Nzk2NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6879673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elgeish",
"html_url": "https://github.com/elgeish",
"followers_url": "https://api.github.com/users/elgeish/followers",
"following_url": "https://api.github.com/users/elgeish/following{/other_user}",
"gists_url": "https://api.github.com/users/elgeish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elgeish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elgeish/subscriptions",
"organizations_url": "https://api.github.com/users/elgeish/orgs",
"repos_url": "https://api.github.com/users/elgeish/repos",
"events_url": "https://api.github.com/users/elgeish/events{/privacy}",
"received_events_url": "https://api.github.com/users/elgeish/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,615,010,120,000 | 1,615,288,345,000 | null | NONE | null | null | null | Navigate to https://huggingface.co/datasets/viewer/?dataset=arabic_speech_corpus
Error:
```
ImportError: To be able to use this dataset, you need to install the following dependencies['soundfile'] using 'pip install soundfile' for instance'
Traceback:
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/script_runner.py", line 332, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 233, in <module>
configs = get_confs(option)
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py", line 604, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py", line 588, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp-viewer/run.py", line 145, in get_confs
module_path = nlp.load.prepare_module(path, dataset=True
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/datasets/load.py", line 342, in prepare_module
f"To be able to use this {module_type}, you need to install the following dependencies"
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1996/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1995/comments | https://api.github.com/repos/huggingface/datasets/issues/1995/events | https://github.com/huggingface/datasets/pull/1995 | 822,878,431 | MDExOlB1bGxSZXF1ZXN0NTg1NDI5NTg0 | 1,995 | [Timit_asr] Make sure not only the first sample is used | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,933,771,000 | 1,625,034,353,000 | 1,614,934,739,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1995",
"html_url": "https://github.com/huggingface/datasets/pull/1995",
"diff_url": "https://github.com/huggingface/datasets/pull/1995.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1995.patch",
"merged_at": 1614934739000
} | When playing around with TIMIT, I noticed that only the first sample was used for all indices. I corrected this typo so that the dataset is loaded correctly. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1995/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1995/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1994/comments | https://api.github.com/repos/huggingface/datasets/issues/1994/events | https://github.com/huggingface/datasets/issues/1994 | 822,871,238 | MDU6SXNzdWU4MjI4NzEyMzg= | 1,994 | not being able to get wikipedia es language | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,933,108,000 | 1,615,495,581,000 | null | NONE | null | null | null | Hi
I am trying to run code with the Wikipedia config 20200501.es and I am getting:
```
Traceback (most recent call last):
  File "run_mlm_t5.py", line 608, in <module>
    main()
  File "run_mlm_t5.py", line 359, in main
    datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
  File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/load.py", line 612, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 527, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 1050, in _download_and_prepare
    "\n\t`{}`".format(usage_example)
datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')`
```
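For completeness, the local-runner invocation suggested by the error message looks like this (as the message warns, it may run out of memory on large dumps):
```python
from datasets import load_dataset

# DirectRunner executes the Apache Beam pipeline locally.
wiki_es = load_dataset("wikipedia", "20200501.es", beam_runner="DirectRunner")
```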
thanks @lhoestq for any suggestion/help | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1994/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1993/comments | https://api.github.com/repos/huggingface/datasets/issues/1993/events | https://github.com/huggingface/datasets/issues/1993 | 822,758,387 | MDU6SXNzdWU4MjI3NTgzODc= | 1,993 | How to load a dataset with load_from disk and save it again after doing transformations without changing the original? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,921,950,000 | 1,616,385,950,000 | 1,616,385,950,000 | NONE | null | null | null | I am using the latest datasets library. In my work, I first use **load_from_disk** to load a dataset that contains 3.8 GB of information. Then, during my training process, I update that dataset object, add new elements, and save it in a different place.
When I save the dataset with **save_to_disk**, the original dataset, which is already on disk, also gets updated. I do not want to update it. How can I prevent this?
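A minimal sketch of the pattern that should leave the original untouched; the paths and the mapped field below are illustrative:
```python
from datasets import load_from_disk

original = load_from_disk("/data/original_dataset")

# map() does not modify `original` in place; it returns a new Dataset
# backed by new cache files.
updated = original.map(lambda example: {"text_length": len(example["text"])})

# Save the transformed copy to a *different* directory.
updated.save_to_disk("/data/updated_dataset")
```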
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1993/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1992 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1992/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1992/comments | https://api.github.com/repos/huggingface/datasets/issues/1992/events | https://github.com/huggingface/datasets/issues/1992 | 822,672,238 | MDU6SXNzdWU4MjI2NzIyMzg= | 1,992 | `datasets.map` multi processing much slower than single processing | {
"login": "hwijeen",
"id": 29157715,
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwijeen",
"html_url": "https://github.com/hwijeen",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,910,202,000 | 1,626,689,109,000 | null | NONE | null | null | null | Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.
My data preparation has roughly two steps: `load_dataset`, which splits corpora into a table of sentences, and `map`, which converts each sentence into a list of integers using a tokenizer.
I noticed that the `map` function with `num_proc=mp.cpu_count() // 2` takes more than 20 hours to finish the job, whereas `num_proc=1` gets the job done in about 5 hours. The machine I used has 40 cores with 126 GB of RAM. There were no other jobs running while the `map` function was running.
What could be the reason? I would be happy to provide information necessary to spot the reason.
p.s. I was experiencing the imbalance issue mentioned in [here](https://github.com/huggingface/datasets/issues/610#issuecomment-705177036) when I was using multi processing.
p.s.2 When I run `map` with `num_proc=1`, I see one tqdm bar but all the cores are working. When `num_proc=20`, only 20 cores work.

| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1992/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1992/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1991 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1991/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1991/comments | https://api.github.com/repos/huggingface/datasets/issues/1991/events | https://github.com/huggingface/datasets/pull/1991 | 822,554,473 | MDExOlB1bGxSZXF1ZXN0NTg1MTYwNDkx | 1,991 | Adding the conllpp dataset | {
"login": "ZihanWangKi",
"id": 21319243,
"node_id": "MDQ6VXNlcjIxMzE5MjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/21319243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZihanWangKi",
"html_url": "https://github.com/ZihanWangKi",
"followers_url": "https://api.github.com/users/ZihanWangKi/followers",
"following_url": "https://api.github.com/users/ZihanWangKi/following{/other_user}",
"gists_url": "https://api.github.com/users/ZihanWangKi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZihanWangKi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZihanWangKi/subscriptions",
"organizations_url": "https://api.github.com/users/ZihanWangKi/orgs",
"repos_url": "https://api.github.com/users/ZihanWangKi/repos",
"events_url": "https://api.github.com/users/ZihanWangKi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZihanWangKi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,896,383,000 | 1,615,977,459,000 | 1,615,977,459,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1991",
"html_url": "https://github.com/huggingface/datasets/pull/1991",
"diff_url": "https://github.com/huggingface/datasets/pull/1991.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1991.patch",
"merged_at": 1615977459000
} | Adding the conllpp dataset; this is a revision of https://github.com/huggingface/datasets/pull/1910. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1991/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1990 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1990/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1990/comments | https://api.github.com/repos/huggingface/datasets/issues/1990/events | https://github.com/huggingface/datasets/issues/1990 | 822,384,502 | MDU6SXNzdWU4MjIzODQ1MDI= | 1,990 | OSError: Memory mapping file failed: Cannot allocate memory | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,882,118,000 | 1,628,100,265,000 | 1,628,100,265,000 | NONE | null | null | null | Hi,
I am trying to run a script with a Wikipedia dataset; here is the command to reproduce the error. You can find the code for run_mlm.py in the huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py
```
python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.en --do_train --do_eval --output_dir /dara/test --max_seq_length 128
```
I am using transformers version 4.3.2.
But I get a memory error with this dataset. Is there a way I could save on memory with the datasets library when using the Wikipedia dataset?
In particular, I need to train a model on multiple Wikipedia datasets concatenated together. Thank you very much @lhoestq for your help and suggestions:
```
File "run_mlm.py", line 441, in <module>
main()
File "run_mlm.py", line 233, in main
split=f"train[{data_args.validation_split_percentage}%:]",
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/load.py", line 750, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 740, in as_dataset
map_tuple=True,
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/utils/py_utils.py", line 225, in map_nested
return function(data_struct)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 757, in _build_single_dataset
in_memory=in_memory,
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 829, in _as_dataset
in_memory=in_memory,
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 215, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 236, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 171, in _read_files
pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename
pa_table = ArrowReader.read_table(filename, in_memory=in_memory)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 322, in read_table
stream = stream_from(filename)
File "pyarrow/io.pxi", line 782, in pyarrow.lib.memory_map
File "pyarrow/io.pxi", line 743, in pyarrow.lib.MemoryMappedFile._open
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
OSError: Memory mapping file failed: Cannot allocate memory
```
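For what it's worth, this is roughly how I load the data; I would have expected `keep_in_memory=False` (a real parameter, visible in the traceback above as `in_memory=keep_in_memory`) to keep the dataset memory-mapped on disk, yet the `mmap` call itself fails — possibly because virtual memory is capped (e.g. via `ulimit -v`), since memory mapping needs address space even when no physical RAM is consumed. A sketch:
```python
from datasets import load_dataset

# memory-mapped loading: the Arrow file should stay on disk rather than being copied into RAM
wiki = load_dataset("wikipedia", "20200501.en", split="train", keep_in_memory=False)
```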
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1990/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1989/comments | https://api.github.com/repos/huggingface/datasets/issues/1989/events | https://github.com/huggingface/datasets/issues/1989 | 822,328,147 | MDU6SXNzdWU4MjIzMjgxNDc= | 1,989 | Question/problem with dataset labels | {
"login": "ioana-blue",
"id": 17202292,
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ioana-blue",
"html_url": "https://github.com/ioana-blue",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,877,613,000 | 1,615,455,855,000 | null | NONE | null | null | null | Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
```
File "../../../models/tr-4.3.2/run_puppets.py", line 523, in <module>
main()
File "../../../models/tr-4.3.2/run_puppets.py", line 249, in main
datasets = load_dataset("csv", data_files=data_files)
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 572, in download_and_prepare
self._download_and_prepare(
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 650, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 1028, in _prepare_split
writer.write_table(table)
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/arrow_writer.py", line 292, in write_table
pa_table = pa_table.cast(self._schema)
File "pyarrow/table.pxi", line 1311, in pyarrow.lib.Table.cast
File "pyarrow/table.pxi", line 265, in pyarrow.lib.ChunkedArray.cast
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/pyarrow/compute.py", line 87, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 298, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 192, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Failed to parse string: not nurse
```
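One workaround might be to declare the schema explicitly so pyarrow doesn't have to infer the column type from the values (a sketch; the column names are assumptions about the CSV header, and `data_files` is the same dict passed above):
```python
from datasets import load_dataset, Features, Value

# "text" and "label" are assumed column names — adjust to the actual header
features = Features({"text": Value("string"), "label": Value("string")})
datasets = load_dataset("csv", data_files=data_files, features=features)
```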
Any ideas how to fix this? For now, I'll probably make them numeric. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1989/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1988/comments | https://api.github.com/repos/huggingface/datasets/issues/1988/events | https://github.com/huggingface/datasets/issues/1988 | 822,324,605 | MDU6SXNzdWU4MjIzMjQ2MDU= | 1,988 | Readme.md is misleading about kinds of datasets? | {
"login": "surak",
"id": 878399,
"node_id": "MDQ6VXNlcjg3ODM5OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/878399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surak",
"html_url": "https://github.com/surak",
"followers_url": "https://api.github.com/users/surak/followers",
"following_url": "https://api.github.com/users/surak/following{/other_user}",
"gists_url": "https://api.github.com/users/surak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surak/subscriptions",
"organizations_url": "https://api.github.com/users/surak/orgs",
"repos_url": "https://api.github.com/users/surak/repos",
"events_url": "https://api.github.com/users/surak/events{/privacy}",
"received_events_url": "https://api.github.com/users/surak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,877,460,000 | 1,628,100,323,000 | 1,628,100,323,000 | NONE | null | null | null | Hi!
In the README.md, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text."
But here:
https://github.com/huggingface/datasets/blob/master/templates/new_dataset_script.py#L82-L117
You mention other kinds of datasets, with images and so on. I'm confused.
Is it possible to use it to store, say, imagenet locally? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1988/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1987/comments | https://api.github.com/repos/huggingface/datasets/issues/1987/events | https://github.com/huggingface/datasets/issues/1987 | 822,308,956 | MDU6SXNzdWU4MjIzMDg5NTY= | 1,987 | wmt15 is broken | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,876,385,000 | 1,614,876,385,000 | null | MEMBER | null | null | null | While testing the hotfix, I tried a random other wmt release and found wmt15 to be broken:
```
python -c 'from datasets import load_dataset; load_dataset("wmt15", "de-en")'
Downloading: 2.91kB [00:00, 818kB/s]
Downloading: 3.02kB [00:00, 897kB/s]
Downloading: 41.1kB [00:00, 19.1MB/s]
Downloading and preparing dataset wmt15/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt15/de-en/1.0.0/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 578, in download_and_prepare
self._download_and_prepare(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 634, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt15/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f/wmt_utils.py", line 757, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 283, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 191, in download
downloaded_path_or_paths = map_nested(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 203, in map_nested
mapped = [
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 204, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested
return function(data_struct)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 214, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 614, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/wmt/wmt15/resolve/main/training-parallel-nc-v10.tgz
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1987/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1986/comments | https://api.github.com/repos/huggingface/datasets/issues/1986/events | https://github.com/huggingface/datasets/issues/1986 | 822,176,290 | MDU6SXNzdWU4MjIxNzYyOTA= | 1,986 | wmt datasets fail to load | {
"login": "sabania",
"id": 32322564,
"node_id": "MDQ6VXNlcjMyMzIyNTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/32322564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sabania",
"html_url": "https://github.com/sabania",
"followers_url": "https://api.github.com/users/sabania/followers",
"following_url": "https://api.github.com/users/sabania/following{/other_user}",
"gists_url": "https://api.github.com/users/sabania/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sabania/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sabania/subscriptions",
"organizations_url": "https://api.github.com/users/sabania/orgs",
"repos_url": "https://api.github.com/users/sabania/repos",
"events_url": "https://api.github.com/users/sabania/events{/privacy}",
"received_events_url": "https://api.github.com/users/sabania/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,867,535,000 | 1,614,868,267,000 | 1,614,868,267,000 | NONE | null | null | null | ~\.cache\huggingface\modules\datasets_modules\datasets\wmt14\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\wmt_utils.py in _split_generators(self, dl_manager)
758 # Extract manually downloaded files.
759 manual_files = dl_manager.extract(manual_paths_dict)
--> 760 extraction_map = dict(downloaded_files, **manual_files)
761
762 for language in self.config.language_pair:
TypeError: type object argument after ** must be a mapping, not list | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1986/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1985/comments | https://api.github.com/repos/huggingface/datasets/issues/1985/events | https://github.com/huggingface/datasets/pull/1985 | 822,170,651 | MDExOlB1bGxSZXF1ZXN0NTg0ODM4NjIw | 1,985 | Optimize int precision | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,867,143,000 | 1,616,414,680,000 | 1,615,887,840,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1985",
"html_url": "https://github.com/huggingface/datasets/pull/1985",
"diff_url": "https://github.com/huggingface/datasets/pull/1985.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1985.patch",
"merged_at": 1615887840000
} | Optimize int precision to reduce dataset file size.
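The idea, roughly (a sketch of the principle with pyarrow, not the exact implementation in this PR):
```python
import pyarrow as pa

# values that fit in 32 bits don't need 64-bit storage
arr64 = pa.array([0, 1, 2], type=pa.int64())
arr32 = arr64.cast(pa.int32())
print(arr64.nbytes, arr32.nbytes)  # the int32 column takes half the bytes
```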
Close #1973, close #1825, close #861. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1985/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1985/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1984/comments | https://api.github.com/repos/huggingface/datasets/issues/1984/events | https://github.com/huggingface/datasets/issues/1984 | 821,816,588 | MDU6SXNzdWU4MjE4MTY1ODg= | 1,984 | Add tests for WMT datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,840,402,000 | 1,614,840,402,000 | null | MEMBER | null | null | null | As requested in #1981, we need tests for WMT datasets, using dummy data. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1984/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1984/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1983 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1983/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1983/comments | https://api.github.com/repos/huggingface/datasets/issues/1983/events | https://github.com/huggingface/datasets/issues/1983 | 821,746,008 | MDU6SXNzdWU4MjE3NDYwMDg= | 1,983 | The size of CoNLL-2003 is not consistent with the official release. | {
"login": "h-peng17",
"id": 39556019,
"node_id": "MDQ6VXNlcjM5NTU2MDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/39556019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h-peng17",
"html_url": "https://github.com/h-peng17",
"followers_url": "https://api.github.com/users/h-peng17/followers",
"following_url": "https://api.github.com/users/h-peng17/following{/other_user}",
"gists_url": "https://api.github.com/users/h-peng17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h-peng17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h-peng17/subscriptions",
"organizations_url": "https://api.github.com/users/h-peng17/orgs",
"repos_url": "https://api.github.com/users/h-peng17/repos",
"events_url": "https://api.github.com/users/h-peng17/events{/privacy}",
"received_events_url": "https://api.github.com/users/h-peng17/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,832,894,000 | 1,615,220,665,000 | null | NONE | null | null | null | Thanks for the dataset sharing! But when I use conll-2003, I meet some questions.
The statistics of CoNLL-2003 in this repo are:
\#train 14041 \#dev 3250 \#test 3453
While the official statistics are:
\#train 14987 \#dev 3466 \#test 3684
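For reference, this is roughly how I counted the examples (sketch):
```python
from datasets import load_dataset

conll = load_dataset("conll2003")
print({split: len(conll[split]) for split in conll})
# {'train': 14041, 'validation': 3250, 'test': 3453}
```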
Looking forward to your reply! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1983/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1982/comments | https://api.github.com/repos/huggingface/datasets/issues/1982/events | https://github.com/huggingface/datasets/pull/1982 | 821,448,791 | MDExOlB1bGxSZXF1ZXN0NTg0MjM2NzQ0 | 1,982 | Fix NestedDataStructure.data for empty dict | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,802,611,000 | 1,614,876,364,000 | 1,614,811,716,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1982",
"html_url": "https://github.com/huggingface/datasets/pull/1982",
"diff_url": "https://github.com/huggingface/datasets/pull/1982.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1982.patch",
"merged_at": 1614811716000
} | Fix #1981 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1982/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1982/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1981/comments | https://api.github.com/repos/huggingface/datasets/issues/1981/events | https://github.com/huggingface/datasets/issues/1981 | 821,411,109 | MDU6SXNzdWU4MjE0MTExMDk= | 1,981 | wmt datasets fail to load | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,799,299,000 | 1,614,867,407,000 | 1,614,811,716,000 | MEMBER | null | null | null | on master:
```
python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")'
Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 578, in download_and_prepare
self._download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 634, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt14/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e/wmt_utils.py", line 760, in _split_generators
extraction_map = dict(downloaded_files, **manual_files)
```
It worked fine recently. The same problem occurs if I try wmt16.
`git bisect` points to this commit from Feb 25 as the culprit: https://github.com/huggingface/datasets/commit/792f1d9bb1c5361908f73e2ef7f0181b2be409fa
@albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1981/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1981/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1980/comments | https://api.github.com/repos/huggingface/datasets/issues/1980/events | https://github.com/huggingface/datasets/pull/1980 | 821,312,810 | MDExOlB1bGxSZXF1ZXN0NTg0MTI1OTUy | 1,980 | Loading all answers from drop | {
"login": "KaijuML",
"id": 25499439,
"node_id": "MDQ6VXNlcjI1NDk5NDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/25499439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KaijuML",
"html_url": "https://github.com/KaijuML",
"followers_url": "https://api.github.com/users/KaijuML/followers",
"following_url": "https://api.github.com/users/KaijuML/following{/other_user}",
"gists_url": "https://api.github.com/users/KaijuML/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KaijuML/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaijuML/subscriptions",
"organizations_url": "https://api.github.com/users/KaijuML/orgs",
"repos_url": "https://api.github.com/users/KaijuML/repos",
"events_url": "https://api.github.com/users/KaijuML/events{/privacy}",
"received_events_url": "https://api.github.com/users/KaijuML/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,791,587,000 | 1,615,807,646,000 | 1,615,807,646,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1980",
"html_url": "https://github.com/huggingface/datasets/pull/1980",
"diff_url": "https://github.com/huggingface/datasets/pull/1980.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1980.patch",
"merged_at": 1615807646000
} | Hello all,
I propose this change to the DROP loading script so that all answers are loaded no matter their type. Currently, only "span" answers are loaded, which excludes a significant amount of answers from drop (i.e. "number" and "date").
I updated the script with the version I use for my work. However, I couldn't find a way to verify that all is working when integrated with the datasets repo, since the `load_dataset` method seems to always download the script from github and not local files.
Note that 9 items from the train set have no answers, as well as 1 from the validation set. The script I propose simply do not load them.
Let me know if there is anything else I can do,
Clément | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1980/timeline | null | true |