url
stringlengths 58
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 72
75
| comments_url
stringlengths 67
70
| events_url
stringlengths 65
68
| html_url
stringlengths 46
51
| id
int64 599M
2.12B
| node_id
stringlengths 18
32
| number
int64 1
6.65k
| title
stringlengths 1
290
| user
dict | labels
listlengths 0
4
| state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
listlengths 0
4
| milestone
dict | comments
int64 0
70
| created_at
unknown | updated_at
unknown | closed_at
unknown | author_association
stringclasses 3
values | active_lock_reason
float64 | draft
float64 0
1
⌀ | pull_request
dict | body
stringlengths 0
228k
⌀ | reactions
dict | timeline_url
stringlengths 67
70
| performed_via_github_app
float64 | state_reason
stringclasses 3
values | is_pull_request
bool 2
classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2139/comments | https://api.github.com/repos/huggingface/datasets/issues/2139/events | https://github.com/huggingface/datasets/issues/2139 | 843,662,613 | MDU6SXNzdWU4NDM2NjI2MTM= | 2,139 | TypeError when using save_to_disk in a dataset loaded with ReadInstruction split | {
"avatar_url": "https://avatars.githubusercontent.com/u/22480495?v=4",
"events_url": "https://api.github.com/users/PedroMLF/events{/privacy}",
"followers_url": "https://api.github.com/users/PedroMLF/followers",
"following_url": "https://api.github.com/users/PedroMLF/following{/other_user}",
"gists_url": "https://api.github.com/users/PedroMLF/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PedroMLF",
"id": 22480495,
"login": "PedroMLF",
"node_id": "MDQ6VXNlcjIyNDgwNDk1",
"organizations_url": "https://api.github.com/users/PedroMLF/orgs",
"received_events_url": "https://api.github.com/users/PedroMLF/received_events",
"repos_url": "https://api.github.com/users/PedroMLF/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PedroMLF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PedroMLF/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PedroMLF"
} | [] | closed | false | null | [] | null | 2 | "2021-03-29T18:23:54Z" | "2021-03-30T09:12:53Z" | "2021-03-30T09:12:53Z" | NONE | null | null | null | Hi,
Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`.
Here is the minimal reproducible example:
```python
from datasets import load_dataset
from datasets import ReadInstruction
data_1 = load_dataset(
"wikiann",
"en",
split="validation",
)
data_1.save_to_disk("temporary_path_1")
print("Save with regular split works.")
data_2 = load_dataset(
"wikiann",
"en",
split=ReadInstruction("validation", to=50, unit="%"),
)
data_2.save_to_disk("temporary_path_2")
```
and the corresponding output:
```
Reusing dataset wikiann (/xxxxx/.cache/huggingface/datasets/wikiann/en/1.1.0/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9)
Save with regular split works.
Reusing dataset wikiann (/xxxxx/.cache/huggingface/datasets/wikiann/en/1.1.0/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9)
Traceback (most recent call last):
File "bug.py", line 20, in <module>
data_2.save_to_disk("temporary_path_2")
File "/xxxxx/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 645, in save_to_disk
json.dump(state, state_file, indent=2, sort_keys=True)
File "/usr/lib/python3.7/json/__init__.py", line 179, in dump
for chunk in iterable:
File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/usr/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type ReadInstruction is not JSON serializable
```
Let me know if there is some misuse from my end.
Thanks in advance.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2139/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2139/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2138/comments | https://api.github.com/repos/huggingface/datasets/issues/2138/events | https://github.com/huggingface/datasets/pull/2138 | 843,508,402 | MDExOlB1bGxSZXF1ZXN0NjAyODc4NzU2 | 2,138 | Add CER metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/6931004?v=4",
"events_url": "https://api.github.com/users/chutaklee/events{/privacy}",
"followers_url": "https://api.github.com/users/chutaklee/followers",
"following_url": "https://api.github.com/users/chutaklee/following{/other_user}",
"gists_url": "https://api.github.com/users/chutaklee/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chutaklee",
"id": 6931004,
"login": "chutaklee",
"node_id": "MDQ6VXNlcjY5MzEwMDQ=",
"organizations_url": "https://api.github.com/users/chutaklee/orgs",
"received_events_url": "https://api.github.com/users/chutaklee/received_events",
"repos_url": "https://api.github.com/users/chutaklee/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chutaklee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chutaklee/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chutaklee"
} | [] | closed | false | null | [] | null | 0 | "2021-03-29T15:52:27Z" | "2021-04-06T16:16:11Z" | "2021-04-06T07:14:38Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2138.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2138",
"merged_at": "2021-04-06T07:14:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2138.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2138"
} | Add Character Error Rate (CER) metric that is used in evaluation in ASR. I also have written unittests (hopefully thorough enough) but I'm not sure how to integrate them into the existed codebase.
```python
from cer import CER
cer = CER()
class TestCER(unittest.TestCase):
def test_cer_case_senstive(self):
refs = ['White House']
preds = ['white house']
# S = 2, D = 0, I = 0, N = 11, CER = 2 / 11
char_error_rate = cer.compute(predictions=preds, references=refs)
self.assertTrue(abs(char_error_rate - 0.1818181818) < 1e-6)
def test_cer_whitespace(self):
refs = ['were wolf']
preds = ['werewolf']
# S = 0, D = 0, I = 1, N = 9, CER = 1 / 9
char_error_rate = cer.compute(predictions=preds, references=refs)
self.assertTrue(abs(char_error_rate - 0.1111111) < 1e-6)
refs = ['werewolf']
preds = ['weae wolf']
# S = 1, D = 1, I = 0, N = 8, CER = 0.25
char_error_rate = cer.compute(predictions=preds, references=refs)
self.assertTrue(abs(char_error_rate - 0.25) < 1e-6)
# consecutive whitespaces case 1
refs = ['were wolf']
preds = ['were wolf']
# S = 0, D = 0, I = 0, N = 9, CER = 0
char_error_rate = cer.compute(predictions=preds, references=refs)
self.assertTrue(abs(char_error_rate - 0.0) < 1e-6)
# consecutive whitespaces case 2
refs = ['were wolf']
preds = ['were wolf']
# S = 0, D = 0, I = 0, N = 9, CER = 0
char_error_rate = cer.compute(predictions=preds, references=refs)
self.assertTrue(abs(char_error_rate - 0.0) < 1e-6)
def test_cer_sub(self):
refs = ['werewolf']
preds = ['weaewolf']
# S = 1, D = 0, I = 0, N = 8, CER = 0.125
char_error_rate = cer.compute(predictions=preds, references=refs)
self.assertTrue(abs(char_error_rate - 0.125) < 1e-6)
def test_cer_del(self):
refs = ['werewolf']
preds = ['wereawolf']
# S = 0, D = 1, I = 0, N = 8, CER = 0.125
char_error_rate = cer.compute(predictions=preds, references=refs)
self.assertTrue(abs(char_error_rate - 0.125) < 1e-6)
def test_cer_insert(self):
refs = ['werewolf']
preds = ['wereolf']
# S = 0, D = 0, I = 1, N = 8, CER = 0.125
char_error_rate = cer.compute(predictions=preds, references=refs)
self.assertTrue(abs(char_error_rate - 0.125) < 1e-6)
def test_cer_equal(self):
refs = ['werewolf']
char_error_rate = cer.compute(predictions=refs, references=refs)
self.assertEqual(char_error_rate, 0.0)
def test_cer_list_of_seqs(self):
refs = ['werewolf', 'I am your father']
char_error_rate = cer.compute(predictions=refs, references=refs)
self.assertEqual(char_error_rate, 0.0)
refs = ['werewolf', 'I am your father', 'doge']
preds = ['werxwolf', 'I am your father', 'doge']
# S = 1, D = 0, I = 0, N = 28, CER = 1 / 28
char_error_rate = cer.compute(predictions=preds, references=refs)
self.assertTrue(abs(char_error_rate - 0.03571428) < 1e-6)
def test_cer_unicode(self):
ref = [u'我能吞下玻璃而不伤身体']
pred = [u' 能吞虾玻璃而 不霜身体啦']
# S = 3, D = 2, I = 0, N = 11
# CER = 5 / 11
char_error_rate = cer.compute(predictions=pred, references=ref)
self.assertTrue(abs(char_error_rate - 0.4545454545) < 1e-6)
ref = [u'我能吞', u'下玻璃而不伤身体']
pred = [u'我 能 吞 下 玻 璃', u'而不伤身体']
# S = 0, D = 5, I = 0, N = 11
# CER = 5 / 11
char_error_rate = cer.compute(predictions=pred, references=ref)
self.assertTrue(abs(char_error_rate - 0.454545454545) < 1e-6)
ref = [u'我能吞下玻璃而不伤身体']
char_error_rate = cer.compute(predictions=ref, references=ref)
self.assertFalse(char_error_rate, 0.0)
def test_cer_empty(self):
ref = ''
pred = 'Hypothesis'
with self.assertRaises(ValueError):
char_error_rate = cer.compute(predictions=pred, references=ref)
if __name__ == '__main__':
unittest.main()
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2138/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2138/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2137/comments | https://api.github.com/repos/huggingface/datasets/issues/2137/events | https://github.com/huggingface/datasets/pull/2137 | 843,502,835 | MDExOlB1bGxSZXF1ZXN0NjAyODc0MDYw | 2,137 | Fix missing infos from concurrent dataset loading | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-03-29T15:46:12Z" | "2021-03-31T10:35:56Z" | "2021-03-31T10:35:55Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2137.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2137",
"merged_at": "2021-03-31T10:35:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2137.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2137"
} | This should fix issue #2131
When calling `load_dataset` at the same time from 2 workers, one of the worker could have missing split infos when reloading the dataset from the cache.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2137/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2137/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2136/comments | https://api.github.com/repos/huggingface/datasets/issues/2136/events | https://github.com/huggingface/datasets/pull/2136 | 843,492,015 | MDExOlB1bGxSZXF1ZXN0NjAyODY0ODY5 | 2,136 | fix dialogue action slot name and value | {
"avatar_url": "https://avatars.githubusercontent.com/u/31605305?v=4",
"events_url": "https://api.github.com/users/adamlin120/events{/privacy}",
"followers_url": "https://api.github.com/users/adamlin120/followers",
"following_url": "https://api.github.com/users/adamlin120/following{/other_user}",
"gists_url": "https://api.github.com/users/adamlin120/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adamlin120",
"id": 31605305,
"login": "adamlin120",
"node_id": "MDQ6VXNlcjMxNjA1MzA1",
"organizations_url": "https://api.github.com/users/adamlin120/orgs",
"received_events_url": "https://api.github.com/users/adamlin120/received_events",
"repos_url": "https://api.github.com/users/adamlin120/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adamlin120/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adamlin120/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adamlin120"
} | [] | closed | false | null | [] | null | 0 | "2021-03-29T15:34:13Z" | "2021-03-31T12:48:02Z" | "2021-03-31T12:48:01Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2136.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2136",
"merged_at": "2021-03-31T12:48:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2136.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2136"
} | fix #2128 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2136/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2136/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2135/comments | https://api.github.com/repos/huggingface/datasets/issues/2135/events | https://github.com/huggingface/datasets/issues/2135 | 843,246,344 | MDU6SXNzdWU4NDMyNDYzNDQ= | 2,135 | en language data from MLQA dataset is missing | {
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehk",
"id": 6278280,
"login": "rabeehk",
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehk"
} | [] | closed | false | null | [] | null | 3 | "2021-03-29T10:47:50Z" | "2021-03-30T10:20:23Z" | "2021-03-30T10:20:23Z" | CONTRIBUTOR | null | null | null | Hi
I need mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. could you have a look please? @lhoestq thank you for your help to fix this issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2135/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2135/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2134/comments | https://api.github.com/repos/huggingface/datasets/issues/2134/events | https://github.com/huggingface/datasets/issues/2134 | 843,242,849 | MDU6SXNzdWU4NDMyNDI4NDk= | 2,134 | Saving large in-memory datasets with save_to_disk crashes because of pickling | {
"avatar_url": "https://avatars.githubusercontent.com/u/5815801?v=4",
"events_url": "https://api.github.com/users/prokopCerny/events{/privacy}",
"followers_url": "https://api.github.com/users/prokopCerny/followers",
"following_url": "https://api.github.com/users/prokopCerny/following{/other_user}",
"gists_url": "https://api.github.com/users/prokopCerny/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/prokopCerny",
"id": 5815801,
"login": "prokopCerny",
"node_id": "MDQ6VXNlcjU4MTU4MDE=",
"organizations_url": "https://api.github.com/users/prokopCerny/orgs",
"received_events_url": "https://api.github.com/users/prokopCerny/received_events",
"repos_url": "https://api.github.com/users/prokopCerny/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/prokopCerny/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prokopCerny/subscriptions",
"type": "User",
"url": "https://api.github.com/users/prokopCerny"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 6 | "2021-03-29T10:43:15Z" | "2021-05-03T17:59:21Z" | "2021-05-03T17:59:21Z" | NONE | null | null | null | Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so I decided to do these steps completely out of the datasets library.
So my workflow is to do several .map() on datasets object, then for the operation which is faster in memory to extract the necessary columns from the dataset and then drop it whole, do the transformation in memory, and then create a fresh Dataset object using .from_dict() or other method.
When I then try to call save_to_disk(path) on the dataset, it crashes because of pickling, which appears to be because of using old pickle protocol which doesn't support large files (over 4 GiB).
```
Traceback (most recent call last):
File "./tokenize_and_chunkify_in_memory.py", line 80, in <module>
main()
File "./tokenize_and_chunkify_in_memory.py", line 75, in main
tokenize_and_chunkify(config)
File "./tokenize_and_chunkify_in_memory.py", line 60, in tokenize_and_chunkify
contexts_dataset.save_to_disk(chunked_path)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 457, in save_to_disk
self = pickle.loads(pickle.dumps(self))
OverflowError: cannot serialize a bytes object larger than 4 GiB
```
From what I've seen this issue may be possibly fixed, as the line `self = pickle.loads(pickle.dumps(self))` does not appear to be present in the current state of the repository.
To save these datasets to disk, I've resorted to calling .map() over them with `function=None` and specifying the .arrow cache file, and then creating a new dataset using the .from_file() method, which I can then safely save to disk.
Additional issue when working with these large in-memory datasets is when using multiprocessing, is again to do with pickling. I've tried to speed up the mapping with function=None by specifying num_proc to the available cpu count, and I again get issues with transferring the dataset, with the following traceback. I am not sure if I should open a separate issue for that.
```
Traceback (most recent call last):
File "./tokenize_and_chunkify_in_memory.py", line 94, in <module>
main()
File "./tokenize_and_chunkify_in_memory.py", line 89, in main
tokenize_and_chunkify(config)
File "./tokenize_and_chunkify_in_memory.py", line 67, in tokenize_and_chunkify
contexts_dataset.map(function=None, cache_file_name=str(output_dir_path / "tmp.arrow"), writer_batch_size=50000, num_proc=config.threads)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in map
transformed_shards = [r.get() for r in results]
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in <listcomp>
transformed_shards = [r.get() for r in results]
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 657, in get
raise self._value
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 431, in _handle_tasks
put(task)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py", line 209, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump
StockPickler.dump(self, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 437, in dump
self.save(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 662, in save_reduce
save(state)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends
save(x)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends
save(x)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 732, in save_bytes
self._write_large_bytes(BINBYTES + pack("<I", n), obj)
struct.error: 'I' format requires 0 <= number <= 4294967295Traceback (most recent call last):
File "./tokenize_and_chunkify_in_memory.py", line 94, in <module>
main()
File "./tokenize_and_chunkify_in_memory.py", line 89, in main
tokenize_and_chunkify(config)
File "./tokenize_and_chunkify_in_memory.py", line 67, in tokenize_and_chunkify
contexts_dataset.map(function=None, cache_file_name=str(output_dir_path / "tmp.arrow"), writer_batch_size=50000, num_proc=config.threads)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in map
transformed_shards = [r.get() for r in results]
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in <listcomp>
transformed_shards = [r.get() for r in results]
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 657, in get
raise self._value
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 431, in _handle_tasks
put(task)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py", line 209, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump
StockPickler.dump(self, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 437, in dump
self.save(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 662, in save_reduce
save(state)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends
save(x)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends
save(tmp[0])
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list
self._batch_appends(obj)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends
save(x)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save
self.save_reduce(obj=obj, *rv)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple
save(element)
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 732, in save_bytes
self._write_large_bytes(BINBYTES + pack("<I", n), obj)
struct.error: 'I' format requires 0 <= number <= 4294967295
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2134/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2134/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2133/comments | https://api.github.com/repos/huggingface/datasets/issues/2133/events | https://github.com/huggingface/datasets/issues/2133 | 843,149,680 | MDU6SXNzdWU4NDMxNDk2ODA= | 2,133 | bug in mlqa dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | 3 | "2021-03-29T09:03:09Z" | "2021-03-30T17:40:57Z" | "2021-03-30T17:40:57Z" | NONE | null | null | null | Hi
Looking into MLQA dataset for langauge "ar":
```
"question": [
"\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?",
"\u0643\u0645 \u0645\u0631\u0629 \u064a\u062a\u0645 \u0646\u0634\u0631\u0647\u0627 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0645\u0627 \u0647\u064a \u0627\u0644\u0648\u0631\u0642\u0629 \u0627\u0644\u064a\u0648\u0645\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0643\u0645 \u0639\u062f\u062f \u0627\u0644\u0627\u0648\u0631\u0627\u0642 \u0627\u0644\u0627\u062e\u0628\u0627\u0631\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0627\u0644\u062a\u064a \u0648\u062c\u062f\u062a \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0641\u064a \u0627\u064a \u0633\u0646\u0629 \u0628\u062f\u0627\u062a \u0648\u0631\u0642\u0629 \u0627\u0644\u0637\u0627\u0644\u0628 \u0627\u0644\u062d\u0633 \u0627\u0644\u0633\u0644\u064a\u0645 \u0628\u0627\u0644\u0646\u0634\u0631 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?"
]
```
the questions are in the wrong format, and not readable, could you please have a look? thanks @lhoestq
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2133/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2133/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2132/comments | https://api.github.com/repos/huggingface/datasets/issues/2132/events | https://github.com/huggingface/datasets/issues/2132 | 843,142,822 | MDU6SXNzdWU4NDMxNDI4MjI= | 2,132 | TydiQA dataset is mixed and is not split per language | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | open | false | null | [] | null | 3 | "2021-03-29T08:56:21Z" | "2021-04-04T09:57:15Z" | null | NONE | null | null | null | Hi @lhoestq
Currently TydiQA is mixed and user can only access the whole training set of all languages:
https://www.tensorflow.org/datasets/catalog/tydi_qa
for using this dataset, one need to train/evaluate in each separate language, and having them mixed, makes it hard to use this dataset. This is much convenient for user to have them split and I appreciate your help on this.
Meanwhile, till hopefully this is split per language, I greatly appreciate telling me how I can preprocess and get data per language. thanks a lot | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2132/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2132/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2131/comments | https://api.github.com/repos/huggingface/datasets/issues/2131/events | https://github.com/huggingface/datasets/issues/2131 | 843,133,112 | MDU6SXNzdWU4NDMxMzMxMTI= | 2,131 | When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object | {
"avatar_url": "https://avatars.githubusercontent.com/u/23011317?v=4",
"events_url": "https://api.github.com/users/andy-yangz/events{/privacy}",
"followers_url": "https://api.github.com/users/andy-yangz/followers",
"following_url": "https://api.github.com/users/andy-yangz/following{/other_user}",
"gists_url": "https://api.github.com/users/andy-yangz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andy-yangz",
"id": 23011317,
"login": "andy-yangz",
"node_id": "MDQ6VXNlcjIzMDExMzE3",
"organizations_url": "https://api.github.com/users/andy-yangz/orgs",
"received_events_url": "https://api.github.com/users/andy-yangz/received_events",
"repos_url": "https://api.github.com/users/andy-yangz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andy-yangz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andy-yangz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andy-yangz"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | 3 | "2021-03-29T08:45:58Z" | "2021-04-10T11:08:55Z" | "2021-04-10T11:08:55Z" | NONE | null | null | null | version: 1.5.0
met a very strange error, I am training large scale language model, and need train on 2 machines(workers).
And sometimes I will get this error `TypeError: 'NoneType' object is not iterable`
This is traceback
```
71 | | Traceback (most recent call last):
-- | -- | --
72 | | File "run_gpt.py", line 316, in <module>
73 | | main()
74 | | File "run_gpt.py", line 222, in main
75 | | delimiter="\t", column_names=["input_ids", "attention_mask", "chinese_ref"])
76 | | File "/data/miniconda3/lib/python3.7/site-packages/datasets/load.py", line 747, in load_dataset
77 | | use_auth_token=use_auth_token,
78 | | File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 513, in download_and_prepare
79 | | self.download_post_processing_resources(dl_manager)
80 | | File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 673, in download_post_processing_resources
81 | | for split in self.info.splits:
82 | | TypeError: 'NoneType' object is not iterable
83 | | WARNING:datasets.builder:Reusing dataset csv (/usr/local/app/.cache/huggingface/datasets/csv/default-1c257ebd48e225e7/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2)
84 | | Traceback (most recent call last):
85 | | File "/data/miniconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
86 | | "__main__", mod_spec)
87 | | File "/data/miniconda3/lib/python3.7/runpy.py", line 85, in _run_code
88 | | exec(code, run_globals)
89 | | File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in <module>
90 | | main()
91 | | File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 326, in main
92 | | sigkill_handler(signal.SIGTERM, None) # not coming back
93 | | File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
94 | | raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
```
On worker 1 it loads the dataset well, however on worker 2 will get this error.
And I will meet this error from time to time, sometimes it just goes well. | {
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2131/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2131/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2130/comments | https://api.github.com/repos/huggingface/datasets/issues/2130/events | https://github.com/huggingface/datasets/issues/2130 | 843,111,936 | MDU6SXNzdWU4NDMxMTE5MzY= | 2,130 | wikiann dataset is missing columns | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | 5 | "2021-03-29T08:23:00Z" | "2021-08-27T14:44:18Z" | "2021-08-27T14:44:18Z" | NONE | null | null | null | Hi
Wikiann dataset needs to have "spans" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2130/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2130/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2129/comments | https://api.github.com/repos/huggingface/datasets/issues/2129/events | https://github.com/huggingface/datasets/issues/2129 | 843,033,656 | MDU6SXNzdWU4NDMwMzM2NTY= | 2,129 | How to train BERT model with next sentence prediction? | {
"avatar_url": "https://avatars.githubusercontent.com/u/836541?v=4",
"events_url": "https://api.github.com/users/jnishi/events{/privacy}",
"followers_url": "https://api.github.com/users/jnishi/followers",
"following_url": "https://api.github.com/users/jnishi/following{/other_user}",
"gists_url": "https://api.github.com/users/jnishi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jnishi",
"id": 836541,
"login": "jnishi",
"node_id": "MDQ6VXNlcjgzNjU0MQ==",
"organizations_url": "https://api.github.com/users/jnishi/orgs",
"received_events_url": "https://api.github.com/users/jnishi/received_events",
"repos_url": "https://api.github.com/users/jnishi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jnishi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jnishi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jnishi"
} | [] | closed | false | null | [] | null | 4 | "2021-03-29T06:48:03Z" | "2021-04-01T04:58:40Z" | "2021-04-01T04:58:40Z" | NONE | null | null | null | Hello.
I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction
like ` TextDatasetForNextSentencePrediction` of `huggingface/transformers` ?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2129/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2129/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2128/comments | https://api.github.com/repos/huggingface/datasets/issues/2128/events | https://github.com/huggingface/datasets/issues/2128 | 843,023,910 | MDU6SXNzdWU4NDMwMjM5MTA= | 2,128 | Dialogue action slot name and value are reversed in MultiWoZ 2.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/31605305?v=4",
"events_url": "https://api.github.com/users/adamlin120/events{/privacy}",
"followers_url": "https://api.github.com/users/adamlin120/followers",
"following_url": "https://api.github.com/users/adamlin120/following{/other_user}",
"gists_url": "https://api.github.com/users/adamlin120/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adamlin120",
"id": 31605305,
"login": "adamlin120",
"node_id": "MDQ6VXNlcjMxNjA1MzA1",
"organizations_url": "https://api.github.com/users/adamlin120/orgs",
"received_events_url": "https://api.github.com/users/adamlin120/received_events",
"repos_url": "https://api.github.com/users/adamlin120/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adamlin120/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adamlin120/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adamlin120"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 1 | "2021-03-29T06:34:02Z" | "2021-03-31T12:48:01Z" | "2021-03-31T12:48:01Z" | CONTRIBUTOR | null | null | null | Hi @yjernite, thank you for adding MultiWoZ 2.2 in the huggingface datasets platform. It is beneficial!
I spot an error that the order of Dialogue action slot names and values are reversed.
https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.py#L251-L262 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2128/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2128/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2127/comments | https://api.github.com/repos/huggingface/datasets/issues/2127/events | https://github.com/huggingface/datasets/pull/2127 | 843,017,199 | MDExOlB1bGxSZXF1ZXN0NjAyNDYxMzc3 | 2,127 | make documentation more clear to use different cloud storage | {
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/philschmid",
"id": 32632186,
"login": "philschmid",
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"repos_url": "https://api.github.com/users/philschmid/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"type": "User",
"url": "https://api.github.com/users/philschmid"
} | [] | closed | false | null | [] | null | 0 | "2021-03-29T06:24:06Z" | "2021-03-29T12:16:24Z" | "2021-03-29T12:16:24Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2127.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2127",
"merged_at": "2021-03-29T12:16:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2127.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2127"
} | This PR extends the cloud storage documentation. To show you can use a different `fsspec` implementation. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2127/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2127/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2126/comments | https://api.github.com/repos/huggingface/datasets/issues/2126/events | https://github.com/huggingface/datasets/pull/2126 | 842,779,966 | MDExOlB1bGxSZXF1ZXN0NjAyMjcyMjg4 | 2,126 | Replace legacy torch.Tensor constructor with torch.tensor | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 0 | "2021-03-28T16:57:30Z" | "2021-03-29T09:27:14Z" | "2021-03-29T09:27:13Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2126.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2126",
"merged_at": "2021-03-29T09:27:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2126.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2126"
} | The title says it all (motivated by [this issue](https://github.com/pytorch/pytorch/issues/53146) in the pytorch repo). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2126/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2126/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2125/comments | https://api.github.com/repos/huggingface/datasets/issues/2125/events | https://github.com/huggingface/datasets/issues/2125 | 842,690,570 | MDU6SXNzdWU4NDI2OTA1NzA= | 2,125 | Is dataset timit_asr broken? | {
"avatar_url": "https://avatars.githubusercontent.com/u/42398050?v=4",
"events_url": "https://api.github.com/users/kosuke-kitahara/events{/privacy}",
"followers_url": "https://api.github.com/users/kosuke-kitahara/followers",
"following_url": "https://api.github.com/users/kosuke-kitahara/following{/other_user}",
"gists_url": "https://api.github.com/users/kosuke-kitahara/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kosuke-kitahara",
"id": 42398050,
"login": "kosuke-kitahara",
"node_id": "MDQ6VXNlcjQyMzk4MDUw",
"organizations_url": "https://api.github.com/users/kosuke-kitahara/orgs",
"received_events_url": "https://api.github.com/users/kosuke-kitahara/received_events",
"repos_url": "https://api.github.com/users/kosuke-kitahara/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kosuke-kitahara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kosuke-kitahara/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kosuke-kitahara"
} | [] | closed | false | null | [] | null | 2 | "2021-03-28T08:30:18Z" | "2021-03-28T12:29:25Z" | "2021-03-28T12:29:25Z" | NONE | null | null | null | Using `timit_asr` dataset, I saw all records are the same.
``` python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
display(HTML(df.to_html()))
show_random_elements(timit['train'].remove_columns(["file", "phonetic_detail", "word_detail", "dialect_region", "id",
"sentence_type", "speaker_id"]), num_examples=20)
```
`output`
<img width="312" alt="Screen Shot 2021-03-28 at 17 29 04" src="https://user-images.githubusercontent.com/42398050/112746646-21acee80-8feb-11eb-84f3-dbb5d4269724.png">
I double-checked it [here](https://huggingface.co/datasets/viewer/), and met the same problem.
<img width="1374" alt="Screen Shot 2021-03-28 at 17 32 07" src="https://user-images.githubusercontent.com/42398050/112746698-9bdd7300-8feb-11eb-97ed-5babead385f4.png">
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2125/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2125/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2124/comments | https://api.github.com/repos/huggingface/datasets/issues/2124/events | https://github.com/huggingface/datasets/issues/2124 | 842,627,729 | MDU6SXNzdWU4NDI2Mjc3Mjk= | 2,124 | Adding ScaNN library to do MIPS? | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
} | [] | open | false | null | [] | null | 1 | "2021-03-28T00:07:00Z" | "2021-03-29T13:23:43Z" | null | NONE | null | null | null | @lhoestq Hi I am thinking of adding this new google library to do the MIPS similar to **add_faiss_idex**. As the paper suggests, it is really fast when it comes to retrieving the nearest neighbors.
https://github.com/google-research/google-research/tree/master/scann

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2124/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2124/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2123/comments | https://api.github.com/repos/huggingface/datasets/issues/2123/events | https://github.com/huggingface/datasets/issues/2123 | 842,577,285 | MDU6SXNzdWU4NDI1NzcyODU= | 2,123 | Problem downloading GEM wiki_auto_asset_turk dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29705940?v=4",
"events_url": "https://api.github.com/users/mille-s/events{/privacy}",
"followers_url": "https://api.github.com/users/mille-s/followers",
"following_url": "https://api.github.com/users/mille-s/following{/other_user}",
"gists_url": "https://api.github.com/users/mille-s/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mille-s",
"id": 29705940,
"login": "mille-s",
"node_id": "MDQ6VXNlcjI5NzA1OTQw",
"organizations_url": "https://api.github.com/users/mille-s/orgs",
"received_events_url": "https://api.github.com/users/mille-s/received_events",
"repos_url": "https://api.github.com/users/mille-s/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mille-s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mille-s/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mille-s"
} | [] | closed | false | null | [] | null | 5 | "2021-03-27T18:41:28Z" | "2021-05-12T16:15:18Z" | "2021-05-12T16:15:17Z" | NONE | null | null | null | @yjernite
### Summary
I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code.
### Steps to reproduce
Code snippet:
from datasets import load_dataset
#dataset = load_dataset('gem', 'web_nlg_en')
dataset = load_dataset('gem', 'wiki_auto_asset_turk')
```
**Expected behavior:**
I expect the dataset to start downloading (download bar appears and progresses toward 100%)
**Actual behavior:**
Instead of seeing the download bar appearing, nothing happens; the following appears in the console as expected, but nothing more:
Downloading: 36.6kB [00:00, 37.2MB/s]
Downloading: 41.7kB [00:00, ?B/s]
Downloading and preparing dataset gem/wiki_auto_asset_turk (download: 121.37 MiB, generated: 145.69 MiB, post-processed: Unknown size, total: 267.07 MiB) to C:\Users\sfmil\.cache\huggingface\datasets\gem\wiki_auto_asset_turk\1.0.0\f252756d7f1b8f019aac71a1623b2950acfe10d25d956668ac4eae4e93c58b8d...
### Is this a regression?
No, it was the first time I was trying to download this dataset (same for the other ones).
### Debug info
- Python version: Python 3.8.2
- OS version: Windows 10 Family | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2123/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2123/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2122/comments | https://api.github.com/repos/huggingface/datasets/issues/2122/events | https://github.com/huggingface/datasets/pull/2122 | 842,194,588 | MDExOlB1bGxSZXF1ZXN0NjAxODE3MjI0 | 2,122 | Fast table queries with interpolation search | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-03-26T18:09:20Z" | "2021-08-04T18:11:59Z" | "2021-04-06T14:33:01Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2122.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2122",
"merged_at": "2021-04-06T14:33:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2122.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2122"
} | ## Intro
This should fix issue #1803
Currently querying examples in a dataset is O(n) because of the underlying pyarrow ChunkedArrays implementation.
To fix this I implemented interpolation search that is pretty effective since datasets usually verifies the condition of evenly distributed chunks (the default chunk size is fixed).
## Benchmark
Here is a [benchmark](https://pastebin.com/utEXUqsR) I did on bookcorpus (74M rows):
for the current implementation
```python
>>> python speed.py
Loaded dataset 'bookcorpus', len=74004228, nbytes=4835358766
========================= Querying unshuffled bookcorpus =========================
Avg access time key=1 : 0.018ms
Avg access time key=74004227 : 0.215ms
Avg access time key=range(74003204, 74004228) : 1.416ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 92.532ms
========================== Querying shuffled bookcorpus ==========================
Avg access time key=1 : 0.187ms
Avg access time key=74004227 : 6.642ms
Avg access time key=range(74003204, 74004228) : 90.941ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 3448.456ms
```
for the new one using interpolation search:
```python
>>> python speed.py
Loaded dataset 'bookcorpus', len=74004228, nbytes=4835358766
========================= Querying unshuffled bookcorpus =========================
Avg access time key=1 : 0.076ms
Avg access time key=74004227 : 0.056ms
Avg access time key=range(74003204, 74004228) : 1.807ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 24.028ms
========================== Querying shuffled bookcorpus ==========================
Avg access time key=1 : 0.061ms
Avg access time key=74004227 : 0.058ms
Avg access time key=range(74003204, 74004228) : 22.166ms
Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 42.757ms
```
The RandIter class is just an iterable of 1024 random indices from 0 to 74004228.
Here is also a plot showing the speed improvement depending on the dataset size:

## Implementation details:
- `datasets.table.Table` objects implement interpolation search for the `slice` method
- The interpolation search requires to store the offsets of all the chunks of a table. The offsets are stored when the `Table` is initialized.
- `datasets.table.Table.slice` returns a `datasets.table.Table` using interpolation search
- `datasets.table.Table.fast_slice` returns a `pyarrow.Table` object using interpolation search. This is useful to get a part of a dataset if we don't need the indexing structure for future computations. For example it's used when querying an example as a dictionary.
- Now a `Dataset` object is always backed by a `datasets.table.Table` object. If one passes a `pyarrow.Table` to initialize a `Dataset`, then it's converted to a `datasets.table.Table`
## Checklist:
- [x] implement interpolation search
- [x] use `datasets.table.Table` in `Dataset` objects
- [x] update current tests
- [x] add tests for interpolation search
- [x] comments and docstring
- [x] add the benchmark to the CI
Fix #1803. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2122/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2122/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2121/comments | https://api.github.com/repos/huggingface/datasets/issues/2121/events | https://github.com/huggingface/datasets/pull/2121 | 842,148,633 | MDExOlB1bGxSZXF1ZXN0NjAxNzc4NDc4 | 2,121 | Add Validation For README | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | 7 | "2021-03-26T17:02:17Z" | "2021-05-10T13:17:18Z" | "2021-05-10T09:41:41Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2121.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2121",
"merged_at": "2021-05-10T09:41:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2121.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2121"
} | Hi @lhoestq, @yjernite
This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2121/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2121/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2120/comments | https://api.github.com/repos/huggingface/datasets/issues/2120/events | https://github.com/huggingface/datasets/issues/2120 | 841,954,521 | MDU6SXNzdWU4NDE5NTQ1MjE= | 2,120 | dataset viewer does not work anymore | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | 2 | "2021-03-26T13:22:13Z" | "2021-03-26T15:52:22Z" | "2021-03-26T15:52:22Z" | NONE | null | null | null | Hi
I normally use this link to see all datasets and how I can load them
https://huggingface.co/datasets/viewer/
Now I am getting
502 Bad Gateway
nginx/1.18.0 (Ubuntu)
could you bring this webpage back ? this was very helpful @lhoestq
thanks for your help | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2120/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2120/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2119/comments | https://api.github.com/repos/huggingface/datasets/issues/2119/events | https://github.com/huggingface/datasets/pull/2119 | 841,567,199 | MDExOlB1bGxSZXF1ZXN0NjAxMjg2MjIy | 2,119 | copy.deepcopy os.environ instead of copy | {
"avatar_url": "https://avatars.githubusercontent.com/u/5506053?v=4",
"events_url": "https://api.github.com/users/NihalHarish/events{/privacy}",
"followers_url": "https://api.github.com/users/NihalHarish/followers",
"following_url": "https://api.github.com/users/NihalHarish/following{/other_user}",
"gists_url": "https://api.github.com/users/NihalHarish/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NihalHarish",
"id": 5506053,
"login": "NihalHarish",
"node_id": "MDQ6VXNlcjU1MDYwNTM=",
"organizations_url": "https://api.github.com/users/NihalHarish/orgs",
"received_events_url": "https://api.github.com/users/NihalHarish/received_events",
"repos_url": "https://api.github.com/users/NihalHarish/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NihalHarish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NihalHarish/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NihalHarish"
} | [] | closed | false | null | [] | null | 0 | "2021-03-26T03:58:38Z" | "2021-03-26T15:13:52Z" | "2021-03-26T15:13:52Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2119.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2119",
"merged_at": "2021-03-26T15:13:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2119.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2119"
} | Fixes: https://github.com/huggingface/datasets/issues/2115
- bug fix: using envrion.copy() returns a dict.
- using deepcopy(environ) returns an `_environ` object
- Changing the datatype of the _environ object can break code, if subsequent libraries perform operations using apis exclusive to the environ object, like `environ.getenv()` for example.
Testing:
Tested the change on my terminal:
```
>>> import os
>>> x = deepcopy(os.environ)
>>> y = os.environ
>>> x is y
False
>>> isinstance(x, type(os.environ))
True
>>> z = os.environ.copy()
>>> isinstance(z, type(os.environ))
False
>>> isinstance(z, dict)
True
``` | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2119/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2119/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2118/comments | https://api.github.com/repos/huggingface/datasets/issues/2118/events | https://github.com/huggingface/datasets/pull/2118 | 841,563,329 | MDExOlB1bGxSZXF1ZXN0NjAxMjgzMDUx | 2,118 | Remove os.environ.copy in Dataset.map | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 1 | "2021-03-26T03:48:17Z" | "2021-03-26T12:03:23Z" | "2021-03-26T12:00:05Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2118.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2118",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2118.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2118"
} | Replace `os.environ.copy` with in-place modification
Fixes #2115 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2118/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2118/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2117/comments | https://api.github.com/repos/huggingface/datasets/issues/2117/events | https://github.com/huggingface/datasets/issues/2117 | 841,535,283 | MDU6SXNzdWU4NDE1MzUyODM= | 2,117 | load_metric from local "glue.py" meet error 'NoneType' object is not callable | {
"avatar_url": "https://avatars.githubusercontent.com/u/54012361?v=4",
"events_url": "https://api.github.com/users/Frankie123421/events{/privacy}",
"followers_url": "https://api.github.com/users/Frankie123421/followers",
"following_url": "https://api.github.com/users/Frankie123421/following{/other_user}",
"gists_url": "https://api.github.com/users/Frankie123421/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Frankie123421",
"id": 54012361,
"login": "Frankie123421",
"node_id": "MDQ6VXNlcjU0MDEyMzYx",
"organizations_url": "https://api.github.com/users/Frankie123421/orgs",
"received_events_url": "https://api.github.com/users/Frankie123421/received_events",
"repos_url": "https://api.github.com/users/Frankie123421/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Frankie123421/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Frankie123421/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Frankie123421"
} | [] | closed | false | null | [] | null | 3 | "2021-03-26T02:35:22Z" | "2021-08-25T21:44:05Z" | "2021-03-26T02:40:26Z" | NONE | null | null | null | actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset(path='/home/glue.py', name=actual_task)
metric = load_metric(path='/home/glue.py', name=actual_task)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-7ab77a465d81> in <module>
1 actual_task = "mnli" if task == "mnli-mm" else task
2 dataset = load_dataset(path='/home/jcli/glue.py', name=actual_task)
----> 3 metric = load_metric(path='/home/jcli/glue.py', name=actual_task)
~/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs)
508 keep_in_memory=keep_in_memory,
509 experiment_id=experiment_id,
--> 510 **metric_init_kwargs,
511 )
512
TypeError: 'NoneType' object is not callable
Please help | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2117/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2117/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2116/comments | https://api.github.com/repos/huggingface/datasets/issues/2116/events | https://github.com/huggingface/datasets/issues/2116 | 841,481,292 | MDU6SXNzdWU4NDE0ODEyOTI= | 2,116 | Creating custom dataset results in error while calling the map() function | {
"avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4",
"events_url": "https://api.github.com/users/GeetDsa/events{/privacy}",
"followers_url": "https://api.github.com/users/GeetDsa/followers",
"following_url": "https://api.github.com/users/GeetDsa/following{/other_user}",
"gists_url": "https://api.github.com/users/GeetDsa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/GeetDsa",
"id": 13940397,
"login": "GeetDsa",
"node_id": "MDQ6VXNlcjEzOTQwMzk3",
"organizations_url": "https://api.github.com/users/GeetDsa/orgs",
"received_events_url": "https://api.github.com/users/GeetDsa/received_events",
"repos_url": "https://api.github.com/users/GeetDsa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/GeetDsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeetDsa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/GeetDsa"
} | [] | closed | false | null | [] | null | 1 | "2021-03-26T00:37:46Z" | "2021-03-31T14:30:32Z" | "2021-03-31T14:30:32Z" | NONE | null | null | null | calling `map()` of `datasets` library results into an error while defining a Custom dataset.
Reproducible example:
```
import datasets
class MyDataset(datasets.Dataset):
def __init__(self, sentences):
"Initialization"
self.samples = sentences
def __len__(self):
"Denotes the total number of samples"
return len(self.samples)
def __getitem__(self, index):
"Generates one sample of data"
# Select sample
# Load data and get label
samples = self.samples[index]
return samples
def preprocess_function_train(examples):
inputs = examples
labels = [example+tokenizer.eos_token for example in examples ]
inputs = tokenizer(inputs, max_length=30, padding=True, truncation=True)
labels = tokenizer(labels, max_length=30, padding=True, truncation=True)
model_inputs = inputs
model_inputs["labels"] = labels["input_ids"]
print("about to return")
return model_inputs
##train["sentence"] is dataframe column
train_dataset = MyDataset(train['sentence'].values.tolist())
train_dataset = train_dataset.map(
preprocess_function,
batched = True,
batch_size=32
)
```
Stack trace of error:
```
Traceback (most recent call last):
File "dir/train_generate.py", line 362, in <module>
main()
File "dir/train_generate.py", line 245, in main
train_dataset = train_dataset.map(
File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1244, in map
return self._map_single(
File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 149, in wrapper
unformatted_columns = set(self.column_names) - set(self._format_columns or [])
File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 526, in column_names
return self._data.column_names
AttributeError: 'MyDataset' object has no attribute '_data'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2116/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2116/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2115/comments | https://api.github.com/repos/huggingface/datasets/issues/2115/events | https://github.com/huggingface/datasets/issues/2115 | 841,283,974 | MDU6SXNzdWU4NDEyODM5NzQ= | 2,115 | The datasets.map() implementation modifies the datatype of os.environ object | {
"avatar_url": "https://avatars.githubusercontent.com/u/19983848?v=4",
"events_url": "https://api.github.com/users/leleamol/events{/privacy}",
"followers_url": "https://api.github.com/users/leleamol/followers",
"following_url": "https://api.github.com/users/leleamol/following{/other_user}",
"gists_url": "https://api.github.com/users/leleamol/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leleamol",
"id": 19983848,
"login": "leleamol",
"node_id": "MDQ6VXNlcjE5OTgzODQ4",
"organizations_url": "https://api.github.com/users/leleamol/orgs",
"received_events_url": "https://api.github.com/users/leleamol/received_events",
"repos_url": "https://api.github.com/users/leleamol/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leleamol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leleamol/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leleamol"
} | [] | closed | false | null | [] | null | 0 | "2021-03-25T20:29:19Z" | "2021-03-26T15:13:52Z" | "2021-03-26T15:13:52Z" | NONE | null | null | null | In our testing, we noticed that the datasets.map() implementation is modifying the datatype of python os.environ object from '_Environ' to 'dict'.
This causes following function calls to fail as follows:
`
x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None)
TypeError: get() takes no keyword arguments
`
It looks like the following line in datasets.map implementation introduced this functionality.
https://github.com/huggingface/datasets/blob/0cb1ac06acb0df44a1cf4128d03a01865faa2504/src/datasets/arrow_dataset.py#L1421
Here is the test script to reproduce this error.
```
from datasets import load_dataset
from transformers import AutoTokenizer
import os
def test_train():
model_checkpoint = "distilgpt2"
datasets = load_dataset('wikitext', 'wikitext-2-raw-v1')
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)
tokenizer.pad_token = tokenizer.eos_token
def tokenize_function(examples):
y = tokenizer(examples['text'], truncation=True, max_length=64)
return y
x = os.environ.get("TEST_ENV_VARIABLE_BEFORE_dataset_map", default=None)
print(f"Testing environment variable: TEST_ENV_VARIABLE_BEFORE_dataset_map {x}")
print(f"Data type of os.environ before datasets.map = {os.environ.__class__.__name__}")
datasets.map(tokenize_function, batched=True, num_proc=2, remove_columns=["text"])
print(f"Data type of os.environ after datasets.map = {os.environ.__class__.__name__}")
x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None)
print(f"Testing environment variable: TEST_ENV_VARIABLE_AFTER_dataset_map {x}")
if __name__ == "__main__":
test_train()
```
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2115/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2115/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2114/comments | https://api.github.com/repos/huggingface/datasets/issues/2114/events | https://github.com/huggingface/datasets/pull/2114 | 841,207,878 | MDExOlB1bGxSZXF1ZXN0NjAwOTc1MTA3 | 2,114 | Support for legal NLP datasets (EURLEX, ECtHR cases and EU-REG-IR) | {
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iliaschalkidis",
"id": 1626984,
"login": "iliaschalkidis",
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iliaschalkidis"
} | [] | closed | false | null | [] | null | 2 | "2021-03-25T18:40:17Z" | "2021-03-31T10:38:50Z" | "2021-03-31T10:38:50Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2114.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2114",
"merged_at": "2021-03-31T10:38:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2114.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2114"
} | Add support for two legal NLP datasets:
- EURLEX (https://www.aclweb.org/anthology/P19-1636/)
- ECtHR cases (https://arxiv.org/abs/2103.13084)
- EU-REG-IR (https://arxiv.org/abs/2101.10726) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2114/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2114/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2113/comments | https://api.github.com/repos/huggingface/datasets/issues/2113/events | https://github.com/huggingface/datasets/pull/2113 | 841,191,303 | MDExOlB1bGxSZXF1ZXN0NjAwOTYxMDEz | 2,113 | Implement Dataset as context manager | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 0 | "2021-03-25T18:18:30Z" | "2021-03-31T11:30:14Z" | "2021-03-31T08:30:11Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2113.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2113",
"merged_at": "2021-03-31T08:30:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2113.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2113"
} | When used as context manager, it would be safely deleted if some exception is raised.
This will avoid
> During handling of the above exception, another exception occurred: | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2113/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2113/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2112/comments | https://api.github.com/repos/huggingface/datasets/issues/2112/events | https://github.com/huggingface/datasets/pull/2112 | 841,098,008 | MDExOlB1bGxSZXF1ZXN0NjAwODgyMjA0 | 2,112 | Support for legal NLP datasets (EURLEX and ECtHR cases) | {
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iliaschalkidis",
"id": 1626984,
"login": "iliaschalkidis",
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iliaschalkidis"
} | [] | closed | false | null | [] | null | 0 | "2021-03-25T16:24:17Z" | "2021-03-25T18:39:31Z" | "2021-03-25T18:34:31Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2112.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2112",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2112.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2112"
} | Add support for two legal NLP datasets:
- EURLEX (https://www.aclweb.org/anthology/P19-1636/)
- ECtHR cases (https://arxiv.org/abs/2103.13084) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2112/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2112/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2111/comments | https://api.github.com/repos/huggingface/datasets/issues/2111/events | https://github.com/huggingface/datasets/pull/2111 | 841,082,087 | MDExOlB1bGxSZXF1ZXN0NjAwODY4OTg5 | 2,111 | Compute WER metric iteratively | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 7 | "2021-03-25T16:06:48Z" | "2021-04-06T07:20:43Z" | "2021-04-06T07:20:43Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2111.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2111",
"merged_at": "2021-04-06T07:20:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2111.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2111"
} | Compute WER metric iteratively to avoid MemoryError.
Fix #2078. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2111/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2111/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2110/comments | https://api.github.com/repos/huggingface/datasets/issues/2110/events | https://github.com/huggingface/datasets/pull/2110 | 840,794,995 | MDExOlB1bGxSZXF1ZXN0NjAwNjI1NDQ5 | 2,110 | Fix incorrect assertion in builder.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/2340721?v=4",
"events_url": "https://api.github.com/users/dreamgonfly/events{/privacy}",
"followers_url": "https://api.github.com/users/dreamgonfly/followers",
"following_url": "https://api.github.com/users/dreamgonfly/following{/other_user}",
"gists_url": "https://api.github.com/users/dreamgonfly/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dreamgonfly",
"id": 2340721,
"login": "dreamgonfly",
"node_id": "MDQ6VXNlcjIzNDA3MjE=",
"organizations_url": "https://api.github.com/users/dreamgonfly/orgs",
"received_events_url": "https://api.github.com/users/dreamgonfly/received_events",
"repos_url": "https://api.github.com/users/dreamgonfly/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dreamgonfly/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dreamgonfly/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dreamgonfly"
} | [] | closed | false | null | [] | null | 2 | "2021-03-25T10:39:20Z" | "2021-04-12T13:33:03Z" | "2021-04-12T13:33:03Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2110.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2110",
"merged_at": "2021-04-12T13:33:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2110.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2110"
} | Fix incorrect num_examples comparison assertion in builder.py | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2110/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2110/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2109/comments | https://api.github.com/repos/huggingface/datasets/issues/2109/events | https://github.com/huggingface/datasets/pull/2109 | 840,746,598 | MDExOlB1bGxSZXF1ZXN0NjAwNTg1MzM5 | 2,109 | Add more issue templates and customize issue template chooser | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 2 | "2021-03-25T09:41:53Z" | "2021-04-19T06:20:11Z" | "2021-04-19T06:20:11Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2109.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2109",
"merged_at": "2021-04-19T06:20:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2109.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2109"
} | When opening an issue, it is not evident for the users how to choose a blank issue template. There is a link at the bottom of all the other issue templates (`Don’t see your issue here? Open a blank issue.`), but this is not very visible for users. This is the reason why many users finally chose the `add-dataset` template instead (this is more visible) for issues that indeed are not requesting the addition of a new dataset.
~~With this PR, the default blank issue template would be as visible as the other templates (as the `add-dataset` template), thus making easier for the users to choose it.~~
With this PR:
- more issue templates, besides `add-dataset`, are added: `bug-report` and `feature-request`
- the issue template chooser is customized, so that it now includes a link to `Discussions` for questions | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2109/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2109/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2108/comments | https://api.github.com/repos/huggingface/datasets/issues/2108/events | https://github.com/huggingface/datasets/issues/2108 | 840,181,055 | MDU6SXNzdWU4NDAxODEwNTU= | 2,108 | Is there a way to use a GPU only when training an Index in the process of add_faisis_index? | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
} | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | open | false | null | [] | null | 0 | "2021-03-24T21:32:16Z" | "2021-03-25T06:31:43Z" | null | NONE | null | null | null | Motivation - Some FAISS indexes like IVF consist of the training step that clusters the dataset into a given number of indexes. It would be nice if we can use a GPU to do the training step and covert the index back to CPU as mention in [this faiss example](https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2108/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2108/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2107/comments | https://api.github.com/repos/huggingface/datasets/issues/2107/events | https://github.com/huggingface/datasets/pull/2107 | 839,495,825 | MDExOlB1bGxSZXF1ZXN0NTk5NTAxODE5 | 2,107 | Metadata validation | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
}
] | null | 5 | "2021-03-24T08:52:41Z" | "2021-04-26T08:27:14Z" | "2021-04-26T08:27:13Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2107.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2107",
"merged_at": "2021-04-26T08:27:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2107.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2107"
} | - `pydantic` metadata schema with dedicated validators against our taxonomy
- ci script to validate new changes against this schema and start a vertuous loop
- soft validation on tasks ids since we expect the taxonomy to undergo some changes in the near future
for reference with the current validation we have ~365~ 378 datasets with invalid metadata! full error report [_here_.](https://gist.github.com/theo-m/61b3c0c47fc6121d08d3174bd4c2a26b) | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2107/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2107/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2106/comments | https://api.github.com/repos/huggingface/datasets/issues/2106/events | https://github.com/huggingface/datasets/issues/2106 | 839,084,264 | MDU6SXNzdWU4MzkwODQyNjQ= | 2,106 | WMT19 Dataset for Kazakh-English is not formatted correctly | {
"avatar_url": "https://avatars.githubusercontent.com/u/22580542?v=4",
"events_url": "https://api.github.com/users/trina731/events{/privacy}",
"followers_url": "https://api.github.com/users/trina731/followers",
"following_url": "https://api.github.com/users/trina731/following{/other_user}",
"gists_url": "https://api.github.com/users/trina731/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/trina731",
"id": 22580542,
"login": "trina731",
"node_id": "MDQ6VXNlcjIyNTgwNTQy",
"organizations_url": "https://api.github.com/users/trina731/orgs",
"received_events_url": "https://api.github.com/users/trina731/received_events",
"repos_url": "https://api.github.com/users/trina731/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/trina731/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trina731/subscriptions",
"type": "User",
"url": "https://api.github.com/users/trina731"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | open | false | null | [] | null | 1 | "2021-03-23T20:14:47Z" | "2021-03-25T21:36:20Z" | null | NONE | null | null | null | In addition to the bug of languages being switched from Issue @415, there are incorrect translations in the dataset because the English-Kazakh translations have a one off formatting error.
The News Commentary v14 parallel data set for kk-en from http://www.statmt.org/wmt19/translation-task.html has a bug here:
> Line 94. The Swiss National Bank, for its part, has been battling with the deflationary effects of the franc’s dramatic appreciation over the past few years. Швейцарияның Ұлттық банкі өз тарапынан, соңғы бірнеше жыл ішінде франк құнының қатты өсуінің дефляциялық әсерімен күресіп келеді.
>
> Line 95. Дефляциялық күштер 2008 жылы терең және ұзаққа созылған жаһандық дағдарысқа байланысты орын алған ірі экономикалық және қаржылық орын алмасулардың арқасында босатылды. Жеке қарыз қаражаты үлесінің қысқаруы орталық банктің рефляцияға жұмсалған күш-жігеріне тұрақты соққан қарсы желдей болды.
>
> Line 96. The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate. 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды.
As you can see, line 95 has only the Kazakh translation which should be part of line 96. This causes all of the following English-Kazakh translation pairs to be one off rendering ALL of those translations incorrect. This issue was not fixed when the dataset was imported to Huggingface. By running this code
```
import datasets
from datasets import load_dataset
dataset = load_dataset('wmt19', 'kk-en')
for key in dataset['train']['translation']:
if 'The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008.' in key['kk']:
print(key['en'])
print(key['kk'])
break
```
we get:
> 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды.
> The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate.
which shows that the issue still persists in the Huggingface dataset. The Kazakh sentence matches up to the next English sentence in the dataset instead of the current one.
Please let me know if there's you have any ideas to fix this one-off error from the dataset or if this can be fixed by Huggingface. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2106/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2106/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2105 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2105/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2105/comments | https://api.github.com/repos/huggingface/datasets/issues/2105/events | https://github.com/huggingface/datasets/issues/2105 | 839,059,226 | MDU6SXNzdWU4MzkwNTkyMjY= | 2,105 | Request to remove S2ORC dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/13603748?v=4",
"events_url": "https://api.github.com/users/kyleclo/events{/privacy}",
"followers_url": "https://api.github.com/users/kyleclo/followers",
"following_url": "https://api.github.com/users/kyleclo/following{/other_user}",
"gists_url": "https://api.github.com/users/kyleclo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kyleclo",
"id": 13603748,
"login": "kyleclo",
"node_id": "MDQ6VXNlcjEzNjAzNzQ4",
"organizations_url": "https://api.github.com/users/kyleclo/orgs",
"received_events_url": "https://api.github.com/users/kyleclo/received_events",
"repos_url": "https://api.github.com/users/kyleclo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kyleclo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyleclo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kyleclo"
} | [] | open | false | null | [] | null | 3 | "2021-03-23T19:43:06Z" | "2021-08-04T19:18:02Z" | null | NONE | null | null | null | Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2105/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2105/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2104/comments | https://api.github.com/repos/huggingface/datasets/issues/2104/events | https://github.com/huggingface/datasets/issues/2104 | 839,027,834 | MDU6SXNzdWU4MzkwMjc4MzQ= | 2,104 | Trouble loading wiki_movies | {
"avatar_url": "https://avatars.githubusercontent.com/u/35391599?v=4",
"events_url": "https://api.github.com/users/adityaarunsinghal/events{/privacy}",
"followers_url": "https://api.github.com/users/adityaarunsinghal/followers",
"following_url": "https://api.github.com/users/adityaarunsinghal/following{/other_user}",
"gists_url": "https://api.github.com/users/adityaarunsinghal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adityaarunsinghal",
"id": 35391599,
"login": "adityaarunsinghal",
"node_id": "MDQ6VXNlcjM1MzkxNTk5",
"organizations_url": "https://api.github.com/users/adityaarunsinghal/orgs",
"received_events_url": "https://api.github.com/users/adityaarunsinghal/received_events",
"repos_url": "https://api.github.com/users/adityaarunsinghal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adityaarunsinghal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adityaarunsinghal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adityaarunsinghal"
} | [] | closed | false | null | [] | null | 2 | "2021-03-23T18:59:54Z" | "2022-03-30T08:22:58Z" | "2022-03-30T08:22:58Z" | NONE | null | null | null | Hello,
I am trying to load_dataset("wiki_movies") and it gives me this error -
`FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/wiki_movies/wiki_movies.py`
Trying to do `python run_mlm.py \
--model_name_or_path roberta-base \
--dataset_name wiki_movies \` also gives the same error.
Is this something on my end? From what I can tell, this dataset was re-added by @lhoestq a few months ago.
Thank you! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2104/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2104/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2103/comments | https://api.github.com/repos/huggingface/datasets/issues/2103/events | https://github.com/huggingface/datasets/issues/2103 | 838,946,916 | MDU6SXNzdWU4Mzg5NDY5MTY= | 2,103 | citation, homepage, and license fields of `dataset_info.json` are duplicated many times | {
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"events_url": "https://api.github.com/users/samsontmr/events{/privacy}",
"followers_url": "https://api.github.com/users/samsontmr/followers",
"following_url": "https://api.github.com/users/samsontmr/following{/other_user}",
"gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/samsontmr",
"id": 15007950,
"login": "samsontmr",
"node_id": "MDQ6VXNlcjE1MDA3OTUw",
"organizations_url": "https://api.github.com/users/samsontmr/orgs",
"received_events_url": "https://api.github.com/users/samsontmr/received_events",
"repos_url": "https://api.github.com/users/samsontmr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/samsontmr"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | 1 | "2021-03-23T17:18:09Z" | "2021-04-06T14:39:59Z" | "2021-04-06T14:39:59Z" | NONE | null | null | null | This happens after a `map` operation when `num_proc` is set to `>1`. I tested this by cleaning up the json before running the `map` op on the dataset so it's unlikely it's coming from an earlier concatenation.
Example result:
```
"citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n
```
@lhoestq and I believe this is happening due to the fields being concatenated `num_proc` times. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2103/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2103/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2102/comments | https://api.github.com/repos/huggingface/datasets/issues/2102/events | https://github.com/huggingface/datasets/pull/2102 | 838,794,090 | MDExOlB1bGxSZXF1ZXN0NTk4OTEyNzUw | 2,102 | Move Dataset.to_csv to csv module | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior",
"id": 2851292821,
"name": "refactoring",
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring"
}
] | closed | false | null | [] | null | 0 | "2021-03-23T14:35:46Z" | "2021-03-24T14:07:35Z" | "2021-03-24T14:07:34Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2102.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2102",
"merged_at": "2021-03-24T14:07:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2102.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2102"
} | Move the implementation of `Dataset.to_csv` to module `datasets.io.csv`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2102/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2102/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2101/comments | https://api.github.com/repos/huggingface/datasets/issues/2101/events | https://github.com/huggingface/datasets/pull/2101 | 838,586,184 | MDExOlB1bGxSZXF1ZXN0NTk4NzQzMDM4 | 2,101 | MIAM dataset - new citation details | {
"avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4",
"events_url": "https://api.github.com/users/eusip/events{/privacy}",
"followers_url": "https://api.github.com/users/eusip/followers",
"following_url": "https://api.github.com/users/eusip/following{/other_user}",
"gists_url": "https://api.github.com/users/eusip/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eusip",
"id": 1551356,
"login": "eusip",
"node_id": "MDQ6VXNlcjE1NTEzNTY=",
"organizations_url": "https://api.github.com/users/eusip/orgs",
"received_events_url": "https://api.github.com/users/eusip/received_events",
"repos_url": "https://api.github.com/users/eusip/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eusip/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eusip"
} | [] | closed | false | null | [] | null | 2 | "2021-03-23T10:41:23Z" | "2021-03-23T18:08:10Z" | "2021-03-23T18:08:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2101.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2101",
"merged_at": "2021-03-23T18:08:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2101.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2101"
} | Hi @lhoestq, I have updated the citations to reference an OpenReview preprint. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2101/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2101/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2100/comments | https://api.github.com/repos/huggingface/datasets/issues/2100/events | https://github.com/huggingface/datasets/pull/2100 | 838,574,631 | MDExOlB1bGxSZXF1ZXN0NTk4NzMzOTM0 | 2,100 | Fix deprecated warning message and docstring | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | 3 | "2021-03-23T10:27:52Z" | "2021-03-24T08:19:41Z" | "2021-03-23T18:03:49Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2100.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2100",
"merged_at": "2021-03-23T18:03:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2100.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2100"
} | Fix deprecated warnings:
- Use deprecated Sphinx directive in docstring
- Fix format of deprecated message
- Raise FutureWarning | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2100/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2100/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2099/comments | https://api.github.com/repos/huggingface/datasets/issues/2099/events | https://github.com/huggingface/datasets/issues/2099 | 838,523,819 | MDU6SXNzdWU4Mzg1MjM4MTk= | 2,099 | load_from_disk takes a long time to load local dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"events_url": "https://api.github.com/users/samsontmr/events{/privacy}",
"followers_url": "https://api.github.com/users/samsontmr/followers",
"following_url": "https://api.github.com/users/samsontmr/following{/other_user}",
"gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/samsontmr",
"id": 15007950,
"login": "samsontmr",
"node_id": "MDQ6VXNlcjE1MDA3OTUw",
"organizations_url": "https://api.github.com/users/samsontmr/orgs",
"received_events_url": "https://api.github.com/users/samsontmr/received_events",
"repos_url": "https://api.github.com/users/samsontmr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/samsontmr"
} | [] | closed | false | null | [] | null | 8 | "2021-03-23T09:28:37Z" | "2021-03-23T17:12:16Z" | "2021-03-23T17:12:16Z" | NONE | null | null | null | I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helping (the total size seems to be smaller though).
Does anyone know what could be the issue? Or does the casting of that column to `int8` need to happen in the function that writes the arrow table instead of in the `map` where I create the list of integers?
Tagging @lhoestq since you seem to be working on these issues and PRs :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2099/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2099/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2098 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2098/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2098/comments | https://api.github.com/repos/huggingface/datasets/issues/2098/events | https://github.com/huggingface/datasets/issues/2098 | 838,447,959 | MDU6SXNzdWU4Mzg0NDc5NTk= | 2,098 | SQuAD version | {
"avatar_url": "https://avatars.githubusercontent.com/u/39556019?v=4",
"events_url": "https://api.github.com/users/h-peng17/events{/privacy}",
"followers_url": "https://api.github.com/users/h-peng17/followers",
"following_url": "https://api.github.com/users/h-peng17/following{/other_user}",
"gists_url": "https://api.github.com/users/h-peng17/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/h-peng17",
"id": 39556019,
"login": "h-peng17",
"node_id": "MDQ6VXNlcjM5NTU2MDE5",
"organizations_url": "https://api.github.com/users/h-peng17/orgs",
"received_events_url": "https://api.github.com/users/h-peng17/received_events",
"repos_url": "https://api.github.com/users/h-peng17/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/h-peng17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h-peng17/subscriptions",
"type": "User",
"url": "https://api.github.com/users/h-peng17"
} | [] | closed | false | null | [] | null | 2 | "2021-03-23T07:47:54Z" | "2021-03-26T09:48:54Z" | "2021-03-26T09:48:54Z" | NONE | null | null | null | Hi~
I want train on squad dataset. What's the version of the squad? Is it 1.1 or 1.0? I'm new in QA, I don't find some descriptions about it. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2098/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2098/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2097 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2097/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2097/comments | https://api.github.com/repos/huggingface/datasets/issues/2097/events | https://github.com/huggingface/datasets/pull/2097 | 838,105,289 | MDExOlB1bGxSZXF1ZXN0NTk4MzM4MTA3 | 2,097 | fixes issue #1110 by descending further if `obj["_type"]` is a dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4",
"events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}",
"followers_url": "https://api.github.com/users/dcfidalgo/followers",
"following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}",
"gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dcfidalgo",
"id": 15979778,
"login": "dcfidalgo",
"node_id": "MDQ6VXNlcjE1OTc5Nzc4",
"organizations_url": "https://api.github.com/users/dcfidalgo/orgs",
"received_events_url": "https://api.github.com/users/dcfidalgo/received_events",
"repos_url": "https://api.github.com/users/dcfidalgo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dcfidalgo"
} | [] | closed | false | null | [] | null | 0 | "2021-03-22T21:00:55Z" | "2021-03-22T21:01:11Z" | "2021-03-22T21:01:11Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2097.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2097",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2097.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2097"
} | Check metrics | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2097/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2097/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2096/comments | https://api.github.com/repos/huggingface/datasets/issues/2096/events | https://github.com/huggingface/datasets/issues/2096 | 838,038,379 | MDU6SXNzdWU4MzgwMzgzNzk= | 2,096 | CoNLL 2003 dataset not including German | {
"avatar_url": "https://avatars.githubusercontent.com/u/8406802?v=4",
"events_url": "https://api.github.com/users/rxian/events{/privacy}",
"followers_url": "https://api.github.com/users/rxian/followers",
"following_url": "https://api.github.com/users/rxian/following{/other_user}",
"gists_url": "https://api.github.com/users/rxian/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rxian",
"id": 8406802,
"login": "rxian",
"node_id": "MDQ6VXNlcjg0MDY4MDI=",
"organizations_url": "https://api.github.com/users/rxian/orgs",
"received_events_url": "https://api.github.com/users/rxian/received_events",
"repos_url": "https://api.github.com/users/rxian/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rxian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rxian/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rxian"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 2 | "2021-03-22T19:23:56Z" | "2023-07-25T16:49:07Z" | "2023-07-25T16:49:07Z" | NONE | null | null | null | Hello, thanks for all the work on developing and maintaining this amazing platform, which I am enjoying working with!
I was wondering if there is a reason why the German CoNLL 2003 dataset is not included in the [repository](https://github.com/huggingface/datasets/tree/master/datasets/conll2003), since a copy of it could be found in some places on the internet such as GitHub? I could help adding the German data to the hub, unless there are some copyright issues that I am unaware of...
This is considering that many work use the union of CoNLL 2002 and 2003 datasets for comparing cross-lingual NER transfer performance in `en`, `de`, `es`, and `nl`. E.g., [XLM-R](https://www.aclweb.org/anthology/2020.acl-main.747.pdf).
## Adding a Dataset
- **Name:** CoNLL 2003 German
- **Paper:** https://www.aclweb.org/anthology/W03-0419/
- **Data:** https://github.com/huggingface/datasets/tree/master/datasets/conll2003
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2096/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2096/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2093/comments | https://api.github.com/repos/huggingface/datasets/issues/2093/events | https://github.com/huggingface/datasets/pull/2093 | 837,209,211 | MDExOlB1bGxSZXF1ZXN0NTk3NTgyNjUx | 2,093 | Fix: Allows a feature to be named "_type" | {
"avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4",
"events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}",
"followers_url": "https://api.github.com/users/dcfidalgo/followers",
"following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}",
"gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dcfidalgo",
"id": 15979778,
"login": "dcfidalgo",
"node_id": "MDQ6VXNlcjE1OTc5Nzc4",
"organizations_url": "https://api.github.com/users/dcfidalgo/orgs",
"received_events_url": "https://api.github.com/users/dcfidalgo/received_events",
"repos_url": "https://api.github.com/users/dcfidalgo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dcfidalgo"
} | [] | closed | false | null | [] | null | 4 | "2021-03-21T23:21:57Z" | "2021-03-25T14:35:54Z" | "2021-03-25T14:35:54Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2093.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2093",
"merged_at": "2021-03-25T14:35:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2093.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2093"
} | This PR tries to fix issue #1110. Sorry for taking so long to come back to this.
It's a simple fix, but i am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2093/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2093/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2092/comments | https://api.github.com/repos/huggingface/datasets/issues/2092/events | https://github.com/huggingface/datasets/issues/2092 | 836,984,043 | MDU6SXNzdWU4MzY5ODQwNDM= | 2,092 | How to disable making arrow tables in load_dataset ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/48825663?v=4",
"events_url": "https://api.github.com/users/Jeevesh8/events{/privacy}",
"followers_url": "https://api.github.com/users/Jeevesh8/followers",
"following_url": "https://api.github.com/users/Jeevesh8/following{/other_user}",
"gists_url": "https://api.github.com/users/Jeevesh8/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jeevesh8",
"id": 48825663,
"login": "Jeevesh8",
"node_id": "MDQ6VXNlcjQ4ODI1NjYz",
"organizations_url": "https://api.github.com/users/Jeevesh8/orgs",
"received_events_url": "https://api.github.com/users/Jeevesh8/received_events",
"repos_url": "https://api.github.com/users/Jeevesh8/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jeevesh8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jeevesh8/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jeevesh8"
} | [] | closed | false | null | [] | null | 7 | "2021-03-21T04:50:07Z" | "2022-06-01T16:49:52Z" | "2022-06-01T16:49:52Z" | NONE | null | null | null | Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2092/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2092/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2091 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2091/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2091/comments | https://api.github.com/repos/huggingface/datasets/issues/2091/events | https://github.com/huggingface/datasets/pull/2091 | 836,831,403 | MDExOlB1bGxSZXF1ZXN0NTk3Mjk4ODI3 | 2,091 | Fix copy snippet in docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | 0 | "2021-03-20T15:08:22Z" | "2021-03-24T08:20:50Z" | "2021-03-23T17:18:31Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2091.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2091",
"merged_at": "2021-03-23T17:18:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2091.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2091"
} | With this change the lines starting with `...` in the code blocks can be properly copied to clipboard. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2091/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2091/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2090/comments | https://api.github.com/repos/huggingface/datasets/issues/2090/events | https://github.com/huggingface/datasets/pull/2090 | 836,807,498 | MDExOlB1bGxSZXF1ZXN0NTk3MjgwNTEy | 2,090 | Add machine translated multilingual STS benchmark dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PhilipMay",
"id": 229382,
"login": "PhilipMay",
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PhilipMay"
} | [] | closed | false | null | [] | null | 6 | "2021-03-20T13:28:07Z" | "2021-03-29T13:24:42Z" | "2021-03-29T13:00:15Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2090.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2090",
"merged_at": "2021-03-29T13:00:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2090.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2090"
} | also see here https://github.com/PhilipMay/stsb-multi-mt | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2090/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2090/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2089/comments | https://api.github.com/repos/huggingface/datasets/issues/2089/events | https://github.com/huggingface/datasets/issues/2089 | 836,788,019 | MDU6SXNzdWU4MzY3ODgwMTk= | 2,089 | Add documentaton for dataset README.md files | {
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PhilipMay",
"id": 229382,
"login": "PhilipMay",
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PhilipMay"
} | [] | closed | false | null | [] | null | 8 | "2021-03-20T11:44:38Z" | "2023-07-25T16:45:38Z" | "2023-07-25T16:45:37Z" | CONTRIBUTOR | null | null | null | Hi,
the dataset README files have special headers.
Somehow a documenation of the allowed values and tags is missing.
Could you add that?
Just to give some concrete questions that should be answered imo:
- which values can be passted to multilinguality?
- what should be passed to language_creators?
- which values should licenses have? What do I say when it is a custom license? Should I add a link?
- how should I choose size_categories ? What are valid ranges?
- what are valid task_categories?
Thanks
Philip | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2089/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2089/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2088/comments | https://api.github.com/repos/huggingface/datasets/issues/2088/events | https://github.com/huggingface/datasets/pull/2088 | 836,763,733 | MDExOlB1bGxSZXF1ZXN0NTk3MjQ4Mzk1 | 2,088 | change bibtex template to author instead of authors | {
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PhilipMay",
"id": 229382,
"login": "PhilipMay",
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PhilipMay"
} | [] | closed | false | null | [] | null | 1 | "2021-03-20T09:23:44Z" | "2021-03-23T15:40:12Z" | "2021-03-23T15:40:12Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2088.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2088",
"merged_at": "2021-03-23T15:40:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2088.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2088"
} | Hi,
IMO when using BibTex Author should be used instead of Authors.
See here: http://www.bibtex.org/Using/de/
Thanks
Philip | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2088/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2088/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2087/comments | https://api.github.com/repos/huggingface/datasets/issues/2087/events | https://github.com/huggingface/datasets/pull/2087 | 836,587,392 | MDExOlB1bGxSZXF1ZXN0NTk3MDg4NTk2 | 2,087 | Update metadata if dataset features are modified | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 4 | "2021-03-20T02:05:23Z" | "2021-04-09T09:25:33Z" | "2021-04-09T09:25:33Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2087.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2087",
"merged_at": "2021-04-09T09:25:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2087.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2087"
} | This PR adds a decorator that updates the dataset metadata if a previously executed transform modifies its features.
Fixes #2083
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2087/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2087/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2086/comments | https://api.github.com/repos/huggingface/datasets/issues/2086/events | https://github.com/huggingface/datasets/pull/2086 | 836,249,587 | MDExOlB1bGxSZXF1ZXN0NTk2Nzg0Mjcz | 2,086 | change user permissions to -rw-r--r-- | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | 1 | "2021-03-19T18:14:56Z" | "2021-03-24T13:59:04Z" | "2021-03-24T13:59:04Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2086.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2086",
"merged_at": "2021-03-24T13:59:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2086.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2086"
} | Fix for #2065 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2086/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2086/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2085/comments | https://api.github.com/repos/huggingface/datasets/issues/2085/events | https://github.com/huggingface/datasets/pull/2085 | 835,870,994 | MDExOlB1bGxSZXF1ZXN0NTk2NDYyOTc2 | 2,085 | Fix max_wait_time in requests | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-03-19T11:22:26Z" | "2021-03-23T15:36:38Z" | "2021-03-23T15:36:37Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2085.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2085",
"merged_at": "2021-03-23T15:36:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2085.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2085"
} | it was handled as a min time, not max cc @SBrandeis | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2085/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2085/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2084/comments | https://api.github.com/repos/huggingface/datasets/issues/2084/events | https://github.com/huggingface/datasets/issues/2084 | 835,750,671 | MDU6SXNzdWU4MzU3NTA2NzE= | 2,084 | CUAD - Contract Understanding Atticus Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 1 | "2021-03-19T09:27:43Z" | "2021-04-16T08:50:44Z" | "2021-04-16T08:50:44Z" | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** CUAD - Contract Understanding Atticus Dataset
- **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
- **Paper:** https://arxiv.org/abs/2103.06268
- **Data:** https://github.com/TheAtticusProject/cuad/
- **Motivation:** good domain specific datasets are valuable
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2084/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2084/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2083/comments | https://api.github.com/repos/huggingface/datasets/issues/2083/events | https://github.com/huggingface/datasets/issues/2083 | 835,695,425 | MDU6SXNzdWU4MzU2OTU0MjU= | 2,083 | `concatenate_datasets` throws error when changing the order of datasets to concatenate | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 1 | "2021-03-19T08:29:48Z" | "2021-04-09T09:25:33Z" | "2021-04-09T09:25:33Z" | MEMBER | null | null | null | Hey,
I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets
and noticed that when the order in which the datasets are concatenated changes an error is thrown where it should not IMO.
Here is a google colab to reproduce the error: https://colab.research.google.com/drive/17VTFU4KQ735-waWZJjeOHS6yDTfV5ekK?usp=sharing | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2083/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2083/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2082/comments | https://api.github.com/repos/huggingface/datasets/issues/2082/events | https://github.com/huggingface/datasets/pull/2082 | 835,401,555 | MDExOlB1bGxSZXF1ZXN0NTk2MDY1NTM0 | 2,082 | Updated card using information from data statement and datasheet | {
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mcmillanmajora",
"id": 26722925,
"login": "mcmillanmajora",
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mcmillanmajora"
} | [] | closed | false | null | [] | null | 0 | "2021-03-19T00:39:38Z" | "2021-03-19T14:29:09Z" | "2021-03-19T14:29:09Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2082.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2082",
"merged_at": "2021-03-19T14:29:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2082.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2082"
} | I updated and clarified the REFreSD [data card](https://github.com/mcmillanmajora/datasets/blob/refresd_card/datasets/refresd/README.md) with information from the Eleftheria's [website](https://elbria.github.io/post/refresd/). I added brief descriptions where the initial card referred to the paper, and I also recreated some of the tables in the paper to show relevant dataset statistics.
I'll email Eleftheria to see if she has any comments on the card. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2082/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2082/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2081/comments | https://api.github.com/repos/huggingface/datasets/issues/2081/events | https://github.com/huggingface/datasets/pull/2081 | 835,112,968 | MDExOlB1bGxSZXF1ZXN0NTk1ODE3OTM4 | 2,081 | Fix docstrings issues | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | 0 | "2021-03-18T18:11:01Z" | "2021-04-07T14:37:43Z" | "2021-04-07T14:37:43Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2081.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2081",
"merged_at": "2021-04-07T14:37:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2081.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2081"
} | Fix docstring issues. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2081/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2081/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2080/comments | https://api.github.com/repos/huggingface/datasets/issues/2080/events | https://github.com/huggingface/datasets/issues/2080 | 835,023,000 | MDU6SXNzdWU4MzUwMjMwMDA= | 2,080 | Multidimensional arrays in a Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/3142085?v=4",
"events_url": "https://api.github.com/users/vermouthmjl/events{/privacy}",
"followers_url": "https://api.github.com/users/vermouthmjl/followers",
"following_url": "https://api.github.com/users/vermouthmjl/following{/other_user}",
"gists_url": "https://api.github.com/users/vermouthmjl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vermouthmjl",
"id": 3142085,
"login": "vermouthmjl",
"node_id": "MDQ6VXNlcjMxNDIwODU=",
"organizations_url": "https://api.github.com/users/vermouthmjl/orgs",
"received_events_url": "https://api.github.com/users/vermouthmjl/received_events",
"repos_url": "https://api.github.com/users/vermouthmjl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vermouthmjl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vermouthmjl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vermouthmjl"
} | [] | closed | false | null | [] | null | 2 | "2021-03-18T16:29:14Z" | "2021-03-25T12:46:53Z" | "2021-03-25T12:46:53Z" | NONE | null | null | null | Hi,
I'm trying to put together a `datasets.Dataset` to be used with LayoutLM which is available in `transformers`. This model requires as input the bounding boxes of each of the token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row.
The following code results in conversion error in pyarrow (`pyarrow.lib.ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column bbox with type object')`)
```
from datasets import Dataset
import pandas as pd
import numpy as np
dataset = pd.DataFrame({
'bbox': [
np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),
np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),
np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),
np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])
],
'input_ids': [1, 2, 3, 4]
})
dataset = Dataset.from_pandas(dataset)
```
Since I wanted to use pytorch for the downstream training task, I also tried a few ways to directly put in a column of 2-D pytorch tensor in a formatted dataset, but I can only have a list of 1-D tensors, or a list of arrays, or a list of lists.
```
import torch
from datasets import Dataset
import pandas as pd
dataset = pd.DataFrame({
'bbox': [
[[1,2,3,4],[1,2,3,4],[1,2,3,4]],
[[1,2,3,4],[1,2,3,4],[1,2,3,4]],
[[1,2,3,4],[1,2,3,4],[1,2,3,4]],
[[1,2,3,4],[1,2,3,4],[1,2,3,4]]
],
'input_ids': [1, 2, 3, 4]
})
dataset = Dataset.from_pandas(dataset)
def test(examples):
return {'bbbox': torch.Tensor(examples['bbox'])}
dataset = dataset.map(test)
print(dataset[0]['bbox'])
print(dataset[0]['bbbox'])
dataset.set_format(type='torch', columns=['input_ids', 'bbox'], output_all_columns=True)
print(dataset[0]['bbox'])
print(dataset[0]['bbbox'])
def test2(examples):
return {'bbbox': torch.stack(examples['bbox'])}
dataset = dataset.map(test2)
print(dataset[0]['bbox'])
print(dataset[0]['bbbox'])
```
Is is possible to support n-D arrays/tensors in datasets?
It seems that it can also be useful for this [feature request](https://github.com/huggingface/datasets/issues/263). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2080/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2080/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2079/comments | https://api.github.com/repos/huggingface/datasets/issues/2079/events | https://github.com/huggingface/datasets/pull/2079 | 834,920,493 | MDExOlB1bGxSZXF1ZXN0NTk1NjU2MDQ5 | 2,079 | Refactorize Metric.compute signature to force keyword arguments only | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 0 | "2021-03-18T15:05:50Z" | "2021-03-23T15:31:44Z" | "2021-03-23T15:31:44Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2079.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2079",
"merged_at": "2021-03-23T15:31:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2079.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2079"
} | Minor refactoring of Metric.compute signature to force the use of keyword arguments, by using the single star syntax. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2079/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2079/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2078/comments | https://api.github.com/repos/huggingface/datasets/issues/2078/events | https://github.com/huggingface/datasets/issues/2078 | 834,694,819 | MDU6SXNzdWU4MzQ2OTQ4MTk= | 2,078 | MemoryError when computing WER metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/5707233?v=4",
"events_url": "https://api.github.com/users/diego-fustes/events{/privacy}",
"followers_url": "https://api.github.com/users/diego-fustes/followers",
"following_url": "https://api.github.com/users/diego-fustes/following{/other_user}",
"gists_url": "https://api.github.com/users/diego-fustes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/diego-fustes",
"id": 5707233,
"login": "diego-fustes",
"node_id": "MDQ6VXNlcjU3MDcyMzM=",
"organizations_url": "https://api.github.com/users/diego-fustes/orgs",
"received_events_url": "https://api.github.com/users/diego-fustes/received_events",
"repos_url": "https://api.github.com/users/diego-fustes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/diego-fustes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diego-fustes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/diego-fustes"
} | [
{
"color": "25b21e",
"default": false,
"description": "A bug in a metric script",
"id": 2067393914,
"name": "metric bug",
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | 11 | "2021-03-18T11:30:05Z" | "2021-05-01T08:31:49Z" | "2021-04-06T07:20:43Z" | NONE | null | null | null | Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation:
```
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
However, I receive the following exception:
`Traceback (most recent call last):
File "/home/diego/IpGlobal/wav2vec/test_wav2vec.py", line 51, in <module>
print(wer.compute(predictions=result["predicted"], references=result["target"]))
File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/datasets/metric.py", line 403, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/diego/.cache/huggingface/modules/datasets_modules/metrics/wer/73b2d32b723b7fb8f204d785c00980ae4d937f12a65466f8fdf78706e2951281/wer.py", line 94, in _compute
return wer(references, predictions)
File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 81, in wer
truth, hypothesis, truth_transform, hypothesis_transform, **kwargs
File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 192, in compute_measures
H, S, D, I = _get_operation_counts(truth, hypothesis)
File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 273, in _get_operation_counts
editops = Levenshtein.editops(source_string, destination_string)
MemoryError`
My system has more than 10GB of available RAM. Looking at the code, I think that it could be related to the way jiwer does the calculation, as it is pasting all the sentences in a single string before calling Levenshtein editops function.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2078/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2078/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2077/comments | https://api.github.com/repos/huggingface/datasets/issues/2077/events | https://github.com/huggingface/datasets/pull/2077 | 834,649,536 | MDExOlB1bGxSZXF1ZXN0NTk1NDI0MTYw | 2,077 | Bump huggingface_hub version | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
} | [] | closed | false | null | [] | null | 1 | "2021-03-18T10:54:34Z" | "2021-03-18T11:33:26Z" | "2021-03-18T11:33:26Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2077.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2077",
"merged_at": "2021-03-18T11:33:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2077.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2077"
} | `0.0.2 => 0.0.6` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2077/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2077/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2076/comments | https://api.github.com/repos/huggingface/datasets/issues/2076/events | https://github.com/huggingface/datasets/issues/2076 | 834,445,296 | MDU6SXNzdWU4MzQ0NDUyOTY= | 2,076 | Issue: Dataset download error | {
"avatar_url": "https://avatars.githubusercontent.com/u/20436061?v=4",
"events_url": "https://api.github.com/users/XuhuiZhou/events{/privacy}",
"followers_url": "https://api.github.com/users/XuhuiZhou/followers",
"following_url": "https://api.github.com/users/XuhuiZhou/following{/other_user}",
"gists_url": "https://api.github.com/users/XuhuiZhou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/XuhuiZhou",
"id": 20436061,
"login": "XuhuiZhou",
"node_id": "MDQ6VXNlcjIwNDM2MDYx",
"organizations_url": "https://api.github.com/users/XuhuiZhou/orgs",
"received_events_url": "https://api.github.com/users/XuhuiZhou/received_events",
"repos_url": "https://api.github.com/users/XuhuiZhou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/XuhuiZhou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XuhuiZhou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/XuhuiZhou"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | open | false | null | [] | null | 7 | "2021-03-18T06:36:06Z" | "2021-03-22T11:52:31Z" | null | NONE | null | null | null | The download link in `iwslt2017.py` file does not seem to work anymore.
For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz`
Would be nice if we could modify it script and use the new downloadable link? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2076/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2076/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2075/comments | https://api.github.com/repos/huggingface/datasets/issues/2075/events | https://github.com/huggingface/datasets/issues/2075 | 834,301,246 | MDU6SXNzdWU4MzQzMDEyNDY= | 2,075 | ConnectionError: Couldn't reach common_voice.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/6188893?v=4",
"events_url": "https://api.github.com/users/LifaSun/events{/privacy}",
"followers_url": "https://api.github.com/users/LifaSun/followers",
"following_url": "https://api.github.com/users/LifaSun/following{/other_user}",
"gists_url": "https://api.github.com/users/LifaSun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LifaSun",
"id": 6188893,
"login": "LifaSun",
"node_id": "MDQ6VXNlcjYxODg4OTM=",
"organizations_url": "https://api.github.com/users/LifaSun/orgs",
"received_events_url": "https://api.github.com/users/LifaSun/received_events",
"repos_url": "https://api.github.com/users/LifaSun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LifaSun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LifaSun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LifaSun"
} | [] | closed | false | null | [] | null | 2 | "2021-03-18T01:19:06Z" | "2021-03-20T10:29:41Z" | "2021-03-20T10:29:41Z" | NONE | null | null | null | When I run:
from datasets import load_dataset, load_metric
common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation")
common_voice_test = load_dataset("common_voice", "zh-CN", split="test")
Got:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/common_voice/common_voice.py
Version:
1.4.1
Thanks! @lhoestq @LysandreJik @thomwolf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2075/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2075/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2074/comments | https://api.github.com/repos/huggingface/datasets/issues/2074/events | https://github.com/huggingface/datasets/pull/2074 | 834,268,463 | MDExOlB1bGxSZXF1ZXN0NTk1MTIzMjYw | 2,074 | Fix size categories in YAML Tags | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | 9 | "2021-03-18T00:02:36Z" | "2021-03-23T17:11:10Z" | "2021-03-23T17:11:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2074.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2074",
"merged_at": "2021-03-23T17:11:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2074.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2074"
} | This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for dataset in sorted(os.listdir('./datasets/')):
if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:
infos = {}
stats = {}
st = ''
with open(f'datasets/{dataset}/README.md') as f:
d = f.read()
start_dash = d.find('---') + 3
end_dash = d[start_dash:].find('---') + 3
rest_text = d[end_dash + 3:]
try:
full_yaml = OmegaConf.create(d[start_dash:end_dash])
readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)
except Exception as e:
print(e)
continue
try:
with open(f'datasets/{dataset}/dataset_infos.json') as f:
data = json.load(f)
except Exception as e:
print(e)
continue # Skip those without infos.
done_set = set([])
num_keys = len(data.keys())
for keys in data:
# dataset = load_dataset('opus100', f'{dirs}')
total = 0
for split in data[keys]['splits']:
total = total + data[keys]['splits'][split]['num_examples']
if total < 1000:
st += "- n<1K" + '\n'
infos[keys] = ["n<1K"]
elif total >= 1000 and total < 10000:
infos[keys] = ["1K<n<10K"]
elif total >= 10000 and total < 100000:
infos[keys] = ["10K<n<100K"]
elif total >= 100000 and total < 1000000:
infos[keys] = ["100K<n<1M"]
elif total >= 1000000 and total < 10000000:
infos[keys] = ["1M<n<10M"]
elif total >= 10000000 and total < 100000000:
infos[keys] = ["10M<n<100M"]
elif total >= 100000000 and total < 1000000000:
infos[keys] = ["100M<n<1B"]
elif total >= 1000000000 and total < 10000000000:
infos[keys] = ["1B<n<10B"]
elif total >= 10000000000 and total < 100000000000:
infos[keys] = ["10B<n<100B"]
elif total >= 100000000000 and total < 1000000000000:
infos[keys] = ["100B<n<1T"]
else:
infos[keys] = ["n>1T"]
done_set = done_set.union(infos[keys])
if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):
print('-' * 30)
print(done_set)
print(f"Changing Full YAML for {dataset}")
print(OmegaConf.to_yaml(full_yaml))
if len(done_set) == 1:
full_yaml['size_categories'] = list(done_set)
else:
full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])
full_yaml_string = OmegaConf.to_yaml(full_yaml)
print('-' * 30)
print(full_yaml_string)
inp = input('Do you wish to continue?(Y/N)')
if inp == 'Y':
with open(f'./datasets/{dataset}/README.md', 'w') as f:
f.write('---\n')
f.write(full_yaml_string)
f.write('---')
f.write(rest_text)
else:
break
```
Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.
EDIT:
It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.
EDIT:
I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2074/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2074/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2073/comments | https://api.github.com/repos/huggingface/datasets/issues/2073/events | https://github.com/huggingface/datasets/pull/2073 | 834,192,501 | MDExOlB1bGxSZXF1ZXN0NTk1MDYyMzQ2 | 2,073 | Fixes check of TF_AVAILABLE and TORCH_AVAILABLE | {
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/philschmid",
"id": 32632186,
"login": "philschmid",
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"repos_url": "https://api.github.com/users/philschmid/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"type": "User",
"url": "https://api.github.com/users/philschmid"
} | [] | closed | false | null | [] | null | 0 | "2021-03-17T21:28:53Z" | "2021-03-18T09:09:25Z" | "2021-03-18T09:09:24Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2073.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2073",
"merged_at": "2021-03-18T09:09:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2073.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2073"
} | # What is this PR doing
This PR implements the checks if `Tensorflow` and `Pytorch` are available the same way as `transformers` does it. I added the additional checks for the different `Tensorflow` and `torch` versions. #2068 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2073/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2073/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2072/comments | https://api.github.com/repos/huggingface/datasets/issues/2072/events | https://github.com/huggingface/datasets/pull/2072 | 834,054,837 | MDExOlB1bGxSZXF1ZXN0NTk0OTQ5NjA4 | 2,072 | Fix docstring issues | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | 2 | "2021-03-17T18:13:44Z" | "2021-03-24T08:20:57Z" | "2021-03-18T12:41:21Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2072.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2072",
"merged_at": "2021-03-18T12:41:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2072.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2072"
} | Fix docstring issues. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2072/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2072/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2071/comments | https://api.github.com/repos/huggingface/datasets/issues/2071/events | https://github.com/huggingface/datasets/issues/2071 | 833,950,824 | MDU6SXNzdWU4MzM5NTA4MjQ= | 2,071 | Multiprocessing is slower than single process | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | 1 | "2021-03-17T16:08:58Z" | "2021-03-18T09:10:23Z" | "2021-03-18T09:10:23Z" | CONTRIBUTOR | null | null | null | ```python
# benchmark_filter.py
import logging
import sys
import time
from datasets import load_dataset, set_caching_enabled
if __name__ == "__main__":
set_caching_enabled(False)
logging.basicConfig(level=logging.DEBUG)
bc = load_dataset("bookcorpus")
now = time.time()
try:
bc["train"].filter(lambda x: len(x["text"]) < 64, num_proc=int(sys.argv[1]))
except Exception as e:
print(f"cancelled: {e}")
elapsed = time.time() - now
print(elapsed)
```
Running `python benchmark_filter.py 1` (20min+) is faster than `python benchmark_filter.py 2` (2hrs+) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2071/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2071/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2070/comments | https://api.github.com/repos/huggingface/datasets/issues/2070/events | https://github.com/huggingface/datasets/issues/2070 | 833,799,035 | MDU6SXNzdWU4MzM3OTkwMzU= | 2,070 | ArrowInvalid issue for squad v2 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29818977?v=4",
"events_url": "https://api.github.com/users/MichaelYxWang/events{/privacy}",
"followers_url": "https://api.github.com/users/MichaelYxWang/followers",
"following_url": "https://api.github.com/users/MichaelYxWang/following{/other_user}",
"gists_url": "https://api.github.com/users/MichaelYxWang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MichaelYxWang",
"id": 29818977,
"login": "MichaelYxWang",
"node_id": "MDQ6VXNlcjI5ODE4OTc3",
"organizations_url": "https://api.github.com/users/MichaelYxWang/orgs",
"received_events_url": "https://api.github.com/users/MichaelYxWang/received_events",
"repos_url": "https://api.github.com/users/MichaelYxWang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MichaelYxWang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichaelYxWang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MichaelYxWang"
} | [] | closed | false | null | [] | null | 1 | "2021-03-17T13:51:49Z" | "2021-08-04T17:57:16Z" | "2021-08-04T17:57:16Z" | NONE | null | null | null | Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb).
In the prepare_validation_features function, I made some modifications to tokenize a new set of quesions with the original contexts and save them in three different list called candidate_input_dis, candidate_attetion_mask and candidate_token_type_ids. When I try to run the next cell for dataset.map, I got the following error:
`ArrowInvalid: Column 1 named candidate_attention_mask expected length 1180 but got length 1178`
My code is as follows:
```
def generate_candidate_questions(examples):
val_questions = examples["question"]
candididate_questions = random.sample(datasets["train"]["question"], len(val_questions))
candididate_questions = [x[:max_length] for x in candididate_questions]
return candididate_questions
def prepare_validation_features(examples, use_mixing=False):
pad_on_right = tokenizer.padding_side == "right"
tokenized_examples = tokenizer(
examples["question" if pad_on_right else "context"],
examples["context" if pad_on_right else "question"],
truncation="only_second" if pad_on_right else "only_first",
max_length=max_length,
stride=doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
if use_mixing:
candidate_questions = generate_candidate_questions(examples)
tokenized_candidates = tokenizer(
candidate_questions if pad_on_right else examples["context"],
examples["context"] if pad_on_right else candidate_questions,
truncation="only_second" if pad_on_right else "only_first",
max_length=max_length,
stride=doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
tokenized_examples["example_id"] = []
if use_mixing:
tokenized_examples["candidate_input_ids"] = tokenized_candidates["input_ids"]
tokenized_examples["candidate_attention_mask"] = tokenized_candidates["attention_mask"]
tokenized_examples["candidate_token_type_ids"] = tokenized_candidates["token_type_ids"]
for i in range(len(tokenized_examples["input_ids"])):
sequence_ids = tokenized_examples.sequence_ids(i)
context_index = 1 if pad_on_right else 0
sample_index = sample_mapping[i]
tokenized_examples["example_id"].append(examples["id"][sample_index])
tokenized_examples["offset_mapping"][i] = [
(o if sequence_ids[k] == context_index else None)
for k, o in enumerate(tokenized_examples["offset_mapping"][i])
]
return tokenized_examples
validation_features = datasets["validation"].map(
lambda xs: prepare_validation_features(xs, True),
batched=True,
remove_columns=datasets["validation"].column_names
)
```
I guess this might happen because of the batched=True. I see similar issues in this repo related to arrow table length mismatch error, but in their cases, the numbers vary a lot. In my case, this error always happens when the expected length and unexpected length are very close. Thanks for the help! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2070/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2070/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2069/comments | https://api.github.com/repos/huggingface/datasets/issues/2069/events | https://github.com/huggingface/datasets/pull/2069 | 833,768,926 | MDExOlB1bGxSZXF1ZXN0NTk0NzA5ODYw | 2,069 | Add and fix docstring for NamedSplit | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 1 | "2021-03-17T13:19:28Z" | "2021-03-18T10:27:40Z" | "2021-03-18T10:27:40Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2069.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2069",
"merged_at": "2021-03-18T10:27:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2069.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2069"
} | Add and fix docstring for `NamedSplit`, which was missing. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2069/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2069/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2068/comments | https://api.github.com/repos/huggingface/datasets/issues/2068/events | https://github.com/huggingface/datasets/issues/2068 | 833,602,832 | MDU6SXNzdWU4MzM2MDI4MzI= | 2,068 | PyTorch not available error on SageMaker GPU docker though it is installed | {
"avatar_url": "https://avatars.githubusercontent.com/u/1651457?v=4",
"events_url": "https://api.github.com/users/sivakhno/events{/privacy}",
"followers_url": "https://api.github.com/users/sivakhno/followers",
"following_url": "https://api.github.com/users/sivakhno/following{/other_user}",
"gists_url": "https://api.github.com/users/sivakhno/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sivakhno",
"id": 1651457,
"login": "sivakhno",
"node_id": "MDQ6VXNlcjE2NTE0NTc=",
"organizations_url": "https://api.github.com/users/sivakhno/orgs",
"received_events_url": "https://api.github.com/users/sivakhno/received_events",
"repos_url": "https://api.github.com/users/sivakhno/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sivakhno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sivakhno/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sivakhno"
} | [] | closed | false | null | [] | null | 7 | "2021-03-17T10:04:27Z" | "2021-06-14T04:47:30Z" | "2021-06-14T04:47:30Z" | NONE | null | null | null | I get en error when running data loading using SageMaker SDK
```
File "main.py", line 34, in <module>
run_training()
File "main.py", line 25, in run_training
dm.setup('fit')
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
return fn(*args, **kwargs)
File "/opt/ml/code/data_module.py", line 103, in setup
self.dataset[split].set_format(type="torch", columns=self.columns)
File "/opt/conda/lib/python3.6/site-packages/datasets/fingerprint.py", line 337, in wrapper
out = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 995, in set_format
_ = get_formatter(type, **format_kwargs)
File "/opt/conda/lib/python3.6/site-packages/datasets/formatting/__init__.py", line 114, in get_formatter
raise _FORMAT_TYPES_ALIASES_UNAVAILABLE[format_type]
ValueError: PyTorch needs to be installed to be able to return PyTorch tensors.
```
when trying to execute dataset loading using this notebook https://github.com/PyTorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb, specifically lines
```
self.columns = [c for c in self.dataset[split].column_names if c in self.loader_columns]
self.dataset[split].set_format(type="torch", columns=self.columns)
```
The SageMaker docker image used is 763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.4.0-gpu-py3 .
By running container interactively I have checked that torch loading completes successfully by executing `https://github.com/huggingface/datasets/blob/master/src/datasets/config.py#L39`.
Also as a first line in the data loading module I have
```
import os
os.environ["USE_TF"] = "0"
os.environ["USE_TORCH"] = "1"
````
But unfortunately the error stills persists. Any suggestions would be appreciated as I am stack.
Many Thanks!
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2068/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2068/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2067/comments | https://api.github.com/repos/huggingface/datasets/issues/2067/events | https://github.com/huggingface/datasets/issues/2067 | 833,559,940 | MDU6SXNzdWU4MzM1NTk5NDA= | 2,067 | Multiprocessing windows error | {
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/flozi00",
"id": 47894090,
"login": "flozi00",
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"repos_url": "https://api.github.com/users/flozi00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/flozi00"
} | [] | closed | false | null | [] | null | 10 | "2021-03-17T09:12:28Z" | "2021-08-04T17:59:08Z" | "2021-08-04T17:59:08Z" | CONTRIBUTOR | null | null | null | As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
When using the num_proc argument on windows the whole Python environment crashes and hanging in loop.
For example at the map_to_array part.
An error occures because the cache file already exists and windows throws and error. After this the log crashes into an loop | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2067/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2067/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2066/comments | https://api.github.com/repos/huggingface/datasets/issues/2066/events | https://github.com/huggingface/datasets/pull/2066 | 833,480,551 | MDExOlB1bGxSZXF1ZXN0NTk0NDcwMjEz | 2,066 | Fix docstring rendering of Dataset/DatasetDict.from_csv args | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 0 | "2021-03-17T07:23:10Z" | "2021-03-17T09:21:21Z" | "2021-03-17T09:21:21Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2066.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2066",
"merged_at": "2021-03-17T09:21:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2066.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2066"
} | Fix the docstring rendering of Dataset/DatasetDict.from_csv args. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2066/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2066/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2065/comments | https://api.github.com/repos/huggingface/datasets/issues/2065/events | https://github.com/huggingface/datasets/issues/2065 | 833,291,432 | MDU6SXNzdWU4MzMyOTE0MzI= | 2,065 | Only user permission of saved cache files, not group | {
"avatar_url": "https://avatars.githubusercontent.com/u/57237365?v=4",
"events_url": "https://api.github.com/users/lorr1/events{/privacy}",
"followers_url": "https://api.github.com/users/lorr1/followers",
"following_url": "https://api.github.com/users/lorr1/following{/other_user}",
"gists_url": "https://api.github.com/users/lorr1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lorr1",
"id": 57237365,
"login": "lorr1",
"node_id": "MDQ6VXNlcjU3MjM3MzY1",
"organizations_url": "https://api.github.com/users/lorr1/orgs",
"received_events_url": "https://api.github.com/users/lorr1/received_events",
"repos_url": "https://api.github.com/users/lorr1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lorr1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorr1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lorr1"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | 26 | "2021-03-17T00:20:22Z" | "2023-03-31T12:17:06Z" | "2021-05-10T06:45:29Z" | NONE | null | null | null | Hello,
It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you know any ways around this or a way to correctly set the permissions? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2065/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2065/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2064/comments | https://api.github.com/repos/huggingface/datasets/issues/2064/events | https://github.com/huggingface/datasets/pull/2064 | 833,002,360 | MDExOlB1bGxSZXF1ZXN0NTk0MDczOTQ1 | 2,064 | Fix ted_talks_iwslt version error | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 0 | "2021-03-16T16:43:45Z" | "2021-03-16T18:00:08Z" | "2021-03-16T18:00:08Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2064.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2064",
"merged_at": "2021-03-16T18:00:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2064.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2064"
} | This PR fixes the bug where the version argument would be passed twice if the dataset configuration was created on the fly.
Fixes #2059 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2064/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2064/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2063/comments | https://api.github.com/repos/huggingface/datasets/issues/2063/events | https://github.com/huggingface/datasets/pull/2063 | 832,993,705 | MDExOlB1bGxSZXF1ZXN0NTk0MDY2NzI5 | 2,063 | [Common Voice] Adapt dataset script so that no manual data download is actually needed | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | 0 | "2021-03-16T16:33:44Z" | "2021-03-17T09:42:52Z" | "2021-03-17T09:42:37Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2063.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2063",
"merged_at": "2021-03-17T09:42:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2063.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2063"
} | This PR changes the dataset script so that no manual data dir is needed anymore. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2063/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2063/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2062 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2062/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2062/comments | https://api.github.com/repos/huggingface/datasets/issues/2062/events | https://github.com/huggingface/datasets/pull/2062 | 832,625,483 | MDExOlB1bGxSZXF1ZXN0NTkzNzUyNTMz | 2,062 | docs: fix missing quotation | {
"avatar_url": "https://avatars.githubusercontent.com/u/46561493?v=4",
"events_url": "https://api.github.com/users/neal2018/events{/privacy}",
"followers_url": "https://api.github.com/users/neal2018/followers",
"following_url": "https://api.github.com/users/neal2018/following{/other_user}",
"gists_url": "https://api.github.com/users/neal2018/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neal2018",
"id": 46561493,
"login": "neal2018",
"node_id": "MDQ6VXNlcjQ2NTYxNDkz",
"organizations_url": "https://api.github.com/users/neal2018/orgs",
"received_events_url": "https://api.github.com/users/neal2018/received_events",
"repos_url": "https://api.github.com/users/neal2018/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neal2018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neal2018/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neal2018"
} | [] | closed | false | null | [] | null | 0 | "2021-03-16T10:07:54Z" | "2021-03-17T09:21:57Z" | "2021-03-17T09:21:57Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2062.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2062",
"merged_at": "2021-03-17T09:21:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2062.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2062"
} | The json code misses a quote | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2062/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2062/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2061/comments | https://api.github.com/repos/huggingface/datasets/issues/2061/events | https://github.com/huggingface/datasets/issues/2061 | 832,596,228 | MDU6SXNzdWU4MzI1OTYyMjg= | 2,061 | Cannot load udpos subsets from xtreme dataset using load_dataset() | {
"avatar_url": "https://avatars.githubusercontent.com/u/55791365?v=4",
"events_url": "https://api.github.com/users/adzcodez/events{/privacy}",
"followers_url": "https://api.github.com/users/adzcodez/followers",
"following_url": "https://api.github.com/users/adzcodez/following{/other_user}",
"gists_url": "https://api.github.com/users/adzcodez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adzcodez",
"id": 55791365,
"login": "adzcodez",
"node_id": "MDQ6VXNlcjU1NzkxMzY1",
"organizations_url": "https://api.github.com/users/adzcodez/orgs",
"received_events_url": "https://api.github.com/users/adzcodez/received_events",
"repos_url": "https://api.github.com/users/adzcodez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adzcodez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adzcodez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adzcodez"
} | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | 6 | "2021-03-16T09:32:13Z" | "2021-06-18T11:54:11Z" | "2021-06-18T11:54:10Z" | NONE | null | null | null | Hello,
I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and faced the same error.
Reprex is:
`from datasets import load_dataset `
`dataset = load_dataset('xtreme', 'udpos.English')`
The error is:
`KeyError: '_'`
The full traceback is:
KeyError Traceback (most recent call last)
<ipython-input-5-7181359ea09d> in <module>
1 from datasets import load_dataset
----> 2 dataset = load_dataset('xtreme', 'udpos.English')
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
738
739 # Download and prepare data
--> 740 builder_instance.download_and_prepare(
741 download_config=download_config,
742 download_mode=download_mode,
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
576 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
577 if not downloaded_from_gcs:
--> 578 self._download_and_prepare(
579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
580 )
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
654 try:
655 # Prepare split will record examples associated to the split
--> 656 self._prepare_split(split_generator, **prepare_split_kwargs)
657 except OSError as e:
658 raise OSError(
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _prepare_split(self, split_generator)
977 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose
978 ):
--> 979 example = self.info.features.encode_example(record)
980 writer.write(example)
981 finally:
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example)
946 def encode_example(self, example):
947 example = cast_to_python_objects(example)
--> 948 return encode_nested_example(self, example)
949
950 def encode_batch(self, batch):
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj)
840 # Nested structures: we allow dict, list/tuples, sequences
841 if isinstance(schema, dict):
--> 842 return {
843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
844 }
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in <dictcomp>(.0)
841 if isinstance(schema, dict):
842 return {
--> 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
844 }
845 elif isinstance(schema, (list, tuple)):
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj)
868 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
869 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)):
--> 870 return schema.encode_example(obj)
871 # Other object should be directly convertible to a native Arrow type (like Translation and Translation)
872 return obj
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example_data)
647 # If a string is given, convert to associated integer
648 if isinstance(example_data, str):
--> 649 example_data = self.str2int(example_data)
650
651 # Allowing -1 to mean no label.
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in str2int(self, values)
605 if value not in self._str2int:
606 value = value.strip()
--> 607 output.append(self._str2int[str(value)])
608 else:
609 # No names provided, try to integerize
KeyError: '_'
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2061/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2061/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2060/comments | https://api.github.com/repos/huggingface/datasets/issues/2060/events | https://github.com/huggingface/datasets/pull/2060 | 832,588,591 | MDExOlB1bGxSZXF1ZXN0NTkzNzIxNzcx | 2,060 | Filtering refactor | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
}
] | null | 10 | "2021-03-16T09:23:30Z" | "2023-09-24T09:52:57Z" | "2021-10-13T09:09:03Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2060.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2060",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2060.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2060"
} | fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2060/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2060/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2059/comments | https://api.github.com/repos/huggingface/datasets/issues/2059/events | https://github.com/huggingface/datasets/issues/2059 | 832,579,156 | MDU6SXNzdWU4MzI1NzkxNTY= | 2,059 | Error while following docs to load the `ted_talks_iwslt` dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/40426312?v=4",
"events_url": "https://api.github.com/users/ekdnam/events{/privacy}",
"followers_url": "https://api.github.com/users/ekdnam/followers",
"following_url": "https://api.github.com/users/ekdnam/following{/other_user}",
"gists_url": "https://api.github.com/users/ekdnam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ekdnam",
"id": 40426312,
"login": "ekdnam",
"node_id": "MDQ6VXNlcjQwNDI2MzEy",
"organizations_url": "https://api.github.com/users/ekdnam/orgs",
"received_events_url": "https://api.github.com/users/ekdnam/received_events",
"repos_url": "https://api.github.com/users/ekdnam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ekdnam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekdnam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ekdnam"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 2 | "2021-03-16T09:12:19Z" | "2021-03-16T18:00:31Z" | "2021-03-16T18:00:07Z" | NONE | null | null | null | I am currently trying to load the `ted_talks_iwslt` dataset into google colab.
The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so.
```python
dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
```
Executing it results in the error attached below.
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-7dcc67154ef9> in <module>()
----> 1 dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
4 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
730 hash=hash,
731 features=features,
--> 732 **config_kwargs,
733 )
734
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, writer_batch_size, *args, **kwargs)
927
928 def __init__(self, *args, writer_batch_size=None, **kwargs):
--> 929 super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)
930 # Batch size used by the ArrowWriter
931 # It defines the number of samples that are kept in memory before writing them
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)
241 name,
242 custom_features=features,
--> 243 **config_kwargs,
244 )
245
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
337 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION:
338 config_kwargs["version"] = self.VERSION
--> 339 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs)
340
341 # otherwise use the config_kwargs to overwrite the attributes
/root/.cache/huggingface/modules/datasets_modules/datasets/ted_talks_iwslt/024d06b1376b361e59245c5878ab8acf9a7576d765f2d0077f61751158e60914/ted_talks_iwslt.py in __init__(self, language_pair, year, **kwargs)
219 description=description,
220 version=datasets.Version("1.1.0", ""),
--> 221 **kwargs,
222 )
223
TypeError: __init__() got multiple values for keyword argument 'version'
```
How to resolve this?
PS: Thanks a lot @huggingface team for creating this great library! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2059/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2059/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2058/comments | https://api.github.com/repos/huggingface/datasets/issues/2058/events | https://github.com/huggingface/datasets/issues/2058 | 832,159,844 | MDU6SXNzdWU4MzIxNTk4NDQ= | 2,058 | Is it possible to convert a `tfds` to HuggingFace `dataset`? | {
"avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4",
"events_url": "https://api.github.com/users/abarbosa94/events{/privacy}",
"followers_url": "https://api.github.com/users/abarbosa94/followers",
"following_url": "https://api.github.com/users/abarbosa94/following{/other_user}",
"gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abarbosa94",
"id": 6608232,
"login": "abarbosa94",
"node_id": "MDQ6VXNlcjY2MDgyMzI=",
"organizations_url": "https://api.github.com/users/abarbosa94/orgs",
"received_events_url": "https://api.github.com/users/abarbosa94/received_events",
"repos_url": "https://api.github.com/users/abarbosa94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abarbosa94"
} | [] | closed | false | null | [] | null | 1 | "2021-03-15T20:18:47Z" | "2023-07-25T16:47:40Z" | "2023-07-25T16:47:40Z" | CONTRIBUTOR | null | null | null | I was having some weird bugs with `C4`dataset version of HuggingFace, so I decided to try to download `C4`from `tfds`. I would like to know if it is possible to convert a tfds dataset to HuggingFace dataset format :)
I can also open a new issue reporting the bug I'm receiving with `datasets.load_dataset('c4','en')` in the future if you think that it would be useful.
Thanks!
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2058/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2058/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2057/comments | https://api.github.com/repos/huggingface/datasets/issues/2057/events | https://github.com/huggingface/datasets/pull/2057 | 832,120,522 | MDExOlB1bGxSZXF1ZXN0NTkzMzMzMjM0 | 2,057 | update link to ZEST dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/619844?v=4",
"events_url": "https://api.github.com/users/matt-peters/events{/privacy}",
"followers_url": "https://api.github.com/users/matt-peters/followers",
"following_url": "https://api.github.com/users/matt-peters/following{/other_user}",
"gists_url": "https://api.github.com/users/matt-peters/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/matt-peters",
"id": 619844,
"login": "matt-peters",
"node_id": "MDQ6VXNlcjYxOTg0NA==",
"organizations_url": "https://api.github.com/users/matt-peters/orgs",
"received_events_url": "https://api.github.com/users/matt-peters/received_events",
"repos_url": "https://api.github.com/users/matt-peters/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/matt-peters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matt-peters/subscriptions",
"type": "User",
"url": "https://api.github.com/users/matt-peters"
} | [] | closed | false | null | [] | null | 0 | "2021-03-15T19:22:57Z" | "2021-03-16T17:06:28Z" | "2021-03-16T17:06:28Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2057.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2057",
"merged_at": "2021-03-16T17:06:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2057.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2057"
} | Updating the link as the original one is no longer working. | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2057/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2057/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2056/comments | https://api.github.com/repos/huggingface/datasets/issues/2056/events | https://github.com/huggingface/datasets/issues/2056 | 831,718,397 | MDU6SXNzdWU4MzE3MTgzOTc= | 2,056 | issue with opus100/en-fr dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | 3 | "2021-03-15T11:32:42Z" | "2021-03-16T15:49:00Z" | "2021-03-16T15:48:59Z" | NONE | null | null | null | Hi
I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this?
Thanks a lot @lhoestq for your help in advance.
`
thread '<unnamed>' panicked at 'index out of bounds: the len is 617 but the index is 617', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/normalizer.rs:382:21
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
63%|██████████████████████████████████████████████████████████▊ | 626/1000 [00:27<00:16, 22.69ba/s]
Traceback (most recent call last):
File "run_mlm.py", line 550, in <module>
main()
File "run_mlm.py", line 412, in main
in zip(data_args.dataset_name, data_args.dataset_config_name)]
File "run_mlm.py", line 411, in <listcomp>
logger) for dataset_name, dataset_config_name\
File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 96, in get_tokenized_dataset
load_from_cache_file=not data_args.overwrite_cache,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in map
for k, dataset in self.items()
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in <dictcomp>
for k, dataset in self.items()
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1309, in map
update_data=update_data,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 204, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/fingerprint.py", line 337, in wrapper
out = func(self, *args, **kwargs)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1574, in _map_single
batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1490, in apply_function_on_filtered_inputs
function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 89, in tokenize_function
return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2347, in __call__
**kwargs,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2532, in batch_encode_plus
**kwargs,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 384, in _batch_encode_plus
is_pretokenized=is_split_into_words,
pyo3_runtime.PanicException: index out of bounds: the len is 617 but the index is 617
` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2056/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2056/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2055/comments | https://api.github.com/repos/huggingface/datasets/issues/2055/events | https://github.com/huggingface/datasets/issues/2055 | 831,684,312 | MDU6SXNzdWU4MzE2ODQzMTI= | 2,055 | is there a way to override a dataset object saved with save_to_disk? | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
} | [] | closed | false | null | [] | null | 4 | "2021-03-15T10:50:53Z" | "2021-03-22T04:06:17Z" | "2021-03-22T04:06:17Z" | NONE | null | null | null | At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2055/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2055/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2054/comments | https://api.github.com/repos/huggingface/datasets/issues/2054/events | https://github.com/huggingface/datasets/issues/2054 | 831,597,665 | MDU6SXNzdWU4MzE1OTc2NjU= | 2,054 | Could not find file for ZEST dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhadreshpsavani",
"id": 26653468,
"login": "bhadreshpsavani",
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhadreshpsavani"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 4 | "2021-03-15T09:11:58Z" | "2021-05-03T09:30:24Z" | "2021-05-03T09:30:24Z" | CONTRIBUTOR | null | null | null | I am trying to use zest dataset from Allen AI using below code in colab,
```
!pip install -q datasets
from datasets import load_dataset
dataset = load_dataset("zest")
```
I am getting the following error,
```
Using custom data configuration default
Downloading and preparing dataset zest/default (download: 5.53 MiB, generated: 19.96 MiB, post-processed: Unknown size, total: 25.48 MiB) to /root/.cache/huggingface/datasets/zest/default/0.0.0/1f7a230fbfc964d979bbca0f0130fbab3259fce547ee758ad8aa4f9c9bec6cca...
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-6-18dbbc1a4b8a> in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("zest")
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
612 )
613 elif response is not None and response.status_code == 404:
--> 614 raise FileNotFoundError("Couldn't find file at {}".format(url))
615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
616 raise ConnectionError("Couldn't reach {}".format(url))
FileNotFoundError: Couldn't find file at https://ai2-datasets.s3-us-west-2.amazonaws.com/zest/zest.zip
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2054/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2054/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2053/comments | https://api.github.com/repos/huggingface/datasets/issues/2053/events | https://github.com/huggingface/datasets/pull/2053 | 831,151,728 | MDExOlB1bGxSZXF1ZXN0NTkyNTM4ODY2 | 2,053 | Add bAbI QA tasks | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | 7 | "2021-03-14T13:04:39Z" | "2021-03-29T12:41:48Z" | "2021-03-29T12:41:48Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2053.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2053",
"merged_at": "2021-03-29T12:41:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2053.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2053"
} | - **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets,so that researchers can identify (and then rectify) the failings of their systems.*
- **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf)
- **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/)
- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.
**Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done.
Thanks :)
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2053/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2053/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2052/comments | https://api.github.com/repos/huggingface/datasets/issues/2052/events | https://github.com/huggingface/datasets/issues/2052 | 831,135,704 | MDU6SXNzdWU4MzExMzU3MDQ= | 2,052 | Timit_asr dataset repeats examples | {
"avatar_url": "https://avatars.githubusercontent.com/u/7583522?v=4",
"events_url": "https://api.github.com/users/fermaat/events{/privacy}",
"followers_url": "https://api.github.com/users/fermaat/followers",
"following_url": "https://api.github.com/users/fermaat/following{/other_user}",
"gists_url": "https://api.github.com/users/fermaat/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fermaat",
"id": 7583522,
"login": "fermaat",
"node_id": "MDQ6VXNlcjc1ODM1MjI=",
"organizations_url": "https://api.github.com/users/fermaat/orgs",
"received_events_url": "https://api.github.com/users/fermaat/received_events",
"repos_url": "https://api.github.com/users/fermaat/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fermaat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fermaat/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fermaat"
} | [] | closed | false | null | [] | null | 2 | "2021-03-14T11:43:43Z" | "2021-03-15T10:37:16Z" | "2021-03-15T10:37:16Z" | NONE | null | null | null | Summary
When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same
Steps to reproduce
As an example, on this code there is the text from the training part:
Code snippet:
```
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
timit['train']['text']
#['Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
```
The same behavior happens for other columns
Expected behavior:
Different info on the actual timit_asr dataset
Actual behavior:
When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same. I've checked datasets 1.3 and the rows are different
Debug info
Streamlit version: (get it with $ streamlit version)
Python version: Python 3.6.12
Using Conda? PipEnv? PyEnv? Pex? Using pip
OS version: Centos-release-7-9.2009.1.el7.centos.x86_64
Additional information
You can check the same behavior on https://huggingface.co/datasets/viewer/?dataset=timit_asr | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2052/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2052/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2051/comments | https://api.github.com/repos/huggingface/datasets/issues/2051/events | https://github.com/huggingface/datasets/pull/2051 | 831,027,021 | MDExOlB1bGxSZXF1ZXN0NTkyNDQ2MDU1 | 2,051 | Add MDD Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | 2 | "2021-03-14T00:01:05Z" | "2021-03-19T11:15:44Z" | "2021-03-19T10:31:59Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2051.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2051",
"merged_at": "2021-03-19T10:31:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2051.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2051"
} | - **Name:** *MDD Dataset*
- **Description:** The Movie Dialog dataset (MDD) is designed to measure how well models can perform at goal and non-goal orientated dialog centered around the topic of movies (question answering, recommendation and discussion), from various movie reviews sources such as MovieLens and OMDb.
- **Paper:** [arXiv](https://arxiv.org/pdf/1511.06931.pdf)
- **Data:** https://research.fb.com/downloads/babi/
- **Motivation:** This is one of the popular dialog datasets, a part of Facebook Research's "bAbI project".
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
**Note**: I haven't included the following from the data files: `entities` (the file containing list of all entities in the first three subtasks), `dictionary`(the dictionary of words they use in their models), `movie_kb`(contains the knowledge base of information about the movies, actors and other entities that are mentioned in the dialogs). Please let me know if those are needed, and if yes, should I make separate configurations for them? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2051/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2051/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2050 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2050/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2050/comments | https://api.github.com/repos/huggingface/datasets/issues/2050/events | https://github.com/huggingface/datasets/issues/2050 | 831,006,551 | MDU6SXNzdWU4MzEwMDY1NTE= | 2,050 | Build custom dataset to fine-tune Wav2Vec2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/72882909?v=4",
"events_url": "https://api.github.com/users/Omarnabk/events{/privacy}",
"followers_url": "https://api.github.com/users/Omarnabk/followers",
"following_url": "https://api.github.com/users/Omarnabk/following{/other_user}",
"gists_url": "https://api.github.com/users/Omarnabk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Omarnabk",
"id": 72882909,
"login": "Omarnabk",
"node_id": "MDQ6VXNlcjcyODgyOTA5",
"organizations_url": "https://api.github.com/users/Omarnabk/orgs",
"received_events_url": "https://api.github.com/users/Omarnabk/received_events",
"repos_url": "https://api.github.com/users/Omarnabk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Omarnabk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Omarnabk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Omarnabk"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 3 | "2021-03-13T22:01:10Z" | "2021-03-15T09:27:28Z" | "2021-03-15T09:27:28Z" | NONE | null | null | null | Thank you for your recent tutorial on how to finetune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcript and their audio files) in a JSON file.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2050/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2050/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2049/comments | https://api.github.com/repos/huggingface/datasets/issues/2049/events | https://github.com/huggingface/datasets/pull/2049 | 830,978,687 | MDExOlB1bGxSZXF1ZXN0NTkyNDE2MzQ0 | 2,049 | Fix text-classification tags | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | 1 | "2021-03-13T19:51:42Z" | "2021-03-16T15:47:46Z" | "2021-03-16T15:47:46Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2049.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2049",
"merged_at": "2021-03-16T15:47:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2049.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2049"
} | There are different tags for text classification right now: `text-classification` and `text_classification`:
.
This PR fixes it.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2049/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2049/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2048/comments | https://api.github.com/repos/huggingface/datasets/issues/2048/events | https://github.com/huggingface/datasets/issues/2048 | 830,953,431 | MDU6SXNzdWU4MzA5NTM0MzE= | 2,048 | github is not always available - probably need a back up | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | 0 | "2021-03-13T18:03:32Z" | "2022-04-01T15:27:10Z" | "2022-04-01T15:27:10Z" | CONTRIBUTOR | null | null | null | Yesterday morning github wasn't working:
```
:/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py--2021-03-12 18:35:59-- https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2021-03-12 18:36:11 ERROR 500: Internal Server Error.
```
Suggestion: have a failover system and replicate the data on another system and reach there if gh isn't reachable? perhaps gh can be a master and the replicate a slave - so there is only one true source. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2048/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2048/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2047/comments | https://api.github.com/repos/huggingface/datasets/issues/2047/events | https://github.com/huggingface/datasets/pull/2047 | 830,626,430 | MDExOlB1bGxSZXF1ZXN0NTkyMTI2NzQ3 | 2,047 | Multilingual dIalogAct benchMark (miam) | {
"avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4",
"events_url": "https://api.github.com/users/eusip/events{/privacy}",
"followers_url": "https://api.github.com/users/eusip/followers",
"following_url": "https://api.github.com/users/eusip/following{/other_user}",
"gists_url": "https://api.github.com/users/eusip/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eusip",
"id": 1551356,
"login": "eusip",
"node_id": "MDQ6VXNlcjE1NTEzNTY=",
"organizations_url": "https://api.github.com/users/eusip/orgs",
"received_events_url": "https://api.github.com/users/eusip/received_events",
"repos_url": "https://api.github.com/users/eusip/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eusip/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eusip"
} | [] | closed | false | null | [] | null | 4 | "2021-03-12T23:02:55Z" | "2021-03-23T10:36:34Z" | "2021-03-19T10:47:13Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2047.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2047",
"merged_at": "2021-03-19T10:47:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2047.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2047"
} | My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is assocated with a publication currently under review. We will update the dataset with full citations once the review period is over. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2047/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2047/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2046/comments | https://api.github.com/repos/huggingface/datasets/issues/2046/events | https://github.com/huggingface/datasets/issues/2046 | 830,423,033 | MDU6SXNzdWU4MzA0MjMwMzM= | 2,046 | add_faisis_index gets very slow when doing it interatively | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
} | [] | closed | false | null | [] | null | 11 | "2021-03-12T20:27:18Z" | "2021-03-24T22:29:11Z" | "2021-03-24T22:29:11Z" | NONE | null | null | null | As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this takes usually 5hrs. Is this normal? Any way to make this process faster?
@lhoestq
```
def training_step(self, batch, batch_idx) -> Dict:
if (not batch_idx==0) and (batch_idx%5==0):
print("******************************************************")
ctx_encoder=self.trainer.model.module.module.model.rag.ctx_encoder
model_copy =type(ctx_encoder)(self.config_dpr) # get a new instance #this will be load in the CPU
model_copy.load_state_dict(ctx_encoder.state_dict()) # copy weights and stuff
list_of_gpus = ['cuda:2','cuda:3']
c_dir='/custom/cache/dir'
kb_dataset = load_dataset("csv", data_files=[self.custom_config.csv_path], split="train", delimiter="\t", column_names=["title", "text"],cache_dir=c_dir)
print(kb_dataset)
n=len(list_of_gpus) #nunber of dedicated GPUs
kb_list=[kb_dataset.shard(n, i, contiguous=True) for i in range(n)]
#kb_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/haha-dir')
print(self.trainer.global_rank)
dataset_shards = self.re_encode_kb(model_copy.to(device=list_of_gpus[self.trainer.global_rank]),kb_list[self.trainer.global_rank])
output = [None for _ in list_of_gpus]
#self.trainer.accelerator_connector.accelerator.barrier("embedding_process")
dist.all_gather_object(output, dataset_shards)
#This creation and re-initlaization of the new index
if (self.trainer.global_rank==0): #saving will be done in the main process
combined_dataset = concatenate_datasets(output)
passages_path =self.config.passages_path
logger.info("saving the dataset with ")
#combined_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/MY-Passage')
combined_dataset.save_to_disk(passages_path)
logger.info("Add faiss index to the dataset that consist of embeddings")
embedding_dataset=combined_dataset
index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT)
embedding_dataset.add_faiss_index("embeddings", custom_index=index)
embedding_dataset.get_index("embeddings").save(self.config.index_path)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2046/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2046/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2045/comments | https://api.github.com/repos/huggingface/datasets/issues/2045/events | https://github.com/huggingface/datasets/pull/2045 | 830,351,527 | MDExOlB1bGxSZXF1ZXN0NTkxODc2Mjcz | 2,045 | Preserve column ordering in Dataset.rename_column | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 2 | "2021-03-12T18:26:47Z" | "2021-03-16T14:48:05Z" | "2021-03-16T14:35:05Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2045.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2045",
"merged_at": "2021-03-16T14:35:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2045.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2045"
} | Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns:
```python
>>> from datasets import Dataset
>>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]})
>>> d
Dataset({
features: ['sentences', 'label'],
num_rows: 2
})
>>> d.rename_column('sentences', 'text')
Dataset({
features: ['label', 'text'],
num_rows: 2
})
```
This PR fixes this. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2045/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2045/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2044 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2044/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2044/comments | https://api.github.com/repos/huggingface/datasets/issues/2044/events | https://github.com/huggingface/datasets/pull/2044 | 830,339,905 | MDExOlB1bGxSZXF1ZXN0NTkxODY2NzM1 | 2,044 | Add CBT dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | 2 | "2021-03-12T18:04:19Z" | "2021-03-19T11:10:13Z" | "2021-03-19T10:29:15Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2044.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2044",
"merged_at": "2021-03-19T10:29:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2044.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2044"
} | This PR adds the [CBT Dataset](https://arxiv.org/abs/1511.02301).
Note that I have also added the `raw` dataset as a separate configuration. I couldn't find a suitable "task" for it in YAML tags.
The dummy files have one example each, as the examples are slightly big. For `raw` dataset, I just used top few lines, because they are entire books and would take up a lot of space.
Let me know in case of any issues. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2044/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2044/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2043/comments | https://api.github.com/repos/huggingface/datasets/issues/2043/events | https://github.com/huggingface/datasets/pull/2043 | 830,279,098 | MDExOlB1bGxSZXF1ZXN0NTkxODE1ODAz | 2,043 | Support pickle protocol for dataset splits defined as ReadInstruction | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 2 | "2021-03-12T16:35:11Z" | "2021-03-16T14:25:38Z" | "2021-03-16T14:05:05Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2043.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2043",
"merged_at": "2021-03-16T14:05:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2043.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2043"
} | Fixes #2022 (+ some style fixes) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2043/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2043/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2042 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2042/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2042/comments | https://api.github.com/repos/huggingface/datasets/issues/2042/events | https://github.com/huggingface/datasets/pull/2042 | 830,190,276 | MDExOlB1bGxSZXF1ZXN0NTkxNzQwNzQ3 | 2,042 | Fix arrow memory checks issue in tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | 0 | "2021-03-12T14:49:52Z" | "2021-03-12T15:04:23Z" | "2021-03-12T15:04:22Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2042.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2042",
"merged_at": "2021-03-12T15:04:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2042.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2042"
} | The tests currently fail on `master` because the arrow memory verification doesn't return the expected memory evolution when loading an arrow table in memory.
From my experiments, the tests fail only when the full test suite is ran.
This made me think that maybe some arrow objects from other tests were not freeing their memory until they do and cause the memory verifications to fail in other tests.
Collecting the garbage collector before checking the arrow memory usage seems to fix this issue.
I added a context manager `assert_arrow_memory_increases` that we can use in tests and that deals with the gc. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2042/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2042/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2041/comments | https://api.github.com/repos/huggingface/datasets/issues/2041/events | https://github.com/huggingface/datasets/pull/2041 | 830,180,803 | MDExOlB1bGxSZXF1ZXN0NTkxNzMyNzMw | 2,041 | Doc2dial update data_infos and data_loaders | {
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/songfeng",
"id": 2062185,
"login": "songfeng",
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"repos_url": "https://api.github.com/users/songfeng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/songfeng"
} | [] | closed | false | null | [] | null | 0 | "2021-03-12T14:39:29Z" | "2021-03-16T11:09:20Z" | "2021-03-16T11:09:20Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2041.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2041",
"merged_at": "2021-03-16T11:09:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2041.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2041"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2041/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2041/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2040 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2040/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2040/comments | https://api.github.com/repos/huggingface/datasets/issues/2040/events | https://github.com/huggingface/datasets/issues/2040 | 830,169,387 | MDU6SXNzdWU4MzAxNjkzODc= | 2,040 | ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4",
"events_url": "https://api.github.com/users/simonschoe/events{/privacy}",
"followers_url": "https://api.github.com/users/simonschoe/followers",
"following_url": "https://api.github.com/users/simonschoe/following{/other_user}",
"gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/simonschoe",
"id": 53626067,
"login": "simonschoe",
"node_id": "MDQ6VXNlcjUzNjI2MDY3",
"organizations_url": "https://api.github.com/users/simonschoe/orgs",
"received_events_url": "https://api.github.com/users/simonschoe/received_events",
"repos_url": "https://api.github.com/users/simonschoe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/simonschoe"
} | [] | closed | false | null | [] | null | 4 | "2021-03-12T14:27:00Z" | "2021-08-04T18:00:43Z" | "2021-08-04T18:00:43Z" | NONE | null | null | null | Hi there,
I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects):
```python
concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])
```
Yielding the following error:
```python
ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk.
However datasets' indices [1] come from memory and datasets' indices [0] come from disk.
```
Been trying to solve this for quite some time now. Both `DataDict` have been created by reading in a `csv` via `load_dataset` and subsequently processed using the various `datasets` methods (i.e. filter, map, remove col, rename col). Can't figure out tho...
`load_from_disk(PATH_DATA_CLS_A)['train']` yields:
```python
Dataset({
features: ['labels', 'text'],
num_rows: 785
})
```
`load_from_disk(PATH_DATA_CLS_B)['train']` yields:
```python
Dataset({
features: ['labels', 'text'],
num_rows: 3341
})
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2040/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2040/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2039/comments | https://api.github.com/repos/huggingface/datasets/issues/2039/events | https://github.com/huggingface/datasets/pull/2039 | 830,047,652 | MDExOlB1bGxSZXF1ZXN0NTkxNjE3ODY3 | 2,039 | Doc2dial rc | {
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/songfeng",
"id": 2062185,
"login": "songfeng",
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"repos_url": "https://api.github.com/users/songfeng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/songfeng"
} | [] | closed | false | null | [] | null | 0 | "2021-03-12T11:56:28Z" | "2021-03-12T15:32:36Z" | "2021-03-12T15:32:36Z" | CONTRIBUTOR | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2039.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2039",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2039.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2039"
} | Added fix to handle the last turn that is a user turn. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2039/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2039/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2038/comments | https://api.github.com/repos/huggingface/datasets/issues/2038/events | https://github.com/huggingface/datasets/issues/2038 | 830,036,875 | MDU6SXNzdWU4MzAwMzY4NzU= | 2,038 | outdated dataset_infos.json might fail verifications | {
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/songfeng",
"id": 2062185,
"login": "songfeng",
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"repos_url": "https://api.github.com/users/songfeng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/songfeng"
} | [] | closed | false | null | [] | null | 2 | "2021-03-12T11:41:54Z" | "2021-03-16T16:27:40Z" | "2021-03-16T16:27:40Z" | CONTRIBUTOR | null | null | null | The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It would fail data_loader when verifying download checksum etc..
Could you please update this file or point me how to update this file?
Thank you. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2038/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2038/timeline | null | completed | false |
Subsets and Splits