Dataset schema (column name: inferred dtype):

- url: string (lengths 58–61)
- repository_url: string (1 class)
- labels_url: string (lengths 72–75)
- comments_url: string (lengths 67–70)
- events_url: string (lengths 65–68)
- html_url: string (lengths 46–51)
- id: int64 (599M–1.23B)
- node_id: string (lengths 18–32)
- number: int64 (1–4.31k)
- title: string (lengths 1–276)
- user: dict
- labels: list
- state: string (2 classes)
- locked: bool (1 class)
- assignee: dict
- assignees: list
- milestone: dict
- comments: sequence
- created_at: int64 (1,587B–1,652B)
- updated_at: int64 (1,587B–1,652B)
- closed_at: int64 (1,587B–1,652B, nullable)
- author_association: string (3 classes)
- active_lock_reason: null
- draft: bool (2 classes)
- pull_request: dict
- body: string (lengths 0–228k, nullable)
- reactions: dict
- timeline_url: string (lengths 67–70)
- performed_via_github_app: null
- is_pull_request: bool (2 classes)

url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | is_pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1979/comments | https://api.github.com/repos/huggingface/datasets/issues/1979/events | https://github.com/huggingface/datasets/pull/1979 | 820,977,853 | MDExOlB1bGxSZXF1ZXN0NTgzODQ3MTk3 | 1,979 | Add article_id and process test set template for semeval 2020 task 11… | {
"login": "hemildesai",
"id": 8195444,
"node_id": "MDQ6VXNlcjgxOTU0NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8195444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hemildesai",
"html_url": "https://github.com/hemildesai",
"followers_url": "https://api.github.com/users/hemildesai/followers",
"following_url": "https://api.github.com/users/hemildesai/following{/other_user}",
"gists_url": "https://api.github.com/users/hemildesai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hemildesai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hemildesai/subscriptions",
"organizations_url": "https://api.github.com/users/hemildesai/orgs",
"repos_url": "https://api.github.com/users/hemildesai/repos",
"events_url": "https://api.github.com/users/hemildesai/events{/privacy}",
"received_events_url": "https://api.github.com/users/hemildesai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,767,672,000 | 1,615,633,180,000 | 1,615,554,650,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1979",
"html_url": "https://github.com/huggingface/datasets/pull/1979",
"diff_url": "https://github.com/huggingface/datasets/pull/1979.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1979.patch",
"merged_at": 1615554650000
} | … dataset
- `article_id` is needed to create the submission file for the task at https://propaganda.qcri.org/semeval2020-task11/
- The `technique classification` task provides the span indices in a template for the test set that is necessary to complete the task. This PR implements processing of that template for the dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1979/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1978/comments | https://api.github.com/repos/huggingface/datasets/issues/1978/events | https://github.com/huggingface/datasets/pull/1978 | 820,956,806 | MDExOlB1bGxSZXF1ZXN0NTgzODI5Njgz | 1,978 | Adding ro sts dataset | {
"login": "lorinczb",
"id": 36982089,
"node_id": "MDQ6VXNlcjM2OTgyMDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/36982089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lorinczb",
"html_url": "https://github.com/lorinczb",
"followers_url": "https://api.github.com/users/lorinczb/followers",
"following_url": "https://api.github.com/users/lorinczb/following{/other_user}",
"gists_url": "https://api.github.com/users/lorinczb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lorinczb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorinczb/subscriptions",
"organizations_url": "https://api.github.com/users/lorinczb/orgs",
"repos_url": "https://api.github.com/users/lorinczb/repos",
"events_url": "https://api.github.com/users/lorinczb/events{/privacy}",
"received_events_url": "https://api.github.com/users/lorinczb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,766,133,000 | 1,614,938,414,000 | 1,614,936,835,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1978",
"html_url": "https://github.com/huggingface/datasets/pull/1978",
"diff_url": "https://github.com/huggingface/datasets/pull/1978.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1978.patch",
"merged_at": 1614936835000
} | Adding [RO-STS](https://github.com/dumitrescustefan/RO-STS) dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1978/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1977 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1977/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1977/comments | https://api.github.com/repos/huggingface/datasets/issues/1977/events | https://github.com/huggingface/datasets/issues/1977 | 820,312,022 | MDU6SXNzdWU4MjAzMTIwMjI= | 1,977 | ModuleNotFoundError: No module named 'apache_beam' for wikipedia datasets | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,712,888,000 | 1,614,766,660,000 | null | NONE | null | null | null | Hi
I am trying to run the run_mlm.py script [1] from huggingface with the "wikipedia"/"20200501.aa" dataset:
`python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.aa --do_train --do_eval --output_dir /tmp/test-mlm --max_seq_length 256
`
I am getting the error below, but as per the documentation, the huggingface datasets library provides a processed version of this dataset that users can load without setting up apache-beam. Could you please help me load this dataset?
Do you think I can run run_mlm.py with this dataset? Or is there any way I could subsample it and train the model? I would greatly appreciate a processed version of all languages for this dataset, which would allow users to work without setting up apache-beam. Thanks.
I really appreciate your help.
@lhoestq
thanks.
[1] https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
error I get:
```
>>> import datasets
>>> datasets.load_dataset("wikipedia", "20200501.aa")
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /dara/temp/cache_home_2/datasets/wikipedia/20200501.aa/1.0.0/4021357e28509391eab2f8300d9b689e7e8f3a877ebb3d354b01577d497ebc63...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/load.py", line 746, in load_dataset
use_auth_token=use_auth_token,
File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 573, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 1099, in _download_and_prepare
import apache_beam as beam
ModuleNotFoundError: No module named 'apache_beam'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1977/timeline | null | false |
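The issue above has two standard resolutions: install the Beam dependencies so the builder can run locally, or pick a language config whose preprocessed files are hosted so no Beam step runs at all. A minimal sketch, assuming the preprocessed configs of that era (e.g. `20200501.en`) are still hosted:

```python
# Option 1: install the missing dependencies so the Beam pipeline can run
# locally (package names as documented for the wikipedia builder):
#   pip install apache-beam mwparserfromhell

from datasets import load_dataset

# Option 2 (no Beam needed): load a config whose already-processed files
# are hosted; "20200501.en" is an assumption based on the docs of that era.
wiki = load_dataset("wikipedia", "20200501.en", split="train")
print(wiki[0]["title"])

# If a config must be built locally, a Beam runner can be passed instead:
# wiki = load_dataset("wikipedia", "20200501.aa", beam_runner="DirectRunner")
```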
https://api.github.com/repos/huggingface/datasets/issues/1976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1976/comments | https://api.github.com/repos/huggingface/datasets/issues/1976/events | https://github.com/huggingface/datasets/pull/1976 | 820,228,538 | MDExOlB1bGxSZXF1ZXN0NTgzMjA3NDI4 | 1,976 | Add datasets full offline mode with HF_DATASETS_OFFLINE | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,706,019,000 | 1,614,786,331,000 | 1,614,786,330,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1976",
"html_url": "https://github.com/huggingface/datasets/pull/1976",
"diff_url": "https://github.com/huggingface/datasets/pull/1976.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1976.patch",
"merged_at": 1614786330000
} | Add the HF_DATASETS_OFFLINE environment variable for users who want to use `datasets` offline without having to wait for the network timeouts/retries to happen. This was requested in https://github.com/huggingface/datasets/issues/1939
cc @stas00 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1976/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1976/timeline | null | true |
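For context, a short sketch of how the flag added in this PR is used: set it before `datasets` is imported, and cached datasets load with no network calls at all (the dataset name below is only an example):

```python
import os

# Must be set before datasets reads its config, e.g. at process start:
#   HF_DATASETS_OFFLINE=1 python train.py
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

# Succeeds instantly if "squad" was cached in a previous online run,
# and fails fast (no timeouts/retries) if it was not.
squad = load_dataset("squad")
```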
https://api.github.com/repos/huggingface/datasets/issues/1975 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1975/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1975/comments | https://api.github.com/repos/huggingface/datasets/issues/1975/events | https://github.com/huggingface/datasets/pull/1975 | 820,205,485 | MDExOlB1bGxSZXF1ZXN0NTgzMTg4NjM3 | 1,975 | Fix flake8 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,704,353,000 | 1,614,854,602,000 | 1,614,854,602,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1975",
"html_url": "https://github.com/huggingface/datasets/pull/1975",
"diff_url": "https://github.com/huggingface/datasets/pull/1975.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1975.patch",
"merged_at": 1614854602000
} | Fix flake8 style. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1975/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1974 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1974/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1974/comments | https://api.github.com/repos/huggingface/datasets/issues/1974/events | https://github.com/huggingface/datasets/pull/1974 | 820,122,223 | MDExOlB1bGxSZXF1ZXN0NTgzMTE5MDI0 | 1,974 | feat(docs): navigate with left/right arrow keys | {
"login": "ydcjeff",
"id": 32727188,
"node_id": "MDQ6VXNlcjMyNzI3MTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/32727188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydcjeff",
"html_url": "https://github.com/ydcjeff",
"followers_url": "https://api.github.com/users/ydcjeff/followers",
"following_url": "https://api.github.com/users/ydcjeff/following{/other_user}",
"gists_url": "https://api.github.com/users/ydcjeff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydcjeff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydcjeff/subscriptions",
"organizations_url": "https://api.github.com/users/ydcjeff/orgs",
"repos_url": "https://api.github.com/users/ydcjeff/repos",
"events_url": "https://api.github.com/users/ydcjeff/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydcjeff/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,698,690,000 | 1,614,854,652,000 | 1,614,854,568,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1974",
"html_url": "https://github.com/huggingface/datasets/pull/1974",
"diff_url": "https://github.com/huggingface/datasets/pull/1974.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1974.patch",
"merged_at": 1614854568000
} | Enables docs navigation with the left/right arrow keys. It can be useful for those who navigate with the keyboard a lot.
More info : https://github.com/sphinx-doc/sphinx/pull/2064
You can try here : https://29353-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1974/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1973/comments | https://api.github.com/repos/huggingface/datasets/issues/1973/events | https://github.com/huggingface/datasets/issues/1973 | 820,077,312 | MDU6SXNzdWU4MjAwNzczMTI= | 1,973 | Question: what gets stored in the datasets cache and why is it so huge? | {
"login": "ioana-blue",
"id": 17202292,
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ioana-blue",
"html_url": "https://github.com/ioana-blue",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,695,753,000 | 1,617,113,039,000 | 1,615,887,840,000 | NONE | null | null | null | I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G, which seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before, and it seems to be related to the new version of the datasets library. Any insight? Thank you! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1973/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1973/timeline | null | false |
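The question above can be answered mechanically with the cache-inspection API; a short sketch (the dataset choice is illustrative). The cache holds one Arrow copy of the raw dataset plus one new Arrow file per `map`/`filter` variant, which is usually what grows across many training jobs:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# Arrow files backing this dataset on disk
print(ds.cache_files)

# Each .map()/.filter() call with new parameters writes another Arrow
# file next to these; stale ones can be dropped explicitly:
removed = ds.cleanup_cache_files()
print(f"removed {removed} cache files")
```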
https://api.github.com/repos/huggingface/datasets/issues/1972 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1972/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1972/comments | https://api.github.com/repos/huggingface/datasets/issues/1972/events | https://github.com/huggingface/datasets/issues/1972 | 819,752,761 | MDU6SXNzdWU4MTk3NTI3NjE= | 1,972 | 'Dataset' object has no attribute 'rename_column' | {
"login": "farooqzaman1",
"id": 23195502,
"node_id": "MDQ6VXNlcjIzMTk1NTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/23195502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farooqzaman1",
"html_url": "https://github.com/farooqzaman1",
"followers_url": "https://api.github.com/users/farooqzaman1/followers",
"following_url": "https://api.github.com/users/farooqzaman1/following{/other_user}",
"gists_url": "https://api.github.com/users/farooqzaman1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/farooqzaman1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/farooqzaman1/subscriptions",
"organizations_url": "https://api.github.com/users/farooqzaman1/orgs",
"repos_url": "https://api.github.com/users/farooqzaman1/repos",
"events_url": "https://api.github.com/users/farooqzaman1/events{/privacy}",
"received_events_url": "https://api.github.com/users/farooqzaman1/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,672,109,000 | 1,614,690,483,000 | null | NONE | null | null | null | 'Dataset' object has no attribute 'rename_column' | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1972/timeline | null | false |
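The report above is terse, but this error typically means the installed `datasets` release predates `rename_column`. A sketch of the fix after upgrading (`pip install -U datasets`); the column names are hypothetical:

```python
from datasets import Dataset

ds = Dataset.from_dict({"label": [0, 1], "text": ["a", "b"]})
ds = ds.rename_column("label", "labels")  # returns a new Dataset
print(ds.column_names)  # ['labels', 'text']
```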
https://api.github.com/repos/huggingface/datasets/issues/1971 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1971/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1971/comments | https://api.github.com/repos/huggingface/datasets/issues/1971/events | https://github.com/huggingface/datasets/pull/1971 | 819,714,231 | MDExOlB1bGxSZXF1ZXN0NTgyNzgyNTU0 | 1,971 | Fix ArrowWriter closes stream at exit | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,669,154,000 | 1,615,394,217,000 | 1,615,394,217,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1971",
"html_url": "https://github.com/huggingface/datasets/pull/1971",
"diff_url": "https://github.com/huggingface/datasets/pull/1971.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1971.patch",
"merged_at": 1615394216000
The current implementation of ArrowWriter does not properly release (i.e., close) its `stream` resource if its `finalize()` method is not called, or if an exception is raised before or during the call to `finalize()`.
Therefore, ArrowWriter should be used as a context manager that properly closes its `stream` resource at exit. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1971/timeline | null | true |
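Not the library's actual code, but a minimal sketch of the context-manager pattern this PR adopts: `__exit__` guarantees the stream closes even when `finalize()` is never reached or an exception interrupts it.

```python
class Writer:
    """Toy stand-in for ArrowWriter to illustrate the resource pattern."""

    def __init__(self, path):
        self.stream = open(path, "wb")

    def finalize(self):
        self.stream.flush()  # may raise, or may never be called at all

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.stream.close()  # runs on success and on error alike

with Writer("/tmp/out.bin") as writer:
    writer.stream.write(b"...")
    writer.finalize()
```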
https://api.github.com/repos/huggingface/datasets/issues/1970 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1970/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1970/comments | https://api.github.com/repos/huggingface/datasets/issues/1970/events | https://github.com/huggingface/datasets/pull/1970 | 819,500,620 | MDExOlB1bGxSZXF1ZXN0NTgyNjAzMzEw | 1,970 | Fixing the URL filtering for bad MLSUM examples in GEM | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,648,178,000 | 1,614,655,146,000 | 1,614,650,493,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1970",
"html_url": "https://github.com/huggingface/datasets/pull/1970",
"diff_url": "https://github.com/huggingface/datasets/pull/1970.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1970.patch",
"merged_at": 1614650493000
} | This updates the code and metadata to use the updated `gem_mlsum_bad_ids_fixed.json` file provided by @juand-r
cc @sebastianGehrmann | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1970/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1967/comments | https://api.github.com/repos/huggingface/datasets/issues/1967/events | https://github.com/huggingface/datasets/pull/1967 | 819,129,568 | MDExOlB1bGxSZXF1ZXN0NTgyMjc5OTEx | 1,967 | Add Turkish News Category Dataset - 270K - Lite Version | {
"login": "yavuzKomecoglu",
"id": 5150963,
"node_id": "MDQ6VXNlcjUxNTA5NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yavuzKomecoglu",
"html_url": "https://github.com/yavuzKomecoglu",
"followers_url": "https://api.github.com/users/yavuzKomecoglu/followers",
"following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions",
"organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs",
"repos_url": "https://api.github.com/users/yavuzKomecoglu/repos",
"events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,622,919,000 | 1,614,705,900,000 | 1,614,705,900,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1967",
"html_url": "https://github.com/huggingface/datasets/pull/1967",
"diff_url": "https://github.com/huggingface/datasets/pull/1967.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1967.patch",
"merged_at": 1614705900000
} | This PR adds the Turkish News Category Dataset (270K - Lite Version), a text classification dataset by me, @basakbuluz, and @serdarakyol.
This dataset contains the same news as the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr) but carries less information, has fewer OCR errors, can be easily separated, and has been rearranged into 10 classes ("kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya", "spor", "teknoloji", "magazin", "sağlık", "gündem").
"url": "https://api.github.com/repos/huggingface/datasets/issues/1967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1967/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1966 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1966/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1966/comments | https://api.github.com/repos/huggingface/datasets/issues/1966/events | https://github.com/huggingface/datasets/pull/1966 | 819,101,253 | MDExOlB1bGxSZXF1ZXN0NTgyMjU2MzE0 | 1,966 | Fix metrics collision in separate multiprocessed experiments | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,620,718,000 | 1,614,690,345,000 | 1,614,690,344,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1966",
"html_url": "https://github.com/huggingface/datasets/pull/1966",
"diff_url": "https://github.com/huggingface/datasets/pull/1966.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1966.patch",
"merged_at": 1614690344000
} | As noticed in #1942, there's an issue with locks if you run multiple separate evaluation experiments in a multiprocessed setup.
Indeed, there is a time span in Metric._finalize() where process 0 loses its lock before re-acquiring it. This is bad since the lock of process 0 tells the other processes that the corresponding cache file is available for writing/reading/deleting: we end up with one metric cache colliding with another. This can raise FileNotFound errors when a metric tries to read a cache file that the second, conflicting metric has deleted.
To fix that, I made sure the lock file of process 0 stays acquired from the creation of the cache file to the end of the metric computation. This way, the other metrics can simply sample a new hashing name to avoid the collision.
Finally I added missing tests for separate experiments in distributed setup. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1966/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1966/timeline | null | true |
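A toy reproduction of the invariant this fix enforces, sketched with the `filelock` package (which `datasets` uses internally); the path is illustrative. The owner holds the lock for the whole computation, so a second experiment that cannot acquire it must sample a fresh cache name instead of colliding:

```python
from filelock import FileLock, Timeout

lock = FileLock("/tmp/metric-cache.arrow.lock")
try:
    with lock.acquire(timeout=0):  # process 0 keeps this until the end
        pass  # write predictions, compute the metric, read the cache
except Timeout:
    # Cache file is busy: sample a new hashed file name rather than
    # reading/deleting someone else's cache.
    pass
```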
https://api.github.com/repos/huggingface/datasets/issues/1965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1965/comments | https://api.github.com/repos/huggingface/datasets/issues/1965/events | https://github.com/huggingface/datasets/issues/1965 | 818,833,460 | MDU6SXNzdWU4MTg4MzM0NjA= | 1,965 | Can we parallelized the add_faiss_index process over dataset shards ? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,602,854,000 | 1,614,886,856,000 | 1,614,886,842,000 | NONE | null | null | null | I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them (with dataset.concatenate) before saving the faiss.index file?
I feel that, theoretically, this would reduce the accuracy of retrieval since it affects the indexing process.
@lhoestq
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1965/timeline | null | false |
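A sketch of the sharded indexing idea from the question above: `Dataset.shard` and `add_faiss_index` are real APIs, though whether merged shards preserve retrieval accuracy depends on the index type (an exact flat index is just a concatenation of vectors; trained or quantized indexes are not).

```python
import numpy as np
from datasets import Dataset

# Toy dataset; depending on the datasets/faiss versions, the embeddings
# may need to be float32 before indexing.
ds = Dataset.from_dict({"embeddings": np.random.rand(1000, 32).tolist()})

# One index per shard; each loop iteration could run in its own process.
shards = [ds.shard(num_shards=4, index=i) for i in range(4)]
for shard in shards:
    shard.add_faiss_index(column="embeddings")

query = np.random.rand(32).astype("float32")
scores, examples = shards[0].get_nearest_examples("embeddings", query, k=5)
```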
https://api.github.com/repos/huggingface/datasets/issues/1964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1964/comments | https://api.github.com/repos/huggingface/datasets/issues/1964/events | https://github.com/huggingface/datasets/issues/1964 | 818,624,864 | MDU6SXNzdWU4MTg2MjQ4NjQ= | 1,964 | Datasets.py function load_dataset does not match squad dataset | {
"login": "LeopoldACC",
"id": 44536699,
"node_id": "MDQ6VXNlcjQ0NTM2Njk5",
"avatar_url": "https://avatars.githubusercontent.com/u/44536699?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeopoldACC",
"html_url": "https://github.com/LeopoldACC",
"followers_url": "https://api.github.com/users/LeopoldACC/followers",
"following_url": "https://api.github.com/users/LeopoldACC/following{/other_user}",
"gists_url": "https://api.github.com/users/LeopoldACC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeopoldACC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeopoldACC/subscriptions",
"organizations_url": "https://api.github.com/users/LeopoldACC/orgs",
"repos_url": "https://api.github.com/users/LeopoldACC/repos",
"events_url": "https://api.github.com/users/LeopoldACC/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeopoldACC/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,588,091,000 | 1,614,870,566,000 | null | NONE | null | null | null | ### 1 When I try to train lxmert and follow the code in the README that uses --dataset_name:
```shell
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /home2/zhenggo1/checkpoint/lxmert_squad
```
The bug is:
```
Downloading and preparing dataset squad/plain_text (download: 33.51 MiB, generated: 85.75 MiB, post-processed: Unknown size, total: 119.27 MiB) to /home2/zhenggo1/.cache/huggingface/datasets/squad/plain_text/1.0.0/4c81550d83a2ac7c7ce23783bd8ff36642800e6633c1f18417fb58c3ff50cdd7...
Traceback (most recent call last):
File "examples/question-answering/run_qa.py", line 501, in <module>
main()
File "examples/question-answering/run_qa.py", line 217, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset
use_auth_token=use_auth_token,
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 633, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json']
```
I tried to find the [checksum link](https://github.com/huggingface/datasets/blob/master/datasets/squad/dataset_infos.json).
Is the problem that plain_text does not have a checksum?
### 2 When I try to train lxmert and use a local dataset:
```
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --train_file $SQUAD_DIR/train-v1.1.json --validation_file $SQUAD_DIR/dev-v1.1.json --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /home2/zhenggo1/checkpoint/lxmert_squad
```
The bug is:
```
['title', 'paragraphs']
Traceback (most recent call last):
File "examples/question-answering/run_qa.py", line 501, in <module>
main()
File "examples/question-answering/run_qa.py", line 273, in main
answer_column_name = "answers" if "answers" in column_names else column_names[2]
IndexError: list index out of range
```
I printed the answer_column_name and found that the local squad dataset needs to be preprocessed by the datasets package so that the code below can work:
```
if training_args.do_train:
column_names = datasets["train"].column_names
else:
column_names = datasets["validation"].column_names
print(datasets["train"].column_names)
question_column_name = "question" if "question" in column_names else column_names[0]
context_column_name = "context" if "context" in column_names else column_names[1]
answer_column_name = "answers" if "answers" in column_names else column_names[2]
```
## Please tell me how to fix the bug, thanks a lot! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1964/timeline | null | false |
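For the checksum error in part 1 above, two workarounds existed in `datasets` 1.x (flag names have moved between releases, so treat these as version-dependent): force a fresh download, or skip verification.

```python
from datasets import load_dataset

# Re-fetch the source files in case a stale/partial download is cached:
squad = load_dataset("squad", download_mode="force_redownload")

# Or, if the upstream file changed since the checksums were recorded:
# squad = load_dataset("squad", ignore_verifications=True)
```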
https://api.github.com/repos/huggingface/datasets/issues/1963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1963/comments | https://api.github.com/repos/huggingface/datasets/issues/1963/events | https://github.com/huggingface/datasets/issues/1963 | 818,289,967 | MDU6SXNzdWU4MTgyODk5Njc= | 1,963 | bug in SNLI dataset | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,540,980,000 | 1,614,600,089,000 | null | NONE | null | null | null | Hi
There is a label of -1 in the train set of the SNLI dataset; please find the code below:
```
import numpy as np
import datasets
data = datasets.load_dataset("snli")["train"]
labels = []
for d in data:
labels.append(d["label"])
print(np.unique(labels))
```
and the results:
`[-1 0 1 2]`
version of datasets used: `datasets 1.2.1`
thanks for your help. @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1963/timeline | null | false |
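For reference, in SNLI a label of -1 conventionally marks examples whose annotators did not agree on a gold label, so it is usually filtered out rather than treated as a class; a minimal sketch:

```python
from datasets import load_dataset

snli = load_dataset("snli", split="train")
snli = snli.filter(lambda example: example["label"] != -1)
```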
https://api.github.com/repos/huggingface/datasets/issues/1962 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1962/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1962/comments | https://api.github.com/repos/huggingface/datasets/issues/1962/events | https://github.com/huggingface/datasets/pull/1962 | 818,089,156 | MDExOlB1bGxSZXF1ZXN0NTgxNDQwNzM4 | 1,962 | Fix unused arguments | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,480,427,000 | 1,615,429,097,000 | 1,614,789,470,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1962",
"html_url": "https://github.com/huggingface/datasets/pull/1962",
"diff_url": "https://github.com/huggingface/datasets/pull/1962.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1962.patch",
"merged_at": 1614789470000
} | I noticed some args in the codebase were not used, so I found all such occurrences with Pylance and fixed them. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1962/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1961/comments | https://api.github.com/repos/huggingface/datasets/issues/1961/events | https://github.com/huggingface/datasets/pull/1961 | 818,077,947 | MDExOlB1bGxSZXF1ZXN0NTgxNDM3NDI0 | 1,961 | Add sst dataset | {
"login": "patpizio",
"id": 15801338,
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patpizio",
"html_url": "https://github.com/patpizio",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"repos_url": "https://api.github.com/users/patpizio/repos",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,478,109,000 | 1,614,854,333,000 | 1,614,854,333,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1961",
"html_url": "https://github.com/huggingface/datasets/pull/1961",
"diff_url": "https://github.com/huggingface/datasets/pull/1961.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1961.patch",
"merged_at": 1614854333000
} | Related to #1934: add the Stanford Sentiment Treebank dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1961/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1960/comments | https://api.github.com/repos/huggingface/datasets/issues/1960/events | https://github.com/huggingface/datasets/pull/1960 | 818,073,154 | MDExOlB1bGxSZXF1ZXN0NTgxNDMzOTY4 | 1,960 | Allow stateful function in dataset.map | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,475,745,000 | 1,616,513,209,000 | 1,616,513,209,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1960",
"html_url": "https://github.com/huggingface/datasets/pull/1960",
"diff_url": "https://github.com/huggingface/datasets/pull/1960.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1960.patch",
"merged_at": 1616513209000
} | Removes the "test type" section in Dataset.map which would modify the state of the stateful function. Now, the return type of the map function is inferred after processing the first example.
Fixes #1940
@lhoestq Not very happy with the usage of `nonlocal`. Would like to hear your opinion on this. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1960/timeline | null | true |
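A hypothetical stateful callable of the kind this PR is about: before the fix, `Dataset.map` made a throwaway probe call to infer the return type, which advanced the state and shifted every index.

```python
from datasets import Dataset

class AddIndex:
    """Stateful map function: keeps a running counter across examples."""

    def __init__(self):
        self.n = 0

    def __call__(self, example):
        out = {"idx": self.n}
        self.n += 1
        return out

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
ds = ds.map(AddIndex())
print(ds["idx"])  # [0, 1, 2] once no extra probe call mutates the state
```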
https://api.github.com/repos/huggingface/datasets/issues/1959 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1959/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1959/comments | https://api.github.com/repos/huggingface/datasets/issues/1959/events | https://github.com/huggingface/datasets/issues/1959 | 818,055,644 | MDU6SXNzdWU4MTgwNTU2NDQ= | 1,959 | Bug in skip_rows argument of load_dataset function ? | {
"login": "LedaguenelArthur",
"id": 73159756,
"node_id": "MDQ6VXNlcjczMTU5NzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/73159756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LedaguenelArthur",
"html_url": "https://github.com/LedaguenelArthur",
"followers_url": "https://api.github.com/users/LedaguenelArthur/followers",
"following_url": "https://api.github.com/users/LedaguenelArthur/following{/other_user}",
"gists_url": "https://api.github.com/users/LedaguenelArthur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LedaguenelArthur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LedaguenelArthur/subscriptions",
"organizations_url": "https://api.github.com/users/LedaguenelArthur/orgs",
"repos_url": "https://api.github.com/users/LedaguenelArthur/repos",
"events_url": "https://api.github.com/users/LedaguenelArthur/events{/privacy}",
"received_events_url": "https://api.github.com/users/LedaguenelArthur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,468,774,000 | 1,615,285,292,000 | 1,615,285,292,000 | NONE | null | null | null | Hello everyone,
I'm quite new to Git, so sorry in advance if I'm breaking some ground rules of issue posting... :/
I tried to use the load_dataset function from the Huggingface datasets library on a csv file, using the skip_rows argument described on the Huggingface page to skip the first row, which contains the column names
`test_dataset = load_dataset('csv', data_files=['test_wLabel.tsv'], delimiter='\t', column_names=["id", "sentence", "label"], skip_rows=1)`
But I got the following error message
`__init__() got an unexpected keyword argument 'skip_rows'`
Have I used the wrong argument? Am I missing something, or is this a bug?
Thank you very much for your time,
Best regards,
Arthur | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1959/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1958/comments | https://api.github.com/repos/huggingface/datasets/issues/1958/events | https://github.com/huggingface/datasets/issues/1958 | 818,037,548 | MDU6SXNzdWU4MTgwMzc1NDg= | 1,958 | XSum dataset download link broken | {
"login": "himat",
"id": 1156974,
"node_id": "MDQ6VXNlcjExNTY5NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1156974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/himat",
"html_url": "https://github.com/himat",
"followers_url": "https://api.github.com/users/himat/followers",
"following_url": "https://api.github.com/users/himat/following{/other_user}",
"gists_url": "https://api.github.com/users/himat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/himat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/himat/subscriptions",
"organizations_url": "https://api.github.com/users/himat/orgs",
"repos_url": "https://api.github.com/users/himat/repos",
"events_url": "https://api.github.com/users/himat/events{/privacy}",
"received_events_url": "https://api.github.com/users/himat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,462,476,000 | 1,614,462,616,000 | 1,614,462,616,000 | NONE | null | null | null | I did
```
from datasets import load_dataset
dataset = load_dataset("xsum")
```
This returns:
`ConnectionError: Couldn't reach http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1958/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1957 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1957/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1957/comments | https://api.github.com/repos/huggingface/datasets/issues/1957/events | https://github.com/huggingface/datasets/issues/1957 | 818,014,624 | MDU6SXNzdWU4MTgwMTQ2MjQ= | 1,957 | [request] make load_metric api intuitive | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,458,634,000 | 1,646,783,798,000 | null | MEMBER | null | null | null | ```
metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank)
```
May I suggest that `num_process` is confusing, as it's singular yet expects a plural value, and should either
* be deprecated in favor of `num_processes`, which is more intuitive since it's plural like its expected value,
* or, even better, mimic the established dist environment convention for that purpose, which uses `world_size`.
Same for `process_id`: why reinvent the naming and then need to explain that this is **NOT** `PID`, when we already have `rank`? That is:
```
metric = load_metric('glue', 'mrpc', world_size=world_size, rank=rank)
```
This then fits like a glove into PyTorch DDP and similar environments, and we just need to call:
* `dist.get_world_size()`
* `dist.get_rank()`
So it'd be as simple as:
```
metric = load_metric('glue', 'mrpc', world_size=dist.get_world_size(), rank=dist.get_rank())
```
From: https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group
* `world_size (int, optional)` – Number of processes participating in the job. Required if store is specified.
* `rank (int, optional)` – Rank of the current process. Required if store is specified.
And maybe an example would be useful, so that the user doesn't even need to think about where to get `dist`:
```
import torch.distributed as dist
if dist.is_initialized():
    # distributed run: tell the metric how many shards exist and which one this is
    metric = load_metric(metric_name, world_size=dist.get_world_size(), rank=dist.get_rank())
else:
    # plain single-process run
    metric = load_metric(metric_name)
```
I'm aware this is PyTorch-centric, but it's better than no examples, IMHO.
Thank you. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1957/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1957/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1956 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1956/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1956/comments | https://api.github.com/repos/huggingface/datasets/issues/1956/events | https://github.com/huggingface/datasets/issues/1956 | 818,013,741 | MDU6SXNzdWU4MTgwMTM3NDE= | 1,956 | [distributed env] potentially unsafe parallel execution | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,458,325,000 | 1,614,619,482,000 | 1,614,619,482,000 | MEMBER | null | null | null | ```
metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank)
```
presumes that there is only one set of parallel processes running: it will intermittently fail if you have multiple sets running, as they will surely overwrite each other. Similar to https://github.com/huggingface/datasets/issues/1942 (but for a different reason).
That's why dist environments use an identifier that is unique to each group, so that each group is dealt with separately.
e.g., the env-based way of PyTorch dist syncing is done with a unique per-set `MASTER_ADDR`+`MASTER_PORT`.
So ideally this interface should ask for a shared secret to do the right thing.
I'm not reporting an immediate need, but am only flagging that this will hit someone down the road.
This problem can be remedied by adding a new optional `shared_secret` option, which can then be used to differentiate between groups of processes, and this secret should be part of the file-lock name and the experiment.
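For reference, `load_metric` appears to expose an `experiment_id` parameter that plays a similar disambiguating role (treat this as an assumption to verify against the current API); a sketch of how a per-run identifier would keep concurrent groups apart:
```python
import torch.distributed as dist
from datasets import load_metric

# each training run passes its own identifier, so two concurrent runs
# on the same machine never share cache/lock files
metric = load_metric(
    "glue", "mrpc",
    num_process=dist.get_world_size(),
    process_id=dist.get_rank(),
    experiment_id="my-unique-run-id",  # hypothetical value; must differ per run
)
```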
Thank you | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1956/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1956/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1955 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1955/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1955/comments | https://api.github.com/repos/huggingface/datasets/issues/1955/events | https://github.com/huggingface/datasets/pull/1955 | 818,010,664 | MDExOlB1bGxSZXF1ZXN0NTgxMzk2OTA5 | 1,955 | typos + grammar | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,457,303,000 | 1,614,619,238,000 | 1,614,609,799,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1955",
"html_url": "https://github.com/huggingface/datasets/pull/1955",
"diff_url": "https://github.com/huggingface/datasets/pull/1955.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1955.patch",
"merged_at": 1614609799000
} | This PR proposes a few typo + grammar fixes, and rewrites some sentences in an attempt to improve readability.
N.B. When referring to the library `datasets` in the docs, it is typically treated as singular, and it definitely is singular when written as "`datasets` library"; that is, "`datasets` library is ..." and not "are ...". | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1955/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1954/comments | https://api.github.com/repos/huggingface/datasets/issues/1954/events | https://github.com/huggingface/datasets/issues/1954 | 817,565,563 | MDU6SXNzdWU4MTc1NjU1NjM= | 1,954 | add a new column | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,363,447,000 | 1,619,707,843,000 | 1,619,707,843,000 | NONE | null | null | null | Hi
I need to add a new column to the dataset. I was wondering how this can be done? Thanks
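For reference, a minimal sketch of one way to do this with the existing API: deriving the new column inside `Dataset.map`. The column name and the assumption that a `"text"` column exists are both made up for illustration:
```python
# map returns a new dataset with the extra key merged in as a column
dataset = dataset.map(lambda example: {"text_length": len(example["text"])})
```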
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1954/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1953/comments | https://api.github.com/repos/huggingface/datasets/issues/1953/events | https://github.com/huggingface/datasets/pull/1953 | 817,498,869 | MDExOlB1bGxSZXF1ZXN0NTgwOTgyMDMz | 1,953 | Documentation for to_csv, to_pandas and to_dict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,357,349,000 | 1,614,607,428,000 | 1,614,607,427,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1953",
"html_url": "https://github.com/huggingface/datasets/pull/1953",
"diff_url": "https://github.com/huggingface/datasets/pull/1953.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1953.patch",
"merged_at": 1614607427000
} | I added these methods to the documentation with a small paragraph.
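For illustration, a minimal usage sketch of the three export methods this PR documents (the dataset name and output path are placeholders):
```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc", split="train")
ds.to_csv("mrpc_train.csv")  # write the table to a CSV file
df = ds.to_pandas()          # materialize as a pandas DataFrame
d = ds.to_dict()             # plain python dict of columns
```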
I also fixed some formatting issues in the docstrings. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1953/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1953/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1952 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1952/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1952/comments | https://api.github.com/repos/huggingface/datasets/issues/1952/events | https://github.com/huggingface/datasets/pull/1952 | 817,428,160 | MDExOlB1bGxSZXF1ZXN0NTgwOTIyNjQw | 1,952 | Handle timeouts | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,351,727,000 | 1,614,608,964,000 | 1,614,608,964,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1952",
"html_url": "https://github.com/huggingface/datasets/pull/1952",
"diff_url": "https://github.com/huggingface/datasets/pull/1952.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1952.patch",
"merged_at": 1614608964000
} | As noticed in https://github.com/huggingface/datasets/issues/1939, timeouts were not properly handled when loading a dataset.
This caused the connection to hang indefinitely when working in a firewalled environment (cc @stas00).
I added a default timeout, and included an option to our offline environment for tests to be able to simulate both connection errors and timeout errors (previously it was simulating connection errors only).
Now network calls don't hang indefinitely.
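For context, a generic sketch of the pattern (not the actual patch): every HTTP call gets an explicit timeout, and the failure is caught so the caller can fall back to cached files:
```python
import requests

def head_with_timeout(url: str, timeout: float = 10.0) -> bool:
    # fail fast instead of hanging forever behind a firewall
    try:
        requests.head(url, timeout=timeout, allow_redirects=True)
        return True
    except (requests.ConnectionError, requests.Timeout):
        return False  # caller falls back to the local cache / offline mode
```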
The default timeout is set to 10 seconds (we might reduce it). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1952/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1951/comments | https://api.github.com/repos/huggingface/datasets/issues/1951/events | https://github.com/huggingface/datasets/pull/1951 | 817,423,573 | MDExOlB1bGxSZXF1ZXN0NTgwOTE4ODE2 | 1,951 | Add cross-platform support for datasets-cli | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,351,385,000 | 1,615,429,106,000 | 1,614,353,426,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1951",
"html_url": "https://github.com/huggingface/datasets/pull/1951",
"diff_url": "https://github.com/huggingface/datasets/pull/1951.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1951.patch",
"merged_at": 1614353426000
} | One thing I've noticed while going through the codebase is the usage of `scripts` in `setup.py`. This [answer](https://stackoverflow.com/a/28119736/14095927) on SO explains nicely why it's better to use `entry_points` instead of `scripts`. To add cross-platform support to the CLI, this PR replaces `scripts` with `entry_points` in `setup.py` and moves datasets-cli to src/datasets/commands/datasets_cli.py. All *.md and *.rst files are updated accordingly. The same changes were made in the transformers repo to add cross-platform support ([link to PR](https://github.com/huggingface/transformers/pull/4131)). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1951/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1950/comments | https://api.github.com/repos/huggingface/datasets/issues/1950/events | https://github.com/huggingface/datasets/pull/1950 | 817,295,235 | MDExOlB1bGxSZXF1ZXN0NTgwODExMjMz | 1,950 | updated multi_nli dataset with missing fields | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,340,476,000 | 1,614,596,910,000 | 1,614,596,909,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1950",
"html_url": "https://github.com/huggingface/datasets/pull/1950",
"diff_url": "https://github.com/huggingface/datasets/pull/1950.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1950.patch",
"merged_at": 1614596909000
} | 1) updated fields that were missing earlier
2) added tags to README
3) updated a few fields in the README
4) new dataset_infos.json and dummy files | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1950/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1949/comments | https://api.github.com/repos/huggingface/datasets/issues/1949/events | https://github.com/huggingface/datasets/issues/1949 | 816,986,936 | MDU6SXNzdWU4MTY5ODY5MzY= | 1,949 | Enable Fast Filtering using Arrow Dataset | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,308,017,000 | 1,614,367,109,000 | null | CONTRIBUTOR | null | null | null | Hi @lhoestq,
As mentioned in Issue #1796, I would love to work on enabling fast filtering/mapping. Can you please share the expectations? It would be great if you could point me to the relevant methods/files involved, or to the docs, or maybe to an overview of `arrow_dataset.py`. I only ask because I am having trouble getting started ;-;
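For context, the core idea behind Arrow-level filtering is to build a boolean mask once and apply it to the whole table in a single vectorized call, instead of materializing rows one by one. A minimal self-contained sketch with raw `pyarrow` (illustrative only, not `datasets` internals):
```python
import pyarrow as pa
import pyarrow.compute as pc

table = pa.table({"label": [0, 1, 2], "text": ["a", "b", "c"]})
mask = pc.greater(table["label"], 0)  # boolean ChunkedArray
filtered = table.filter(mask)         # one pass over the table, no Python loop
print(filtered.to_pydict())           # {'label': [1, 2], 'text': ['b', 'c']}
```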
Any help would be appreciated.
Thanks,
Gunjan | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1949/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1948/comments | https://api.github.com/repos/huggingface/datasets/issues/1948/events | https://github.com/huggingface/datasets/issues/1948 | 816,689,329 | MDU6SXNzdWU4MTY2ODkzMjk= | 1,948 | dataset loading logger level | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,278,017,000 | 1,644,324,439,000 | null | MEMBER | null | null | null | On master I get this with `--dataset_name wmt16 --dataset_config ro-en`:
```
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42e26.arrow
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-ac3bebaf4f91f776.arrow
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-810c3e61259d73a9.arrow
```
Why are those WARNINGs? Shouldn't they be INFO?
Warnings should only be used when a user needs to pay attention to something; this is just informative. I'd even say it should be DEBUG, but definitely not WARNING.
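As a side note, a workaround sketch while the level is discussed: the library exposes verbosity helpers (module path assumed to be `datasets.utils.logging`; double-check the exact import against your version):
```python
from datasets.utils.logging import set_verbosity_error

# silence everything below ERROR, including the cache-hit messages above
set_verbosity_error()
```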
Thank you.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1948/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1947 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1947/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1947/comments | https://api.github.com/repos/huggingface/datasets/issues/1947/events | https://github.com/huggingface/datasets/pull/1947 | 816,590,299 | MDExOlB1bGxSZXF1ZXN0NTgwMjI2MDk5 | 1,947 | Update documentation with not in place transforms and update DatasetDict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,270,198,000 | 1,614,609,414,000 | 1,614,609,413,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1947",
"html_url": "https://github.com/huggingface/datasets/pull/1947",
"diff_url": "https://github.com/huggingface/datasets/pull/1947.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1947.patch",
"merged_at": 1614609413000
} | The not-in-place transforms `flatten`, `remove_columns`, `rename_column` and `cast` were added in #1883.
I added them to the documentation, with a paragraph on how to use them.
You can preview the documentation [here](https://28862-250213286-gh.circle-artifacts.com/0/docs/_build/html/processing.html#renaming-removing-casting-and-flattening-columns)
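For illustration, a minimal usage sketch of the four documented transforms; each returns a new `Dataset` rather than mutating in place (the column names here are placeholders):
```python
ds = ds.flatten()                              # flatten nested feature columns
ds = ds.remove_columns(["unused_column"])      # drop columns
ds = ds.rename_column("old_name", "new_name")  # rename a column
ds = ds.cast(new_features)                     # new_features: a datasets.Features object
```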
I also added these methods to the DatasetDict class. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1947/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1947/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1946 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1946/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1946/comments | https://api.github.com/repos/huggingface/datasets/issues/1946/events | https://github.com/huggingface/datasets/pull/1946 | 816,526,294 | MDExOlB1bGxSZXF1ZXN0NTgwMTcyNzI2 | 1,946 | Implement Dataset from CSV | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,265,813,000 | 1,615,542,168,000 | 1,615,542,168,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1946",
"html_url": "https://github.com/huggingface/datasets/pull/1946",
"diff_url": "https://github.com/huggingface/datasets/pull/1946.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1946.patch",
"merged_at": 1615542168000
} | Implement `Dataset.from_csv`.
Analogous to #1943.
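A usage sketch of the proposed method (the exact signature is an assumption based on the sibling pandas-style APIs):
```python
from datasets import Dataset

# build an in-memory Dataset directly from a CSV file, with no loading script
ds = Dataset.from_csv("data.csv")
```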
If, in the end, the loading scripts should be used instead, at least we can reuse the tests here. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1946/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1945/comments | https://api.github.com/repos/huggingface/datasets/issues/1945/events | https://github.com/huggingface/datasets/issues/1945 | 816,421,966 | MDU6SXNzdWU4MTY0MjE5NjY= | 1,945 | AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets' | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,258,585,000 | 1,614,259,235,000 | 1,614,259,226,000 | NONE | null | null | null | Hi
I am trying to concatenate a list of Hugging Face datasets as:
`train_dataset = datasets.concatenate_datasets(train_datasets)`
Here is `train_datasets` when I print it:
```
[Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 120361
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 2670
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 6944
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 38140
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 173711
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 1655
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 4274
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 2019
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 2109
}), Dataset({
features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],
num_rows: 11963
})]
```
I am getting the following error:
`AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets'`
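For reference, a likely cause (an inference from the error alone, not confirmed): the local name `datasets` is bound to a `DatasetDict` somewhere, shadowing the library module. Importing the function directly sidesteps the shadowing:
```python
from datasets import concatenate_datasets

train_dataset = concatenate_datasets(train_datasets)
```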
I was wondering if you could help me with this issue, thanks a lot | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1945/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1944/comments | https://api.github.com/repos/huggingface/datasets/issues/1944/events | https://github.com/huggingface/datasets/pull/1944 | 816,267,216 | MDExOlB1bGxSZXF1ZXN0NTc5OTU2Nzc3 | 1,944 | Add Turkish News Category Dataset (270K - Lite Version) | {
"login": "yavuzKomecoglu",
"id": 5150963,
"node_id": "MDQ6VXNlcjUxNTA5NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yavuzKomecoglu",
"html_url": "https://github.com/yavuzKomecoglu",
"followers_url": "https://api.github.com/users/yavuzKomecoglu/followers",
"following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions",
"organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs",
"repos_url": "https://api.github.com/users/yavuzKomecoglu/repos",
"events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,246,322,000 | 1,614,707,201,000 | 1,614,623,001,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1944",
"html_url": "https://github.com/huggingface/datasets/pull/1944",
"diff_url": "https://github.com/huggingface/datasets/pull/1944.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1944.patch",
"merged_at": null
} | This PR adds the Turkish News Category Dataset (270K - Lite Version), a text classification dataset created by me, @basakbuluz, and @serdarakyol.
This dataset contains the same news as the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr) but with less metadata; OCR errors are reduced, the records can be easily separated, and the news items were rearranged into 10 classes ("kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya", "spor", "teknoloji", "magazin", "sağlık", "gündem").
@SBrandeis @lhoestq, can you please review this PR?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1944/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1943/comments | https://api.github.com/repos/huggingface/datasets/issues/1943/events | https://github.com/huggingface/datasets/pull/1943 | 816,160,453 | MDExOlB1bGxSZXF1ZXN0NTc5ODY5NTk0 | 1,943 | Implement Dataset from JSON and JSON Lines | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,237,453,000 | 1,616,060,528,000 | 1,616,060,528,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1943",
"html_url": "https://github.com/huggingface/datasets/pull/1943",
"diff_url": "https://github.com/huggingface/datasets/pull/1943.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1943.patch",
"merged_at": 1616060528000
} | Implement `Dataset.from_jsonl`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1943/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1942/comments | https://api.github.com/repos/huggingface/datasets/issues/1942/events | https://github.com/huggingface/datasets/issues/1942 | 816,037,520 | MDU6SXNzdWU4MTYwMzc1MjA= | 1,942 | [experiment] missing default_experiment-1-0.arrow | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,222,135,000 | 1,614,623,611,000 | null | MEMBER | null | null | null | The original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned in https://github.com/huggingface/datasets/issues/1939, metrics don't get cached; looking at my local `~/.cache/huggingface/metrics`, there are many `*.arrow.lock` files but zero metric files.
w/o the network I get:
```
FileNotFoundError: [Errno 2] No such file or directory: '~/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow
```
There is just `~/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock`.
I did run the same `run_seq2seq.py` script on an instance with network access and it worked just fine, but only the lock file was left behind.
This is with master.
Thank you. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1942/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1941/comments | https://api.github.com/repos/huggingface/datasets/issues/1941/events | https://github.com/huggingface/datasets/issues/1941 | 815,985,167 | MDU6SXNzdWU4MTU5ODUxNjc= | 1,941 | Loading of FAISS index fails for index_name = 'exact' | {
"login": "mkserge",
"id": 2992022,
"node_id": "MDQ6VXNlcjI5OTIwMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2992022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mkserge",
"html_url": "https://github.com/mkserge",
"followers_url": "https://api.github.com/users/mkserge/followers",
"following_url": "https://api.github.com/users/mkserge/following{/other_user}",
"gists_url": "https://api.github.com/users/mkserge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mkserge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mkserge/subscriptions",
"organizations_url": "https://api.github.com/users/mkserge/orgs",
"repos_url": "https://api.github.com/users/mkserge/repos",
"events_url": "https://api.github.com/users/mkserge/events{/privacy}",
"received_events_url": "https://api.github.com/users/mkserge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,216,654,000 | 1,614,263,326,000 | 1,614,263,326,000 | CONTRIBUTOR | null | null | null | Hi,
It looks like loading of FAISS index now fails when using index_name = 'exact'.
For example, from the RAG [model card](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage).
Running `transformers==4.3.2` and datasets installed from source on latest `master` branch.
```bash
(venv) sergey_mkrtchyan datasets (master) $ python
Python 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39)
[Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
>>> tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
>>> retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
Using custom data configuration dummy.psgs_w100.nq.no_index-dummy=True,with_index=False
Reusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.no_index-dummy=True,with_index=False/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)
Using custom data configuration dummy.psgs_w100.nq.exact-50b6cda57ff32ab4
Reusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.exact-50b6cda57ff32ab4/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)
0%| | 0/10 [00:00<?, ?it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 425, in from_pretrained
return cls(
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 387, in __init__
self.init_retrieval()
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 458, in init_retrieval
self.index.init_index()
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 284, in init_index
self.dataset = load_dataset(
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/load.py", line 750, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py", line 734, in as_dataset
datasets = utils.map_nested(
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/utils/py_utils.py", line 195, in map_nested
return function(data_struct)
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py", line 769, in _build_single_dataset
post_processed = self._post_process(ds, resources_paths)
File "/Users/sergey_mkrtchyan/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb/wiki_dpr.py", line 205, in _post_process
dataset.add_faiss_index("embeddings", custom_index=index)
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/arrow_dataset.py", line 2516, in add_faiss_index
super().add_faiss_index(
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py", line 416, in add_faiss_index
faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=faiss_verbose)
File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py", line 281, in add_vectors
self.faiss_index.add(vecs)
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/__init__.py", line 104, in replacement_add
self.add_c(n, swig_ptr(x))
File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/swigfaiss.py", line 3263, in add
return _swigfaiss.IndexHNSW_add(self, n, x)
RuntimeError: Error in virtual void faiss::IndexHNSW::add(faiss::Index::idx_t, const float *) at /Users/runner/work/faiss-wheels/faiss-wheels/faiss/faiss/IndexHNSW.cpp:356: Error: 'is_trained' failed
>>>
```
The issue seems to be related to the scalar quantization in faiss added in this commit: 8c5220307c33f00e01c3bf7b8. Reverting it fixes the issue.
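For context, the failure can be reproduced with faiss alone; the sketch below is a minimal guess inferred from the traceback (the index type and parameters are assumptions, not the exact ones wiki_dpr uses):
```python
import faiss
import numpy as np

d = 8  # toy dimension
# HNSW with scalar-quantized (SQ8) storage, similar in spirit to the traceback
index = faiss.IndexHNSWSQ(d, faiss.ScalarQuantizer.QT_8bit, 32)
vecs = np.random.rand(100, d).astype("float32")
# index.add(vecs)  # without training, this raises: Error: 'is_trained' failed
index.train(vecs)  # scalar quantizers must be trained before add()
index.add(vecs)
```
This suggests the post-processing step needs to train the quantized index before adding the embedding vectors.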
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1941/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1940/comments | https://api.github.com/repos/huggingface/datasets/issues/1940/events | https://github.com/huggingface/datasets/issues/1940 | 815,770,012 | MDU6SXNzdWU4MTU3NzAwMTI= | 1,940 | Side effect when filtering data due to `does_function_return_dict` call in `Dataset.map()` | {
"login": "francisco-perez-sorrosal",
"id": 918006,
"node_id": "MDQ6VXNlcjkxODAwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francisco-perez-sorrosal",
"html_url": "https://github.com/francisco-perez-sorrosal",
"followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers",
"following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}",
"gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions",
"organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs",
"repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos",
"events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}",
"received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,194,336,000 | 1,616,513,209,000 | 1,616,513,209,000 | CONTRIBUTOR | null | null | null | Hi there!
In my codebase I have a function to filter rows in a dataset, selecting only a certain number of examples per class. The function passes an extra argument to maintain a counter of the number of dataset rows/examples already selected per class; those are the examples I want to keep in the end:
```python
def fill_train_examples_per_class(example, per_class_limit: int, counter: collections.Counter):
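    # Keep the example only while its class is still under the per-class limit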
label = int(example['label'])
current_counter = counter.get(label, 0)
if current_counter < per_class_limit:
counter[label] = current_counter + 1
return True
return False
```
At some point I invoke it through the `Dataset.filter()` method in the `arrow_dataset.py` module like this:
```python
...
kwargs = {"per_class_limit": train_examples_per_class_limit, "counter": Counter()}
datasets['train'] = datasets['train'].filter(fill_train_examples_per_class, num_proc=1, fn_kwargs=kwargs)
...
```
The problem is that passing a stateful container (the counter) provokes a side effect in the resulting filtered dataset. This is because, at some point in `filter()`, `map()`'s helper `does_function_return_dict` is invoked in line [1290](https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L1290).
When this occurs, the state of the counter is modified by the test calls made on the 1 or 2 rows selected in lines 1288 and 1289 of the same file (marked as `test_inputs` and `test_indices` respectively). This happens outside the user's control (the user can't, for example, reset the state of the counter before continuing the execution), provoking an undesired side effect in the results.
In my case, the resulting dataset (even though the counter values end up correct) lacks an instance of classes 0 and 1, which happen to be the classes of the first two examples of my dataset. The rest of the classes in my dataset contain the right number of examples, as they were not affected by the `does_function_return_dict` call.
I've debugged my code extensively and made a workaround myself by hardcoding the necessary bits (basically setting `update_data=True` in line 1290), and with that I obtain the results I expected, without the side effect.
Is there a way to avoid that call to `does_function_return_dict` in `map()`'s line 1290? (e.g. by extracting the information that `does_function_return_dict` returns without making the test calls to the user function on dataset rows 0 and 1)
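In the meantime, a stateless alternative that sidesteps the issue is to precompute the indices to keep outside of `filter()` and then call `select()`. This is only a sketch (the helper name is mine, and it assumes a `label` column), not a claim about how the library should solve it:
```python
from collections import Counter

def select_per_class_indices(dataset, per_class_limit: int):
    # Build the list of row indices to keep, at most `per_class_limit` per class
    counter = Counter()
    keep = []
    for i, label in enumerate(dataset["label"]):
        if counter[int(label)] < per_class_limit:
            counter[int(label)] += 1
            keep.append(i)
    return keep

# keep = select_per_class_indices(datasets["train"], train_examples_per_class_limit)
# datasets["train"] = datasets["train"].select(keep)
```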
Thanks in advance,
Francisco Perez-Sorrosal
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1940/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1939/comments | https://api.github.com/repos/huggingface/datasets/issues/1939/events | https://github.com/huggingface/datasets/issues/1939 | 815,680,510 | MDU6SXNzdWU4MTU2ODA1MTA= | 1,939 | [firewalled env] OFFLINE mode | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,186,822,000 | 1,614,920,994,000 | 1,614,920,994,000 | MEMBER | null | null | null | This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 possible ways of going about it.
## 1. Manual
Manually prepare the data and metrics files, that is, transfer the dataset and the metrics to the firewalled instance and run:
```
DATASETS_OFFLINE=1 run_seq2seq.py --train_file xyz.csv --validation_file xyz.csv ...
```
In this mode `datasets` must not make any network calls; if some logic needs network access and the required data is missing, it should assert that the action requires network access and therefore can't proceed.
## 2. Automatic
In some clouds one can prepare a data storage ahead of time in a normal networked environment (but without GPUs), and then switch to a firewalled GPU instance that can still access all the cached data. This is the ideal situation, since in this scenario we don't have to do anything manually, but simply run the same application twice:
1. on the non-firewalled instance:
```
run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ...
```
which should download and cache everything.
2. and then, immediately after, on the firewalled instance, which shares the same filesystem:
```
DATASETS_OFFLINE=1 run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ...
```
The metrics and datasets should already be cached by invocation number 1, any network calls should be skipped, and if some data is missing the logic should assert instead of trying to fetch anything online.
## Common Issues
1. For example, currently `datasets` tries to look up datasets online if the files are json or csv, despite the paths already being provided:
```
if dataset and path in _PACKAGED_DATASETS_MODULES:
```
2. It has an issue with metrics, e.g. I had to manually copy `rouge/rouge.py` from the `datasets` repo to the current dir, otherwise it was hanging.
I had to comment out the `head_hf_s3(...)` calls to make things work, so all those `try: head_hf_s3(...)` calls shouldn't be attempted with `DATASETS_OFFLINE=1`.
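A hypothetical sketch of the guard being proposed here (the env var parsing and function name are illustrative only):
```python
import os

# Read the proposed flag once at import time
DATASETS_OFFLINE = os.environ.get("DATASETS_OFFLINE", "0") == "1"

def guarded_network_call(do_request, *args, **kwargs):
    # Fail fast instead of hanging until the firewalled request times out
    if DATASETS_OFFLINE:
        raise ConnectionError(
            "Offline mode is enabled (DATASETS_OFFLINE=1) but this action "
            "requires network access and the data is not cached."
        )
    return do_request(*args, **kwargs)
```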
Here is the corresponding issue for `transformers`: https://github.com/huggingface/transformers/issues/10379
Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1939/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1938/comments | https://api.github.com/repos/huggingface/datasets/issues/1938/events | https://github.com/huggingface/datasets/pull/1938 | 815,647,774 | MDExOlB1bGxSZXF1ZXN0NTc5NDQyNDkw | 1,938 | Disallow ClassLabel with no names | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,184,677,000 | 1,614,252,449,000 | 1,614,252,449,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1938",
"html_url": "https://github.com/huggingface/datasets/pull/1938",
"diff_url": "https://github.com/huggingface/datasets/pull/1938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1938.patch",
"merged_at": 1614252449000
} | It was possible to create a ClassLabel without specifying the names or the number of classes.
This was causing silent issues as in #1936 and breaking the conversion methods str2int and int2str.
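As an illustration of the intended behavior (the post-PR error is my assumption of what "disallow" means here):
```python
from datasets import ClassLabel

good = ClassLabel(names=["neg", "pos"])
print(good.str2int("pos"))  # -> 1
# ClassLabel()  # previously silently allowed; now expected to raise
```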
cc @justin-yan | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1938/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1937/comments | https://api.github.com/repos/huggingface/datasets/issues/1937/events | https://github.com/huggingface/datasets/issues/1937 | 815,163,943 | MDU6SXNzdWU4MTUxNjM5NDM= | 1,937 | CommonGen dataset page shows an error OSError: [Errno 28] No space left on device | {
"login": "yuchenlin",
"id": 10104354,
"node_id": "MDQ6VXNlcjEwMTA0MzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuchenlin",
"html_url": "https://github.com/yuchenlin",
"followers_url": "https://api.github.com/users/yuchenlin/followers",
"following_url": "https://api.github.com/users/yuchenlin/following{/other_user}",
"gists_url": "https://api.github.com/users/yuchenlin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuchenlin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuchenlin/subscriptions",
"organizations_url": "https://api.github.com/users/yuchenlin/orgs",
"repos_url": "https://api.github.com/users/yuchenlin/repos",
"events_url": "https://api.github.com/users/yuchenlin/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuchenlin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,149,253,000 | 1,614,337,806,000 | 1,614,337,806,000 | CONTRIBUTOR | null | null | null | The page of the CommonGen data https://huggingface.co/datasets/viewer/?dataset=common_gen shows

| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1937/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1937/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1936/comments | https://api.github.com/repos/huggingface/datasets/issues/1936/events | https://github.com/huggingface/datasets/pull/1936 | 814,726,512 | MDExOlB1bGxSZXF1ZXN0NTc4NjY3NTQ4 | 1,936 | [WIP] Adding Support for Reading Pandas Category | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,105,174,000 | 1,646,851,582,000 | 1,646,851,582,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1936",
"html_url": "https://github.com/huggingface/datasets/pull/1936",
"diff_url": "https://github.com/huggingface/datasets/pull/1936.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1936.patch",
"merged_at": null
} | @lhoestq - continuing our conversation from https://github.com/huggingface/datasets/issues/1906#issuecomment-784247014
The goal of this PR is to support `Dataset.from_pandas(df)` where the dataframe contains a Category.
Just the 4 line change below actually does seem to work:
```
>>> from datasets import Dataset
>>> import pandas as pd
>>> df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
>>> ds = Dataset.from_pandas(df)
>>> ds.to_pandas()
0
0 a
1 b
2 c
3 a
>>> ds.to_pandas().dtypes
0 category
dtype: object
```
save_to_disk, etc. all seem to work as well. The main things that are theoretically "incorrect" if we leave this are:
```
>>> ds.features.type
StructType(struct<0: int64>)
```
there are a decent number of references to this property in the library, but I can't find anything that seems to actually break as a result of this being int64 vs. dictionary. I think the gist of my question is: (a) do we *need* to change the dtype of ClassLabel and have get_nested_type return a pyarrow.DictionaryType instead of int64? and (b) do you *want* it to change? The biggest challenge I see in implementing this correctly is that the data will need to be passed in along with the pyarrow schema when instantiating the ClassLabel (I *think* this is unavoidable, since the type itself doesn't contain the actual label values), which could be a fairly intrusive change, e.g. `from_arrow_schema`'s interface would need to change to include optional arrow data. Once we start going down this path of modifying the public interfaces, I am admittedly feeling a little bit outside of my comfort zone.
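For what it's worth, deriving the label values themselves is straightforward on the pandas side; a hedged sketch (the helper name is mine, not a proposed API):
```python
import pandas as pd
from datasets import ClassLabel

def classlabel_from_categorical(series: pd.Series) -> ClassLabel:
    # Use the Categorical's categories as the label names
    return ClassLabel(names=[str(c) for c in series.cat.categories])

s = pd.Series(["a", "b", "c", "a"], dtype="category")
label = classlabel_from_categorical(s)
print(label.str2int("b"))  # -> 1
```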
Additionally, I think `int2str`, `str2int`, and `encode_example` probably won't work, but I can't find any uses of them in the library itself. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1936/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1935/comments | https://api.github.com/repos/huggingface/datasets/issues/1935/events | https://github.com/huggingface/datasets/pull/1935 | 814,623,827 | MDExOlB1bGxSZXF1ZXN0NTc4NTgyMzk1 | 1,935 | add CoVoST2 | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,097,696,000 | 1,614,190,172,000 | 1,614,189,909,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1935",
"html_url": "https://github.com/huggingface/datasets/pull/1935",
"diff_url": "https://github.com/huggingface/datasets/pull/1935.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1935.patch",
"merged_at": 1614189909000
} | This PR adds the CoVoST2 dataset for speech translation and ASR.
https://github.com/facebookresearch/covost#covost-2
The dataset requires manual download as the download page requests an email address and the URLs are temporary.
The dummy data is a bit bigger because of the mp3 files and 36 configs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1935/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1935/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1934 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1934/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1934/comments | https://api.github.com/repos/huggingface/datasets/issues/1934/events | https://github.com/huggingface/datasets/issues/1934 | 814,437,190 | MDU6SXNzdWU4MTQ0MzcxOTA= | 1,934 | Add Stanford Sentiment Treebank (SST) | {
"login": "patpizio",
"id": 15801338,
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patpizio",
"html_url": "https://github.com/patpizio",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"repos_url": "https://api.github.com/users/patpizio/repos",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,084,796,000 | 1,616,089,904,000 | 1,616,089,904,000 | CONTRIBUTOR | null | null | null | I am going to add SST:
- **Name:** The Stanford Sentiment Treebank
- **Description:** The first corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language
- **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- **Data:** https://nlp.stanford.edu/sentiment/index.html
- **Motivation:** Already requested in #353, SST is a popular dataset for Sentiment Classification
What's the difference between this and the [_SST-2_](https://huggingface.co/datasets/viewer/?dataset=glue&config=sst2) dataset included in GLUE? Essentially, SST-2 is a version of SST where:
- the labels were mapped from real numbers in [0.0, 1.0] to a binary label: {0, 1}
- the labels of the *sub-sentences* were included only in the training set
- the labels in the test set are obfuscated
So there is a lot more information in the original SST. The tricky bit is that the data is scattered across many text files and, for one in particular, I couldn't find the original encoding ([*but I'm not the only one*](https://groups.google.com/g/word2vec-toolkit/c/QIUjLw6RqFk/m/_iEeyt428wkJ) 🎵). The only solution I found was to manually replace all the è, ë, ç and so on in a `utf-8` copy of the text file. I uploaded the result to my Dropbox and I am using that as the main repo for the dataset.
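For reference, the re-encoding step boils down to something like the sketch below; the source encoding is a guess (the original one is unknown, as noted above) and the file names are placeholders:
```python
# Hypothetical: re-encode a problematic SST file to utf-8
with open("original_file.txt", encoding="latin-1") as src:
    text = src.read()
with open("original_file_utf8.txt", "w", encoding="utf-8") as dst:
    dst.write(text)
```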
Also, the _sub-sentences_ are built at run-time from the information encoded in several text files, so generating the examples is a bit more cumbersome than usual. Luckily, the dataset is not enormous.
I plan to divide the dataset into 2 configs: one with just whole sentences and their labels, the other with sentences _and their sub-sentences_ together with their labels. Each config will be split into train, validation and test. Hopefully this makes sense; we can discuss it in the PR I'm going to submit.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1934/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1934/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1933 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1933/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1933/comments | https://api.github.com/repos/huggingface/datasets/issues/1933/events | https://github.com/huggingface/datasets/pull/1933 | 814,335,846 | MDExOlB1bGxSZXF1ZXN0NTc4MzQwMzk3 | 1,933 | Use arrow ipc file format | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,076,704,000 | 1,614,076,704,000 | null | MEMBER | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1933",
"html_url": "https://github.com/huggingface/datasets/pull/1933",
"diff_url": "https://github.com/huggingface/datasets/pull/1933.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1933.patch",
"merged_at": null
} | According to the [documentation](https://arrow.apache.org/docs/format/Columnar.html?highlight=arrow1#ipc-file-format), it's identical to the streaming format except that it contains the memory offsets of each sample:
> We define a “file format” supporting random access that is build with the stream format. The file starts and ends with a magic string ARROW1 (plus padding). What follows in the file is identical to the stream format. At the end of the file, we write a footer containing a redundant copy of the schema (which is a part of the streaming format) plus memory offsets and sizes for each of the data blocks in the file. This enables random access any record batch in the file. See File.fbs for the precise details of the file footer.
Since it stores more metadata regarding the positions of the examples in the file, it should enable better example retrieval performance. However, from the discussion in https://github.com/huggingface/datasets/issues/1803 it unfortunately doesn't look like that's the case. Maybe in the future this will allow speed gains.
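For illustration, here is how the two formats compare with plain pyarrow (a sketch, independent of the `datasets` internals; file names are placeholders):
```python
import pyarrow as pa
import pyarrow.ipc as ipc

table = pa.table({"x": [1, 2, 3]})

# Streaming format: record batches only, no footer, no random access
with ipc.new_stream("data.stream.arrow", table.schema) as writer:
    writer.write_table(table)

# File format: the same batches plus a footer with per-batch offsets,
# which is what enables random access to any record batch
with ipc.new_file("data.file.arrow", table.schema) as writer:
    writer.write_table(table)
```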
I think it's still a good idea to start using it anyway for these reasons:
- in the future we may have speed gains
- it contains the arrow streaming format data
- it's compatible with the pyarrow Dataset implementation (it allows loading remote dataframes for example) if we want to use it in the future
- it's also the format used by arrow feather if we want to use it in the future
- it's roughly the same size as the streaming format
- it's easy to have backward compatibility with the streaming format
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1933/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1932/comments | https://api.github.com/repos/huggingface/datasets/issues/1932/events | https://github.com/huggingface/datasets/pull/1932 | 814,326,116 | MDExOlB1bGxSZXF1ZXN0NTc4MzMyMTQy | 1,932 | Fix builder config creation with data_dir | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,075,962,000 | 1,614,077,128,000 | 1,614,077,127,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1932",
"html_url": "https://github.com/huggingface/datasets/pull/1932",
"diff_url": "https://github.com/huggingface/datasets/pull/1932.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1932.patch",
"merged_at": 1614077127000
} | The data_dir parameter wasn't taken into account when creating the config_id, so the resulting builder config was considered not custom. However, a non-custom builder config must not have a name that collides with the predefined builder config names, which is why this resulted in a `ValueError("Cannot name a custom BuilderConfig the same as an available...")`
I fixed that by commenting out the line that used to ignore the data_dir when creating the config.
The data_dir was previously ignored, before the introduction of the config id, because we didn't want to change the config name. Now it's fine to take it into account for the config id.
Now creating a config with a data_dir works again @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1932/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1931/comments | https://api.github.com/repos/huggingface/datasets/issues/1931/events | https://github.com/huggingface/datasets/pull/1931 | 814,225,074 | MDExOlB1bGxSZXF1ZXN0NTc4MjQ4NTA5 | 1,931 | add m_lama (multilingual lama) dataset | {
"login": "pdufter",
"id": 13961899,
"node_id": "MDQ6VXNlcjEzOTYxODk5",
"avatar_url": "https://avatars.githubusercontent.com/u/13961899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdufter",
"html_url": "https://github.com/pdufter",
"followers_url": "https://api.github.com/users/pdufter/followers",
"following_url": "https://api.github.com/users/pdufter/following{/other_user}",
"gists_url": "https://api.github.com/users/pdufter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdufter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdufter/subscriptions",
"organizations_url": "https://api.github.com/users/pdufter/orgs",
"repos_url": "https://api.github.com/users/pdufter/repos",
"events_url": "https://api.github.com/users/pdufter/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdufter/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,067,917,000 | 1,614,592,863,000 | 1,614,592,863,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1931",
"html_url": "https://github.com/huggingface/datasets/pull/1931",
"diff_url": "https://github.com/huggingface/datasets/pull/1931.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1931.patch",
"merged_at": 1614592863000
} | Add a multilingual (machine translated and automatically generated) version of the LAMA benchmark. For details see the paper https://arxiv.org/pdf/2102.00894.pdf | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1931/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1930/comments | https://api.github.com/repos/huggingface/datasets/issues/1930/events | https://github.com/huggingface/datasets/pull/1930 | 814,055,198 | MDExOlB1bGxSZXF1ZXN0NTc4MTAwNzI0 | 1,930 | updated the wino_bias dataset | {
"login": "JieyuZhao",
"id": 22306304,
"node_id": "MDQ6VXNlcjIyMzA2MzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JieyuZhao",
"html_url": "https://github.com/JieyuZhao",
"followers_url": "https://api.github.com/users/JieyuZhao/followers",
"following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions",
"organizations_url": "https://api.github.com/users/JieyuZhao/orgs",
"repos_url": "https://api.github.com/users/JieyuZhao/repos",
"events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/JieyuZhao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,049,660,000 | 1,617,809,096,000 | 1,617,809,096,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1930",
"html_url": "https://github.com/huggingface/datasets/pull/1930",
"diff_url": "https://github.com/huggingface/datasets/pull/1930.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1930.patch",
"merged_at": 1617809096000
} | Updated the wino_bias.py script.
- updated the data_url
- added different configurations for different data splits
- added the coreference_cluster to the data features | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1930/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1929/comments | https://api.github.com/repos/huggingface/datasets/issues/1929/events | https://github.com/huggingface/datasets/pull/1929 | 813,929,669 | MDExOlB1bGxSZXF1ZXN0NTc3OTk1MTE4 | 1,929 | Improve typing and style and fix some inconsistencies | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,034,061,000 | 1,614,183,374,000 | 1,614,175,434,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1929",
"html_url": "https://github.com/huggingface/datasets/pull/1929",
"diff_url": "https://github.com/huggingface/datasets/pull/1929.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1929.patch",
"merged_at": 1614175433000
} | This PR:
* improves typing (mostly more consistent use of `typing.Optional`)
* `DatasetDict.cleanup_cache_files` now correctly returns a dict
* replaces `dict()` with the corresponding literal
* uses `dict_to_copy.copy()` instead of `dict(dict_to_copy)` for shallow copying | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1929/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1928/comments | https://api.github.com/repos/huggingface/datasets/issues/1928/events | https://github.com/huggingface/datasets/pull/1928 | 813,793,434 | MDExOlB1bGxSZXF1ZXN0NTc3ODgyMDM4 | 1,928 | Updating old cards | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,021,964,000 | 1,614,104,365,000 | 1,614,104,365,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1928",
"html_url": "https://github.com/huggingface/datasets/pull/1928",
"diff_url": "https://github.com/huggingface/datasets/pull/1928.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1928.patch",
"merged_at": 1614104365000
} | Updated the cards for [Allocine](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/allocine), [CNN/DailyMail](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/cnn_dailymail), and [SNLI](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/snli). For the most part, the information was just rearranged or rephrased, but the social impact statements are new. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1928/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1927/comments | https://api.github.com/repos/huggingface/datasets/issues/1927/events | https://github.com/huggingface/datasets/pull/1927 | 813,768,935 | MDExOlB1bGxSZXF1ZXN0NTc3ODYxODM5 | 1,927 | Update README.md | {
"login": "JieyuZhao",
"id": 22306304,
"node_id": "MDQ6VXNlcjIyMzA2MzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JieyuZhao",
"html_url": "https://github.com/JieyuZhao",
"followers_url": "https://api.github.com/users/JieyuZhao/followers",
"following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions",
"organizations_url": "https://api.github.com/users/JieyuZhao/orgs",
"repos_url": "https://api.github.com/users/JieyuZhao/repos",
"events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/JieyuZhao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,019,894,000 | 1,614,077,565,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1927",
"html_url": "https://github.com/huggingface/datasets/pull/1927",
"diff_url": "https://github.com/huggingface/datasets/pull/1927.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1927.patch",
"merged_at": null
} | Updated the info for the wino_bias dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1927/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1926/comments | https://api.github.com/repos/huggingface/datasets/issues/1926/events | https://github.com/huggingface/datasets/pull/1926 | 813,607,994 | MDExOlB1bGxSZXF1ZXN0NTc3NzI4Mjgy | 1,926 | Fix: Wiki_dpr - add missing scalar quantizer | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,007,925,000 | 1,614,008,994,000 | 1,614,008,993,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1926",
"html_url": "https://github.com/huggingface/datasets/pull/1926",
"diff_url": "https://github.com/huggingface/datasets/pull/1926.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1926.patch",
"merged_at": 1614008993000
} | All the prebuilt wiki_dpr indexes already use SQ8; I forgot to update the wiki_dpr script after building them. Now it's finally done.
The scalar quantizer SQ8 doesn't reduce the performance of the index as shown in retrieval experiments on RAG.
The quantizer reduces the size of the index a lot but increases index building time. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1926/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1925/comments | https://api.github.com/repos/huggingface/datasets/issues/1925/events | https://github.com/huggingface/datasets/pull/1925 | 813,600,902 | MDExOlB1bGxSZXF1ZXN0NTc3NzIyMzc3 | 1,925 | Fix: Wiki_dpr - fix when with_embeddings is False or index_name is "no_index" | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,007,426,000 | 1,614,216,828,000 | 1,614,008,168,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1925",
"html_url": "https://github.com/huggingface/datasets/pull/1925",
"diff_url": "https://github.com/huggingface/datasets/pull/1925.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1925.patch",
"merged_at": 1614008167000
} | Fix the bugs noticed in #1915
There was a bug when `with_embeddings=False` where the configuration name was the same as with `with_embeddings=True`, which led the dataset builder to perform incorrect verifications (for example, it used to expect to download the embeddings even for `with_embeddings=False`).
Another issue was that setting `index_name="no_index"` didn't set `with_index` to False.
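For reference, the two fixed configurations correspond to usage along these lines (a sketch; the kwargs follow the names used above, and the exact accepted values are an assumption):
```python
from datasets import load_dataset

no_embeddings = load_dataset("wiki_dpr", with_embeddings=False, with_index=False)
no_index = load_dataset("wiki_dpr", with_embeddings=True, index_name="no_index")
```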
I fixed both of them and added dummy data for those configurations for testing. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1925/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1924/comments | https://api.github.com/repos/huggingface/datasets/issues/1924/events | https://github.com/huggingface/datasets/issues/1924 | 813,599,733 | MDU6SXNzdWU4MTM1OTk3MzM= | 1,924 | Anonymous Dataset Addition (i.e Anonymous PR?) | {
"login": "PierreColombo",
"id": 22492839,
"node_id": "MDQ6VXNlcjIyNDkyODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PierreColombo",
"html_url": "https://github.com/PierreColombo",
"followers_url": "https://api.github.com/users/PierreColombo/followers",
"following_url": "https://api.github.com/users/PierreColombo/following{/other_user}",
"gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions",
"organizations_url": "https://api.github.com/users/PierreColombo/orgs",
"repos_url": "https://api.github.com/users/PierreColombo/repos",
"events_url": "https://api.github.com/users/PierreColombo/events{/privacy}",
"received_events_url": "https://api.github.com/users/PierreColombo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,614,007,350,000 | 1,614,104,890,000 | null | CONTRIBUTOR | null | null | null | Hello,
Thanks a lot for your library.
We plan to submit a paper on OpenReview using the anonymous setting. Is it possible to add a new dataset without breaking anonymity, with a link to the paper?
Cheers
@eusip | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1924/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1923/comments | https://api.github.com/repos/huggingface/datasets/issues/1923/events | https://github.com/huggingface/datasets/pull/1923 | 813,363,472 | MDExOlB1bGxSZXF1ZXN0NTc3NTI0MTU0 | 1,923 | Fix save_to_disk with relative path | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,989,639,000 | 1,613,992,964,000 | 1,613,992,963,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1923",
"html_url": "https://github.com/huggingface/datasets/pull/1923",
"diff_url": "https://github.com/huggingface/datasets/pull/1923.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1923.patch",
"merged_at": 1613992963000
} | As noticed in #1919 and #1920 the target directory was not created using `makedirs`, so saving to it raises `FileNotFoundError`. For absolute paths it worked, but not for the right reason: the target path was the same as the temporary path where the in-memory data are written as an intermediate step.
I added the `makedirs` call using `fs.makedirs` in order to support remote filesystems.
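A minimal sketch of the idea, assuming fsspec's `AbstractFileSystem.makedirs` API (the local filesystem is used here, but the same call works for remote filesystems):
```python
import fsspec

# resolve a filesystem for the target; "file" is the local filesystem, and the
# same call works for any fsspec-backed remote filesystem (e.g. s3)
fs = fsspec.filesystem("file")
fs.makedirs("path/to/dataset/train", exist_ok=True)  # illustrative target path
```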
I also fixed the issue with the target path being the temporary path.
I added a test case for relative paths as well for save_to_disk.
Thanks to @M-Salti for reporting and investigating | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1923/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1923/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1922/comments | https://api.github.com/repos/huggingface/datasets/issues/1922/events | https://github.com/huggingface/datasets/issues/1922 | 813,140,806 | MDU6SXNzdWU4MTMxNDA4MDY= | 1,922 | How to update the "wino_bias" dataset | {
"login": "JieyuZhao",
"id": 22306304,
"node_id": "MDQ6VXNlcjIyMzA2MzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JieyuZhao",
"html_url": "https://github.com/JieyuZhao",
"followers_url": "https://api.github.com/users/JieyuZhao/followers",
"following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions",
"organizations_url": "https://api.github.com/users/JieyuZhao/orgs",
"repos_url": "https://api.github.com/users/JieyuZhao/repos",
"events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/JieyuZhao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,972,379,000 | 1,613,990,159,000 | null | CONTRIBUTOR | null | null | null | Hi all,
Thanks for the efforts to collect all the datasets! But I think there is a problem with the wino_bias dataset. The current link is not correct. How can I update that?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1922/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1921 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1921/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1921/comments | https://api.github.com/repos/huggingface/datasets/issues/1921/events | https://github.com/huggingface/datasets/pull/1921 | 812,716,042 | MDExOlB1bGxSZXF1ZXN0NTc3MDEzMDM4 | 1,921 | Standardizing datasets dtypes | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,858,641,000 | 1,613,987,050,000 | 1,613,987,050,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1921",
"html_url": "https://github.com/huggingface/datasets/pull/1921",
"diff_url": "https://github.com/huggingface/datasets/pull/1921.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1921.patch",
"merged_at": 1613987050000
} | This PR follows up on discussion in #1900 to have an explicit set of basic dtypes for datasets.
This moves away from str(pyarrow.DataType) as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes.
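An illustrative sketch of the approach (names are hypothetical, not this PR's actual code):
```python
import pyarrow as pa

# explicit allow-list of supported dtype strings -> pyarrow types
_DTYPE_TO_PA_TYPE = {
    "bool": pa.bool_(),
    "int32": pa.int32(),
    "int64": pa.int64(),
    "float32": pa.float32(),  # the official datasets name for pa.float32()
    "float64": pa.float64(),  # corresponds to pyarrow's "double" type
    "string": pa.string(),
}

def string_to_arrow(dtype: str) -> pa.DataType:
    if dtype not in _DTYPE_TO_PA_TYPE:
        raise ValueError(f"Unsupported dtype for Value: {dtype}")
    return _DTYPE_TO_PA_TYPE[dtype]
```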
I believe in practice this should be backward compatible, since anyone previously using Value() would only have been able to use dtypes that had an identically named pyarrow factory function, which are all explicitly supported here, with `float32` and `float64` acting as the official datasets dtypes, which resolves the tension between `double` being the pyarrow dtype and `float64` being the pyarrow type factory function. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1921/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1920/comments | https://api.github.com/repos/huggingface/datasets/issues/1920/events | https://github.com/huggingface/datasets/pull/1920 | 812,628,220 | MDExOlB1bGxSZXF1ZXN0NTc2OTQ5NzI2 | 1,920 | Fix save_to_disk issue | {
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,830,959,000 | 1,613,989,811,000 | 1,613,989,811,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1920",
"html_url": "https://github.com/huggingface/datasets/pull/1920",
"diff_url": "https://github.com/huggingface/datasets/pull/1920.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1920.patch",
"merged_at": null
} | Fixes #1919
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1920/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1919/comments | https://api.github.com/repos/huggingface/datasets/issues/1919/events | https://github.com/huggingface/datasets/issues/1919 | 812,626,872 | MDU6SXNzdWU4MTI2MjY4NzI= | 1,919 | Failure to save with save_to_disk | {
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,830,690,000 | 1,614,793,227,000 | 1,614,793,227,000 | CONTRIBUTOR | null | null | null | When I try to save a dataset locally using the `save_to_disk` method I get the error:
```bash
FileNotFoundError: [Errno 2] No such file or directory: '/content/squad/train/squad-train.arrow'
```
To replicate:
1. Install `datasets` from master
2. Run this code:
```python
from datasets import load_dataset
squad = load_dataset("squad") # or any other dataset
squad.save_to_disk("squad") # error here
```
The problem is that the method is not creating a directory named `dataset_path` to save the dataset in (i.e. it's not creating the *train* and *validation* directories in this case). After creating the directories manually, the problem is resolved.
I'll open a PR soon doing that and linking this issue.
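In the meantime, a possible stop-gap (an untested sketch, not the actual fix) is to create the per-split directories yourself before saving:
```python
import os
from datasets import load_dataset

squad = load_dataset("squad")
# pre-create the per-split directories that save_to_disk expects to exist
for split in squad:  # "train" and "validation" for squad
    os.makedirs(os.path.join("squad", split), exist_ok=True)
squad.save_to_disk("squad")
```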
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1919/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1918 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1918/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1918/comments | https://api.github.com/repos/huggingface/datasets/issues/1918/events | https://github.com/huggingface/datasets/pull/1918 | 812,541,510 | MDExOlB1bGxSZXF1ZXN0NTc2ODg2OTQ0 | 1,918 | Fix QA4MRE download URLs | {
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,806,337,000 | 1,614,000,906,000 | 1,614,000,906,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1918",
"html_url": "https://github.com/huggingface/datasets/pull/1918",
"diff_url": "https://github.com/huggingface/datasets/pull/1918.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1918.patch",
"merged_at": 1614000906000
} | The URLs in the `dataset_infos` and `README` are correct, only the ones in the download script needed updating. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1918/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1917/comments | https://api.github.com/repos/huggingface/datasets/issues/1917/events | https://github.com/huggingface/datasets/issues/1917 | 812,390,178 | MDU6SXNzdWU4MTIzOTAxNzg= | 1,917 | UnicodeDecodeError: windows 10 machine | {
"login": "yosiasz",
"id": 900951,
"node_id": "MDQ6VXNlcjkwMDk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/900951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yosiasz",
"html_url": "https://github.com/yosiasz",
"followers_url": "https://api.github.com/users/yosiasz/followers",
"following_url": "https://api.github.com/users/yosiasz/following{/other_user}",
"gists_url": "https://api.github.com/users/yosiasz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yosiasz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yosiasz/subscriptions",
"organizations_url": "https://api.github.com/users/yosiasz/orgs",
"repos_url": "https://api.github.com/users/yosiasz/repos",
"events_url": "https://api.github.com/users/yosiasz/events{/privacy}",
"received_events_url": "https://api.github.com/users/yosiasz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,772,785,000 | 1,613,774,471,000 | 1,613,774,428,000 | NONE | null | null | null | Windows 10
Python 3.6.8
when running
```python
import datasets
oscar_am = datasets.load_dataset("oscar", "unshuffled_deduplicated_am")
print(oscar_am["train"][0])
```
I get the following error:
```
file "C:\PYTHON\3.6.8\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 58: character maps to <undefined>
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1917/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1917/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1916/comments | https://api.github.com/repos/huggingface/datasets/issues/1916/events | https://github.com/huggingface/datasets/pull/1916 | 812,291,984 | MDExOlB1bGxSZXF1ZXN0NTc2NjgwNjY5 | 1,916 | Remove unused py_utils objects | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,764,285,000 | 1,614,005,816,000 | 1,614,000,769,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1916",
"html_url": "https://github.com/huggingface/datasets/pull/1916",
"diff_url": "https://github.com/huggingface/datasets/pull/1916.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1916.patch",
"merged_at": 1614000769000
} | Remove unused/unnecessary py_utils functions/classes. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1916/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1915/comments | https://api.github.com/repos/huggingface/datasets/issues/1915/events | https://github.com/huggingface/datasets/issues/1915 | 812,229,654 | MDU6SXNzdWU4MTIyMjk2NTQ= | 1,915 | Unable to download `wiki_dpr` | {
"login": "nitarakad",
"id": 18504534,
"node_id": "MDQ6VXNlcjE4NTA0NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/18504534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nitarakad",
"html_url": "https://github.com/nitarakad",
"followers_url": "https://api.github.com/users/nitarakad/followers",
"following_url": "https://api.github.com/users/nitarakad/following{/other_user}",
"gists_url": "https://api.github.com/users/nitarakad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nitarakad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nitarakad/subscriptions",
"organizations_url": "https://api.github.com/users/nitarakad/orgs",
"repos_url": "https://api.github.com/users/nitarakad/repos",
"events_url": "https://api.github.com/users/nitarakad/events{/privacy}",
"received_events_url": "https://api.github.com/users/nitarakad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,758,292,000 | 1,614,793,248,000 | 1,614,793,248,000 | NONE | null | null | null | I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran:
`curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")`
However, I got the following error:
`datasets.utils.info_utils.UnexpectedDownloadedFile: {'embeddings_index'}`
I tried adding in flags `with_embeddings=False` and `with_index=False`:
`curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False, embeddings_name="multiset", index_name="no_index")`
But I got the following error:
`raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_5’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_15’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_30’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_36’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_18’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_41’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_13’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_48’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_10’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_23’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_14’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_34’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_43’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_40’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_47’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_3’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_24’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_7’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_33’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_46’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_42’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_27’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_29’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_26’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_22’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_4’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_20’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_39’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_6’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_16’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_8’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_35’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_49’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_17’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_25’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_0’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_38’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_12’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_44’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_1’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_32’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_19’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_31’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_37’, 
‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_9’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_11’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_21’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_28’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_45’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_2’}`
Is there anything else I need to set to download the dataset?
**UPDATE**: just running `curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False)` gives me the same error.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1915/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1914/comments | https://api.github.com/repos/huggingface/datasets/issues/1914/events | https://github.com/huggingface/datasets/pull/1914 | 812,149,201 | MDExOlB1bGxSZXF1ZXN0NTc2NTYyNTkz | 1,914 | Fix logging imports and make all datasets use library logger | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,751,154,000 | 1,613,936,883,000 | 1,613,936,883,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1914",
"html_url": "https://github.com/huggingface/datasets/pull/1914",
"diff_url": "https://github.com/huggingface/datasets/pull/1914.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1914.patch",
"merged_at": 1613936883000
} | Fix library relative logging imports and make all datasets use library logger. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1914/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1913/comments | https://api.github.com/repos/huggingface/datasets/issues/1913/events | https://github.com/huggingface/datasets/pull/1913 | 812,127,307 | MDExOlB1bGxSZXF1ZXN0NTc2NTQ0NjQw | 1,913 | Add keep_linebreaks parameter to text loader | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,749,425,000 | 1,613,759,772,000 | 1,613,759,771,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1913",
"html_url": "https://github.com/huggingface/datasets/pull/1913",
"diff_url": "https://github.com/huggingface/datasets/pull/1913.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1913.patch",
"merged_at": 1613759771000
} | As asked in #870 and https://github.com/huggingface/transformers/issues/10269 there should be a parameter to keep the linebreaks when loading a text dataset.
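A sketch of the assumed usage once this lands (the parameter name is taken from this PR's title; the data file is illustrative):
```python
from datasets import load_dataset

# keep the trailing "\n" on each line instead of stripping it
ds = load_dataset("text", data_files={"train": "train.txt"}, keep_linebreaks=True)
```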
cc @sgugger @jncasey | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1913/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1913/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1912 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1912/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1912/comments | https://api.github.com/repos/huggingface/datasets/issues/1912/events | https://github.com/huggingface/datasets/pull/1912 | 812,034,140 | MDExOlB1bGxSZXF1ZXN0NTc2NDY2ODQx | 1,912 | Update: WMT - use mirror links | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,742,154,000 | 1,614,174,293,000 | 1,614,174,293,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1912",
"html_url": "https://github.com/huggingface/datasets/pull/1912",
"diff_url": "https://github.com/huggingface/datasets/pull/1912.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1912.patch",
"merged_at": 1614174293000
} | As asked in #1892 I created mirrors of the data hosted on statmt.org and updated the wmt scripts.
Now downloading the wmt datasets is blazing fast :)
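Nothing should change on the user side: loading goes through the mirror links transparently, e.g. (an arbitrary WMT config, chosen for illustration):
```python
from datasets import load_dataset

ds = load_dataset("wmt16", "ro-en", split="train")
```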
cc @stas00 @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1912/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1912/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1911 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1911/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1911/comments | https://api.github.com/repos/huggingface/datasets/issues/1911/events | https://github.com/huggingface/datasets/issues/1911 | 812,009,956 | MDU6SXNzdWU4MTIwMDk5NTY= | 1,911 | Saving processed dataset running infinitely | {
"login": "ayubSubhaniya",
"id": 20911334,
"node_id": "MDQ6VXNlcjIwOTExMzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayubSubhaniya",
"html_url": "https://github.com/ayubSubhaniya",
"followers_url": "https://api.github.com/users/ayubSubhaniya/followers",
"following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}",
"gists_url": "https://api.github.com/users/ayubSubhaniya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayubSubhaniya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayubSubhaniya/subscriptions",
"organizations_url": "https://api.github.com/users/ayubSubhaniya/orgs",
"repos_url": "https://api.github.com/users/ayubSubhaniya/repos",
"events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayubSubhaniya/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,740,159,000 | 1,614,065,684,000 | null | NONE | null | null | null | I have a text dataset of size 220M.
For pre-processing, I need to tokenize it and filter out rows with overly long sequences.
My tokenization took roughly 3 hrs. I used map() with batch size 1024 and multiprocessing with 96 processes.
The filter() function was way too slow, so I used a hack that calls the pyarrow table filter function directly, which is damn fast; mentioned [here](https://github.com/huggingface/datasets/issues/1796):
```dataset._data = dataset._data.filter(...)```
It took 1 hr for the filter.
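For reference, a hedged expansion of the snippet above (it relies on the private `_data` attribute and may break between versions; `num_tokens` is a hypothetical column holding precomputed sequence lengths):
```python
import pyarrow.compute as pc

# build a boolean mask over the underlying arrow table and filter it directly,
# bypassing datasets' much slower row-by-row filter()
mask = pc.less_equal(dataset._data["num_tokens"], 512)
dataset._data = dataset._data.filter(mask)
```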
Then I used `save_to_disk()` on the processed dataset, and it has been running forever.
I have been waiting for 8 hrs and it has not written a single byte.
In fact, it has actually read more than 100 GB from disk; the screenshot below shows the stats using `iotop`.
The second process is the one in question.
<img width="1672" alt="Screenshot 2021-02-19 at 6 36 53 PM" src="https://user-images.githubusercontent.com/20911334/108508197-7325d780-72e1-11eb-8369-7c057d137d81.png">
I am not able to figure out whether this is an issue with the datasets library or whether it is due to my hack for the `filter()` function. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1911/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1910/comments | https://api.github.com/repos/huggingface/datasets/issues/1910/events | https://github.com/huggingface/datasets/pull/1910 | 811,697,108 | MDExOlB1bGxSZXF1ZXN0NTc2MTg0MDQ3 | 1,910 | Adding CoNLLpp dataset. | {
"login": "ZihanWangKi",
"id": 21319243,
"node_id": "MDQ6VXNlcjIxMzE5MjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/21319243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZihanWangKi",
"html_url": "https://github.com/ZihanWangKi",
"followers_url": "https://api.github.com/users/ZihanWangKi/followers",
"following_url": "https://api.github.com/users/ZihanWangKi/following{/other_user}",
"gists_url": "https://api.github.com/users/ZihanWangKi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZihanWangKi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZihanWangKi/subscriptions",
"organizations_url": "https://api.github.com/users/ZihanWangKi/orgs",
"repos_url": "https://api.github.com/users/ZihanWangKi/repos",
"events_url": "https://api.github.com/users/ZihanWangKi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZihanWangKi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,711,550,000 | 1,614,895,367,000 | 1,614,895,367,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1910",
"html_url": "https://github.com/huggingface/datasets/pull/1910",
"diff_url": "https://github.com/huggingface/datasets/pull/1910.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1910.patch",
"merged_at": null
} |  | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1910/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1907/comments | https://api.github.com/repos/huggingface/datasets/issues/1907/events | https://github.com/huggingface/datasets/issues/1907 | 811,520,569 | MDU6SXNzdWU4MTE1MjA1Njk= | 1,907 | DBPedia14 Dataset Checksum bug? | {
"login": "francisco-perez-sorrosal",
"id": 918006,
"node_id": "MDQ6VXNlcjkxODAwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francisco-perez-sorrosal",
"html_url": "https://github.com/francisco-perez-sorrosal",
"followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers",
"following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}",
"gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions",
"organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs",
"repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos",
"events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}",
"received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,687,148,000 | 1,614,036,125,000 | 1,614,036,124,000 | CONTRIBUTOR | null | null | null | Hi there!!!
I've been using the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) successfully with my codebase for the last couple of weeks, but in the last couple of days I have started getting this error:
```
Traceback (most recent call last):
File "./conditional_classification/basic_pipeline.py", line 178, in <module>
main()
File "./conditional_classification/basic_pipeline.py", line 128, in main
corpus.load_data(limit_train_examples_per_class=args.data_args.train_examples_per_class,
File "/home/fp/dev/conditional_classification/conditional_classification/datasets_base.py", line 83, in load_data
datasets = load_dataset(self.name, split=dataset_split)
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/load.py", line 609, in load_dataset
builder_instance.download_and_prepare(
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 526, in download_and_prepare
self._download_and_prepare(
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 586, in _download_and_prepare
verify_checksums(
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k']
```
I've seen this has happened before in other datasets as reported in #537.
I've tried clearing my cache and calling `load_dataset` again, but it still doesn't work. The same codebase successfully downloads and uses other datasets (e.g. AGNews) without any problem, so I guess something has happened specifically to the DBPedia dataset in the last few days.
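For reference, a couple of possible workarounds while this is investigated (a sketch; both parameters only redo or bypass the verification rather than fixing the root cause):
```python
from datasets import load_dataset

# force a fresh download in case the cached archive is stale or corrupted
dataset = load_dataset("dbpedia_14", download_mode="force_redownload")

# or skip the checksum/size verification entirely (use with care)
dataset = load_dataset("dbpedia_14", ignore_verifications=True)
```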
Can you please check if there's a problem with the checksums?
Or is this related to something else? I've seen that the cache path for the dataset is `/home/fp/.cache/huggingface/datasets/d_bpedia14/dbpedia_14/2.0.0/a70413e39e7a716afd0e90c9e53cb053691f56f9ef5fe317bd07f2c368e8e897...` and includes `d_bpedia14` instead of `dbpedia_14`. Was this maybe a bug introduced recently?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1907/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1906/comments | https://api.github.com/repos/huggingface/datasets/issues/1906/events | https://github.com/huggingface/datasets/issues/1906 | 811,405,274 | MDU6SXNzdWU4MTE0MDUyNzQ= | 1,906 | Feature Request: Support for Pandas `Categorical` | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,677,565,000 | 1,614,091,130,000 | null | CONTRIBUTOR | null | null | null | ```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws NotImplementedError
# TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table
```
I'm curious if https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L796 could be built out in a way similar to `Sequence`?
e.g. a `Map` class (or whatever name the maintainers might prefer; a rough sketch follows the list below) that can accept:
```
index_type = generate_from_arrow_type(pa_type.index_type)
value_type = generate_from_arrow_type(pa_type.value_type)
```
and then additional code points to modify:
- FeatureType: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L694
- A branch to handle Map in get_nested_type: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L719
- I don't quite understand what `encode_nested_example` does but perhaps a branch there? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L755
- Similarly, I don't quite understand why `Sequence` is used this way in `generate_from_dict`, but perhaps a branch here? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L775
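A rough sketch of what such a `Map` feature type could look like (the names and fields are hypothetical, loosely mirroring the existing `Sequence` dataclass):
```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class Map:
    index_type: Any  # e.g. Value("int32") for a categorical's codes
    value_type: Any  # e.g. Value("string") for a categorical's labels
    id: Optional[str] = None
    _type: str = field(default="Map", init=False, repr=False)
```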
I couldn't find other usages of `Sequence` outside of defining specific datasets, so I'm not sure if that's a comprehensive set of touchpoints. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1906/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1905 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1905/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1905/comments | https://api.github.com/repos/huggingface/datasets/issues/1905/events | https://github.com/huggingface/datasets/pull/1905 | 811,384,174 | MDExOlB1bGxSZXF1ZXN0NTc1OTIxMDk1 | 1,905 | Standardizing datasets.dtypes | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,675,731,000 | 1,613,858,490,000 | 1,613,858,490,000 | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1905",
"html_url": "https://github.com/huggingface/datasets/pull/1905",
"diff_url": "https://github.com/huggingface/datasets/pull/1905.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1905.patch",
"merged_at": null
} | This PR was further branched off of jdy-str-to-pyarrow-parsing, so it depends on https://github.com/huggingface/datasets/pull/1900 going first for the diff to be up-to-date (I'm not sure if there's a way for me to use jdy-str-to-pyarrow-parsing as a base branch while having it appear in the pull requests here).
This moves away from `str(pyarrow.DataType)` as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes.
I believe in practice this should be backward compatible, since anyone previously using Value() would only have been able to use dtypes that had an identically named pyarrow factory function, which are all explicitly supported here. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1905/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1904 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1904/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1904/comments | https://api.github.com/repos/huggingface/datasets/issues/1904/events | https://github.com/huggingface/datasets/pull/1904 | 811,260,904 | MDExOlB1bGxSZXF1ZXN0NTc1ODE4MjA0 | 1,904 | Fix to_pandas for boolean ArrayXD | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,665,846,000 | 1,613,668,203,000 | 1,613,668,201,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1904",
"html_url": "https://github.com/huggingface/datasets/pull/1904",
"diff_url": "https://github.com/huggingface/datasets/pull/1904.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1904.patch",
"merged_at": 1613668200000
} | As noticed in #1887, the conversion of a dataset with boolean ArrayXD feature types fails because the underlying ListArray conversion to numpy requires `zero_copy_only=False`.
Zero-copy is available for all primitive types except booleans;
see https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy
and https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22
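For illustration, a minimal sketch of the pyarrow behavior in question (not code from this PR):
```
import pyarrow as pa

pa.array([1, 2, 3]).to_numpy()  # fine: zero-copy works for non-boolean primitives

bools = pa.array([True, False, True])
# bools.to_numpy()  # raises ArrowInvalid: booleans are bit-packed, so a copy is needed
bools.to_numpy(zero_copy_only=False)  # array([ True, False,  True])
```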
cc @SBrandeis | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1904/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1903/comments | https://api.github.com/repos/huggingface/datasets/issues/1903/events | https://github.com/huggingface/datasets/pull/1903 | 811,145,531 | MDExOlB1bGxSZXF1ZXN0NTc1NzIwOTk2 | 1,903 | Initial commit for the addition of TIMIT dataset | {
"login": "vrindaprabhu",
"id": 16264631,
"node_id": "MDQ6VXNlcjE2MjY0NjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/16264631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vrindaprabhu",
"html_url": "https://github.com/vrindaprabhu",
"followers_url": "https://api.github.com/users/vrindaprabhu/followers",
"following_url": "https://api.github.com/users/vrindaprabhu/following{/other_user}",
"gists_url": "https://api.github.com/users/vrindaprabhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vrindaprabhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vrindaprabhu/subscriptions",
"organizations_url": "https://api.github.com/users/vrindaprabhu/orgs",
"repos_url": "https://api.github.com/users/vrindaprabhu/repos",
"events_url": "https://api.github.com/users/vrindaprabhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vrindaprabhu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,658,192,000 | 1,614,591,552,000 | 1,614,591,552,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1903",
"html_url": "https://github.com/huggingface/datasets/pull/1903",
"diff_url": "https://github.com/huggingface/datasets/pull/1903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1903.patch",
"merged_at": 1614591552000
} | The points below need to be addressed:
- Creation of dummy dataset is failing
- Need to check on the data representation
- License is not Creative Commons. Copyright: Portions © 1993 Trustees of the University of Pennsylvania
Also, the links (_except the download_) point to the AMI corpus! ;-)
@patrickvonplaten Requesting your comments, will be happy to address them! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1903/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1903/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1902/comments | https://api.github.com/repos/huggingface/datasets/issues/1902/events | https://github.com/huggingface/datasets/pull/1902 | 810,931,171 | MDExOlB1bGxSZXF1ZXN0NTc1NTQwMDM1 | 1,902 | Fix setimes_2 wmt urls | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,641,346,000 | 1,613,642,141,000 | 1,613,642,141,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1902",
"html_url": "https://github.com/huggingface/datasets/pull/1902",
"diff_url": "https://github.com/huggingface/datasets/pull/1902.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1902.patch",
"merged_at": 1613642141000
} | Continuation of #1901
Some other URLs were missing https. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1902/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1901/comments | https://api.github.com/repos/huggingface/datasets/issues/1901/events | https://github.com/huggingface/datasets/pull/1901 | 810,845,605 | MDExOlB1bGxSZXF1ZXN0NTc1NDY5MDUy | 1,901 | Fix OPUS dataset download errors | {
"login": "YangWang92",
"id": 3883941,
"node_id": "MDQ6VXNlcjM4ODM5NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3883941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YangWang92",
"html_url": "https://github.com/YangWang92",
"followers_url": "https://api.github.com/users/YangWang92/followers",
"following_url": "https://api.github.com/users/YangWang92/following{/other_user}",
"gists_url": "https://api.github.com/users/YangWang92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YangWang92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YangWang92/subscriptions",
"organizations_url": "https://api.github.com/users/YangWang92/orgs",
"repos_url": "https://api.github.com/users/YangWang92/repos",
"events_url": "https://api.github.com/users/YangWang92/events{/privacy}",
"received_events_url": "https://api.github.com/users/YangWang92/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,633,981,000 | 1,613,660,840,000 | 1,613,641,161,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1901",
"html_url": "https://github.com/huggingface/datasets/pull/1901",
"diff_url": "https://github.com/huggingface/datasets/pull/1901.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1901.patch",
"merged_at": 1613641161000
} | Replace http with https.
https://github.com/huggingface/datasets/issues/854
https://discuss.huggingface.co/t/cannot-download-wmt16/2081
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1901/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1900 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1900/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1900/comments | https://api.github.com/repos/huggingface/datasets/issues/1900/events | https://github.com/huggingface/datasets/pull/1900 | 810,512,488 | MDExOlB1bGxSZXF1ZXN0NTc1MTkxNTc3 | 1,900 | Issue #1895: Bugfix for string_to_arrow timestamp[ns] support | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,593,564,000 | 1,613,759,231,000 | 1,613,759,231,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1900",
"html_url": "https://github.com/huggingface/datasets/pull/1900",
"diff_url": "https://github.com/huggingface/datasets/pull/1900.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1900.patch",
"merged_at": 1613759231000
} | Should resolve https://github.com/huggingface/datasets/issues/1895
The main part of this PR adds additional parsing in `string_to_arrow` to convert the timestamp dtypes that result from `str(pa_type)` back into the pa.DataType TimestampType.
While adding unit tests, I noticed that support for the double/float types also doesn't invert correctly, so I added them as well, which I believe would hypothetically make this section of `Value` redundant:
```
def __post_init__(self):
if self.dtype == "double": # fix inferred type
self.dtype = "float64"
if self.dtype == "float": # fix inferred type
self.dtype = "float32"
```
However, since I think Value.dtype is part of the public interface, removing that would result in a backward-incompatible change, so I didn't muck with that.
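For reference, a hypothetical sketch of the timestamp parsing this PR adds (the helper name and regex are illustrative; the actual implementation in `string_to_arrow` may differ):
```
import re
import pyarrow as pa

# hypothetical helper illustrating the kind of parsing this PR adds;
# the actual implementation may differ
def parse_timestamp_dtype(dtype: str) -> pa.DataType:
    match = re.match(r"^timestamp\[(s|ms|us|ns)(?:, tz=(.+))?\]$", dtype)
    if match is None:
        raise ValueError(f"{dtype} is not a timestamp dtype")
    unit, tz = match.groups()
    return pa.timestamp(unit, tz=tz)

assert parse_timestamp_dtype("timestamp[ns]") == pa.timestamp("ns")
assert parse_timestamp_dtype("timestamp[us, tz=UTC]") == pa.timestamp("us", tz="UTC")
```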
The rest of the PR consists of docstrings I added while developing locally to keep track of which functions were supposed to be inverses of each other. I thought I'd include them in case you want to keep them around, but I'm happy to delete any of them at your request! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1900/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1899/comments | https://api.github.com/repos/huggingface/datasets/issues/1899/events | https://github.com/huggingface/datasets/pull/1899 | 810,308,332 | MDExOlB1bGxSZXF1ZXN0NTc1MDIxMjc4 | 1,899 | Fix: ALT - fix duplicated examples in alt-parallel | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,577,236,000 | 1,613,582,449,000 | 1,613,582,449,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1899",
"html_url": "https://github.com/huggingface/datasets/pull/1899",
"diff_url": "https://github.com/huggingface/datasets/pull/1899.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1899.patch",
"merged_at": 1613582449000
} | As noticed in #1898 by @10-zin, the examples of the `alt-parallel` configurations all have the same values for the `translation` field.
This was due to a bad copy of a Python dict.
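For illustration, a minimal sketch of this kind of aliasing bug (names are illustrative, not the actual ALT script code):
```
data = {}
examples = []
for i in range(3):
    data["translation"] = f"sentence {i}"
    examples.append(data)  # bug: every entry references the same dict
print(examples)  # all three entries show "sentence 2"

examples = [{"translation": f"sentence {i}"} for i in range(3)]  # fix: fresh dict per example
```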
This PR fixes that. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1899/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1898/comments | https://api.github.com/repos/huggingface/datasets/issues/1898/events | https://github.com/huggingface/datasets/issues/1898 | 810,157,251 | MDU6SXNzdWU4MTAxNTcyNTE= | 1,898 | ALT dataset has repeating instances in all splits | {
"login": "10-zin",
"id": 33179372,
"node_id": "MDQ6VXNlcjMzMTc5Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/33179372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/10-zin",
"html_url": "https://github.com/10-zin",
"followers_url": "https://api.github.com/users/10-zin/followers",
"following_url": "https://api.github.com/users/10-zin/following{/other_user}",
"gists_url": "https://api.github.com/users/10-zin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/10-zin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/10-zin/subscriptions",
"organizations_url": "https://api.github.com/users/10-zin/orgs",
"repos_url": "https://api.github.com/users/10-zin/repos",
"events_url": "https://api.github.com/users/10-zin/events{/privacy}",
"received_events_url": "https://api.github.com/users/10-zin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,566,302,000 | 1,613,715,526,000 | 1,613,715,526,000 | NONE | null | null | null | The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/
Seemed like a great dataset for some experiments I wanted to carry out, especially since it's medium-sized and has all splits.
Would be great if this could be fixed :)
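For anyone triaging, a minimal repro sketch (the `alt-parallel` config name is assumed from the dataset viewer):
```
from datasets import load_dataset

ds = load_dataset("alt", "alt-parallel", split="train")
print(ds[0]["translation"] == ds[1]["translation"])  # True for every pair, per the snapshot below
```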
Added a snapshot of the contents from the `explore-dataset` feature, for quick reference.

| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1898/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1897/comments | https://api.github.com/repos/huggingface/datasets/issues/1897/events | https://github.com/huggingface/datasets/pull/1897 | 810,113,263 | MDExOlB1bGxSZXF1ZXN0NTc0ODU3MTIy | 1,897 | Fix PandasArrayExtensionArray conversion to native type | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,562,504,000 | 1,613,567,716,000 | 1,613,567,715,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1897",
"html_url": "https://github.com/huggingface/datasets/pull/1897",
"diff_url": "https://github.com/huggingface/datasets/pull/1897.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1897.patch",
"merged_at": 1613567715000
} | To make the conversion to CSV work in #1887, we need the PandasArrayExtensionArray used for multidimensional numpy arrays to be converted to pandas native types.
However, previously pandas.core.internals.ExtensionBlock.to_native_types would fail with a PandasExtensionArray because:
1. the PandasExtensionArray.isna method was wrong
2. the conversion of a PandasExtensionArray to a numpy array with dtype=object was returning a multidimensional array while pandas expects a 1D array in this case (more info [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.api.extensions.ExtensionArray.html#pandas.api.extensions.ExtensionArray))
I fixed these two issues and now the conversion to native types works, and so does the export to CSV.
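For illustration, a minimal sketch of returning a 1D object array the way pandas expects (the helper and shapes are illustrative, not the actual PandasArrayExtensionArray code):
```
import numpy as np

# sketch of returning a 1D object array, one element per example, as pandas
# expects; the helper and shapes are illustrative
def to_numpy_object(rows):
    out = np.empty(len(rows), dtype=object)
    for i, row in enumerate(rows):
        out[i] = row  # each element holds a full multidimensional array
    return out

arr = to_numpy_object([np.zeros((2, 2)), np.ones((2, 2))])
assert arr.shape == (2,) and arr.dtype == object
```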
cc @SBrandeis | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1897/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1897/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1895/comments | https://api.github.com/repos/huggingface/datasets/issues/1895/events | https://github.com/huggingface/datasets/issues/1895 | 809,630,271 | MDU6SXNzdWU4MDk2MzAyNzE= | 1,895 | Bug Report: timestamp[ns] not recognized | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,507,884,000 | 1,613,759,231,000 | 1,613,759,231,000 | CONTRIBUTOR | null | null | null | Repro:
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type.
```
The factory function seems to be just "timestamp": https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp
It seems like https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L36-L43 could have a little bit of additional structure for handling these cases? I'd be happy to take a shot at opening a PR if I could receive some guidance on whether parsing something like `timestamp[ns]` and resolving it to timestamp('ns') is the goal of this method.
Alternatively, if I'm using this incorrectly (e.g. is the expectation that we always provide a schema when timestamps are involved?), that would be very helpful to know as well!
```
$ pip list # only the relevant libraries/versions
datasets 1.2.1
pandas 1.0.3
pyarrow 3.0.0
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1895/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1894/comments | https://api.github.com/repos/huggingface/datasets/issues/1894/events | https://github.com/huggingface/datasets/issues/1894 | 809,609,654 | MDU6SXNzdWU4MDk2MDk2NTQ= | 1,894 | benchmarking against MMapIndexedDataset | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,505,898,000 | 1,613,587,948,000 | null | CONTRIBUTOR | null | null | null | I am trying to benchmark my `datasets`-based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implementation uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB of tokens).
Questions:
1) Is this (basically identical) performance expected?
2) Is there a scenario where this library will outperform `MMapIndexedDataset`? (maybe more examples/larger examples?)
3) Should I be using different benchmarking tools than `psrecord` (example invocation sketched below)? How do you guys do benchmarks?
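For context, a typical psrecord invocation for this kind of measurement (the script name is illustrative):
```
# profile CPU/memory of a dataset-iteration script
psrecord "python iterate_wikitext103.py" --interval 1 --log activity.txt --plot usage.png
```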
Thanks in advance! Sam | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1894/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1893/comments | https://api.github.com/repos/huggingface/datasets/issues/1893/events | https://github.com/huggingface/datasets/issues/1893 | 809,556,503 | MDU6SXNzdWU4MDk1NTY1MDM= | 1,893 | wmt19 is broken | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,500,798,000 | 1,614,793,322,000 | 1,614,793,322,000 | MEMBER | null | null | null | 1. Check which lang pairs we have: `--dataset_name wmt19`:
Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']
2. OK, let's pick `ru-en`:
`--dataset_name wmt19 --dataset_config "ru-en"`
no cookies:
```
Traceback (most recent call last):
File "./run_seq2seq.py", line 661, in <module>
main()
File "./run_seq2seq.py", line 317, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 572, in download_and_prepare
self._download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 628, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt19/436092de5f3faaf0fc28bc84875475b384e90a5470fa6afaee11039ceddc5052/wmt_utils.py", line 755, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 276, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 191, in download
downloaded_path_or_paths = map_nested(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 233, in map_nested
mapped = [
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 234, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 172, in _single_map_nested
return function(data_struct)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 211, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1893/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1892/comments | https://api.github.com/repos/huggingface/datasets/issues/1892/events | https://github.com/huggingface/datasets/issues/1892 | 809,554,174 | MDU6SXNzdWU4MDk1NTQxNzQ= | 1,892 | request to mirror wmt datasets, as they are really slow to download | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,500,571,000 | 1,635,231,342,000 | 1,616,673,203,000 | MEMBER | null | null | null | Would it be possible to mirror the WMT data files under HF? Some of them take hours to download, and not because of local connection speed. They are all quite small datasets, just extremely slow to download.
Thank you! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1892/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1891/comments | https://api.github.com/repos/huggingface/datasets/issues/1891/events | https://github.com/huggingface/datasets/issues/1891 | 809,550,001 | MDU6SXNzdWU4MDk1NTAwMDE= | 1,891 | suggestion to improve a missing dataset error | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,500,153,000 | 1,613,500,214,000 | null | MEMBER | null | null | null | I was using `--dataset_name wmt19` and all was good. Then I thought perhaps wmt20 was out, so I tried `--dataset_name wmt20` and got 3 different errors (1 repeated twice), none telling me the real issue: that `wmt20` isn't in `datasets`:
```
True, predict_with_generate=True)
Traceback (most recent call last):
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 323, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 335, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./run_seq2seq.py", line 661, in <module>
main()
File "./run_seq2seq.py", line 317, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 706, in load_dataset
module_path, hash, resolved_file_path = prepare_module(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 343, in prepare_module
raise FileNotFoundError(
FileNotFoundError: Couldn't find file locally at wmt20/wmt20.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py.
The file is also not present on the master branch on github.
```
Suggestion: if the dataset is not found at a local path, first check that https://github.com/huggingface/datasets/tree/master/datasets/wmt20 actually exists and assert "dataset `wmt20` doesn't exist in datasets", rather than trying to find a loading script, since the whole directory is not there.
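A hypothetical sketch of such a pre-check (the URL and helper name are illustrative, not actual `datasets` code):
```
import requests

# hypothetical pre-check; the URL and helper name are illustrative
def dataset_exists_on_master(name: str) -> bool:
    url = f"https://github.com/huggingface/datasets/tree/master/datasets/{name}"
    return requests.head(url).status_code == 200

if not dataset_exists_on_master("wmt20"):
    raise FileNotFoundError("dataset `wmt20` doesn't exist in datasets")
```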
The error occurred when running:
```
cd examples/seq2seq
export BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python ./run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_val_samples 500 --dataset_name wmt20 --dataset_config "ro-en" --source_prefix "translate English to Romanian: "
```
Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1891/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1890 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1890/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1890/comments | https://api.github.com/repos/huggingface/datasets/issues/1890/events | https://github.com/huggingface/datasets/pull/1890 | 809,395,586 | MDExOlB1bGxSZXF1ZXN0NTc0MjY0OTMx | 1,890 | Reformat dataset cards section titles | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,488,307,000 | 1,613,488,354,000 | 1,613,488,353,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1890",
"html_url": "https://github.com/huggingface/datasets/pull/1890",
"diff_url": "https://github.com/huggingface/datasets/pull/1890.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1890.patch",
"merged_at": 1613488353000
} | Titles are formatted like [Foo](#foo) instead of just Foo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1890/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1889 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1889/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1889/comments | https://api.github.com/repos/huggingface/datasets/issues/1889/events | https://github.com/huggingface/datasets/pull/1889 | 809,276,015 | MDExOlB1bGxSZXF1ZXN0NTc0MTY1NDAz | 1,889 | Implement to_dict and to_pandas for Dataset | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,479,099,000 | 1,613,673,757,000 | 1,613,673,754,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1889",
"html_url": "https://github.com/huggingface/datasets/pull/1889",
"diff_url": "https://github.com/huggingface/datasets/pull/1889.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1889.patch",
"merged_at": 1613673754000
} | With options to return a generator or the full dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1889/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1888 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1888/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1888/comments | https://api.github.com/repos/huggingface/datasets/issues/1888/events | https://github.com/huggingface/datasets/pull/1888 | 809,241,123 | MDExOlB1bGxSZXF1ZXN0NTc0MTM2MDU4 | 1,888 | Docs for adding new column on formatted dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,475,900,000 | 1,617,112,863,000 | 1,613,476,737,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1888",
"html_url": "https://github.com/huggingface/datasets/pull/1888",
"diff_url": "https://github.com/huggingface/datasets/pull/1888.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1888.patch",
"merged_at": 1613476737000
} | As mentioned in #1872, we should add to the documentation how the format gets updated when new columns are added.
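For context, a minimal sketch of the behavior being documented (assumed behavior per the linked issue; see the docs change for the exact semantics):
```
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2]})
ds.set_format(type="numpy", columns=["a"])

ds = ds.map(lambda x: {"b": x["a"] + 1})
print(ds[0])  # {"a": ...} only: "b" is not among the formatted columns yet
ds.set_format(type="numpy", columns=["a", "b"])
print(ds[0])  # now both columns are returned
```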
Close #1872 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1888/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1887/comments | https://api.github.com/repos/huggingface/datasets/issues/1887/events | https://github.com/huggingface/datasets/pull/1887 | 809,229,809 | MDExOlB1bGxSZXF1ZXN0NTc0MTI2NTMy | 1,887 | Implement to_csv for Dataset | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,474,849,000 | 1,613,727,719,000 | 1,613,727,719,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1887",
"html_url": "https://github.com/huggingface/datasets/pull/1887",
"diff_url": "https://github.com/huggingface/datasets/pull/1887.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1887.patch",
"merged_at": 1613727719000
} | cc @thomwolf
`to_csv` supports passing either a file path or a *binary* file object
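For illustration, a rough sketch of batched CSV writing to a binary file object (the signature, batch size, and header handling are assumptions, not the merged implementation):
```
import pandas as pd

# a rough sketch of the batched approach; the signature, batch size,
# and header handling are assumptions
def dataset_to_csv(dataset, file_obj, batch_size=10_000, **to_csv_kwargs):
    bytes_written = 0
    for offset in range(0, len(dataset), batch_size):
        batch = dataset[offset : offset + batch_size]  # dict of column -> list
        csv_str = pd.DataFrame(batch).to_csv(header=(offset == 0), index=False, **to_csv_kwargs)
        bytes_written += file_obj.write(csv_str.encode("utf-8"))
    return bytes_written
```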
The writing is batched to avoid loading the whole table in memory | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1887/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1887/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1886/comments | https://api.github.com/repos/huggingface/datasets/issues/1886/events | https://github.com/huggingface/datasets/pull/1886 | 809,221,885 | MDExOlB1bGxSZXF1ZXN0NTc0MTE5ODcz | 1,886 | Common voice | {
"login": "BirgerMoell",
"id": 1704131,
"node_id": "MDQ6VXNlcjE3MDQxMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BirgerMoell",
"html_url": "https://github.com/BirgerMoell",
"followers_url": "https://api.github.com/users/BirgerMoell/followers",
"following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}",
"gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions",
"organizations_url": "https://api.github.com/users/BirgerMoell/orgs",
"repos_url": "https://api.github.com/users/BirgerMoell/repos",
"events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}",
"received_events_url": "https://api.github.com/users/BirgerMoell/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,474,170,000 | 1,615,315,891,000 | 1,615,315,891,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1886",
"html_url": "https://github.com/huggingface/datasets/pull/1886",
"diff_url": "https://github.com/huggingface/datasets/pull/1886.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1886.patch",
"merged_at": 1615315891000
} | Started filling out information about the dataset and a dataset card.
To do:
- Create tagging file
- Update the common_voice.py file with more information | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1886/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1885 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1885/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1885/comments | https://api.github.com/repos/huggingface/datasets/issues/1885/events | https://github.com/huggingface/datasets/pull/1885 | 808,881,501 | MDExOlB1bGxSZXF1ZXN0NTczODQyNzcz | 1,885 | add missing info on how to add large files | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,432,799,000 | 1,613,492,539,000 | 1,613,475,852,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1885",
"html_url": "https://github.com/huggingface/datasets/pull/1885",
"diff_url": "https://github.com/huggingface/datasets/pull/1885.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1885.patch",
"merged_at": 1613475852000
} | Thanks to @lhoestq's instructions I was able to add data files to a custom dataset repo. This PR attempts to tell others how to do the same if they need to.
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1885/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1884/comments | https://api.github.com/repos/huggingface/datasets/issues/1884/events | https://github.com/huggingface/datasets/pull/1884 | 808,755,894 | MDExOlB1bGxSZXF1ZXN0NTczNzQwNzI5 | 1,884 | dtype fix when using numpy arrays | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,415,325,000 | 1,627,642,878,000 | 1,627,642,878,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1884",
"html_url": "https://github.com/huggingface/datasets/pull/1884",
"diff_url": "https://github.com/huggingface/datasets/pull/1884.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1884.patch",
"merged_at": null
} | As discussed in #625, this fix lets the user preserve the dtype of a numpy array when it is converted to a pyarrow array; previously the dtype was lost in the numpy array -> list -> pyarrow array conversion. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1884/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1883/comments | https://api.github.com/repos/huggingface/datasets/issues/1883/events | https://github.com/huggingface/datasets/pull/1883 | 808,750,623 | MDExOlB1bGxSZXF1ZXN0NTczNzM2NTIz | 1,883 | Add not-in-place implementations for several dataset transforms | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,414,666,000 | 1,614,178,489,000 | 1,614,178,406,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1883",
"html_url": "https://github.com/huggingface/datasets/pull/1883",
"diff_url": "https://github.com/huggingface/datasets/pull/1883.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1883.patch",
"merged_at": 1614178406000
} | Should we deprecate in-place versions of such methods? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1883/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1882/comments | https://api.github.com/repos/huggingface/datasets/issues/1882/events | https://github.com/huggingface/datasets/pull/1882 | 808,716,576 | MDExOlB1bGxSZXF1ZXN0NTczNzA4OTEw | 1,882 | Create Remote Manager | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,410,584,000 | 1,615,220,110,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1882",
"html_url": "https://github.com/huggingface/datasets/pull/1882",
"diff_url": "https://github.com/huggingface/datasets/pull/1882.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1882.patch",
"merged_at": null
} | Refactoring to separate the concern of remote (HTTP/FTP requests) management. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1882/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1881 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1881/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1881/comments | https://api.github.com/repos/huggingface/datasets/issues/1881/events | https://github.com/huggingface/datasets/pull/1881 | 808,578,200 | MDExOlB1bGxSZXF1ZXN0NTczNTk1Nzkw | 1,881 | `list_datasets()` returns a list of strings, not objects | {
"login": "pminervini",
"id": 227357,
"node_id": "MDQ6VXNlcjIyNzM1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pminervini",
"html_url": "https://github.com/pminervini",
"followers_url": "https://api.github.com/users/pminervini/followers",
"following_url": "https://api.github.com/users/pminervini/following{/other_user}",
"gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pminervini/subscriptions",
"organizations_url": "https://api.github.com/users/pminervini/orgs",
"repos_url": "https://api.github.com/users/pminervini/repos",
"events_url": "https://api.github.com/users/pminervini/events{/privacy}",
"received_events_url": "https://api.github.com/users/pminervini/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,398,815,000 | 1,613,401,789,000 | 1,613,401,788,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1881",
"html_url": "https://github.com/huggingface/datasets/pull/1881",
"diff_url": "https://github.com/huggingface/datasets/pull/1881.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1881.patch",
"merged_at": 1613401788000
} | Here and there in the docs there are still snippets like this:
```python
>>> datasets_list = list_datasets()
>>> print(', '.join(dataset.id for dataset in datasets_list))
```
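A version consistent with string return values would presumably be (sketch):
```python
>>> datasets_list = list_datasets()
>>> print(', '.join(datasets_list))
```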
My understanding is that `list_datasets()` returns a list of strings rather than a list of objects, so the documented snippet fails with an `AttributeError` (`'str' object has no attribute 'id'`). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1881/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1880/comments | https://api.github.com/repos/huggingface/datasets/issues/1880/events | https://github.com/huggingface/datasets/pull/1880 | 808,563,439 | MDExOlB1bGxSZXF1ZXN0NTczNTgzNjg0 | 1,880 | Update multi_woz_v22 checksums | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,397,618,000 | 1,613,398,699,000 | 1,613,398,698,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1880",
"html_url": "https://github.com/huggingface/datasets/pull/1880",
"diff_url": "https://github.com/huggingface/datasets/pull/1880.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1880.patch",
"merged_at": 1613398698000
} | As noticed in #1876, the checksums of this dataset are outdated.
I updated them in this PR. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1880/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1879/comments | https://api.github.com/repos/huggingface/datasets/issues/1879/events | https://github.com/huggingface/datasets/pull/1879 | 808,541,442 | MDExOlB1bGxSZXF1ZXN0NTczNTY1NDAx | 1,879 | Replace flatten_nested | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,395,780,000 | 1,613,759,714,000 | 1,613,759,714,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1879",
"html_url": "https://github.com/huggingface/datasets/pull/1879",
"diff_url": "https://github.com/huggingface/datasets/pull/1879.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1879.patch",
"merged_at": 1613759714000
} | Replace `flatten_nested` with `NestedDataStructure.flatten`.
This is a first step towards having all NestedDataStructure logic as a separate concern, independent of the caller/user of the data structure.
Eventually, all checks (whether the underlying data is a list, dict, etc.) will live only inside this class.
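For intuition, a recursive flatten along these lines (an illustrative sketch, not the actual implementation in this PR) handles arbitrary nesting:
```python
def flatten(data):
    """Recursively flatten nested dicts/lists/tuples into a flat list of leaf values."""
    if isinstance(data, dict):
        data = list(data.values())
    if isinstance(data, (list, tuple)):
        flat = []
        for item in data:
            flat.extend(flatten(item))
        return flat
    return [data]

assert flatten({"a": [1, [2, 3]], "b": {"c": 4}}) == [1, 2, 3, 4]
```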
I have also generalized the flattening, and now it handles multiple levels of nesting. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1879/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1878/comments | https://api.github.com/repos/huggingface/datasets/issues/1878/events | https://github.com/huggingface/datasets/pull/1878 | 808,526,883 | MDExOlB1bGxSZXF1ZXN0NTczNTUyODk3 | 1,878 | Add LJ Speech dataset | {
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"repos_url": "https://api.github.com/users/anton-l/repos",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,394,642,000 | 1,613,417,981,000 | 1,613,398,689,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1878",
"html_url": "https://github.com/huggingface/datasets/pull/1878",
"diff_url": "https://github.com/huggingface/datasets/pull/1878.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1878.patch",
"merged_at": 1613398689000
} | This PR adds the LJ Speech dataset (https://keithito.com/LJ-Speech-Dataset/)
As requested in #1841.
The ASR format is based on #1767.
There are a couple of quirks that should be addressed:
- I tagged this dataset as `other-other-automatic-speech-recognition` and `other-other-text-to-speech` (as classified by paperswithcode). Since the number of speech datasets is about to grow, maybe these categories should be added to the main list?
- Similarly to #1767 this dataset uses only a single dummy sample to reduce the zip size (`wav`s are quite heavy). Is there a plan to allow LFS or S3 usage for dummy data in the repo?
- The dataset is distributed under the Public Domain license, which is not used anywhere else in the repo, AFAIK. Do you think Public Domain is worth adding to the tagger app as well?
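Once merged, loading should presumably work along these lines (assuming the loader is registered as `lj_speech` and exposes the `file`/`text` columns of the #1767 ASR format):
```python
from datasets import load_dataset

ljspeech = load_dataset("lj_speech", split="train")
print(ljspeech[0]["file"])  # path to a wav file
print(ljspeech[0]["text"])  # its transcription
```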
Pinging @patrickvonplaten to review | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1878/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1877/comments | https://api.github.com/repos/huggingface/datasets/issues/1877/events | https://github.com/huggingface/datasets/issues/1877 | 808,462,272 | MDU6SXNzdWU4MDg0NjIyNzI= | 1,877 | Allow concatenation of both in-memory and on-disk datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,389,186,000 | 1,616,777,518,000 | 1,616,777,518,000 | MEMBER | null | null | null | This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
Maybe we should have a design that allows a Dataset to have a Table that can be rebuilt from heterogeneous sources like in-memory tables or on-disk tables? This could also be further extended in the future.
One idea would be to define a list of sources and each source implements a way to reload its corresponding pyarrow Table.
Then the dataset would be the concatenation of all these tables.
Depending on the source type, the serialization using pickle would be different: in-memory data would be copied, while on-disk data would simply be replaced by the path to the data.
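To make the idea concrete, a rough sketch could look like this (all class names are hypothetical, not an actual API):
```python
import pyarrow as pa


class InMemorySource:
    """Pickling this source copies the actual data."""

    def __init__(self, table: pa.Table):
        self.table = table

    def __reduce__(self):
        return (InMemorySource, (self.table,))


class OnDiskSource:
    """Pickling this source keeps only the file path; the table is reloaded on unpickling."""

    def __init__(self, path: str):
        self.path = path
        self.table = pa.ipc.open_stream(pa.memory_map(path)).read_all()

    def __reduce__(self):
        return (OnDiskSource, (self.path,))


class ConcatenatedTable:
    """The dataset's table is the concatenation of all its sources' tables."""

    def __init__(self, sources):
        self.sources = sources

    @property
    def table(self) -> pa.Table:
        return pa.concat_tables([source.table for source in self.sources])
```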
If you have some ideas you would like to share about the design/API, feel free to do so :)
cc @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1877/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1877/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1876/comments | https://api.github.com/repos/huggingface/datasets/issues/1876/events | https://github.com/huggingface/datasets/issues/1876 | 808,025,859 | MDU6SXNzdWU4MDgwMjU4NTk= | 1,876 | load_dataset("multi_woz_v22") NonMatchingChecksumError | {
"login": "Vincent950129",
"id": 5945326,
"node_id": "MDQ6VXNlcjU5NDUzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5945326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vincent950129",
"html_url": "https://github.com/Vincent950129",
"followers_url": "https://api.github.com/users/Vincent950129/followers",
"following_url": "https://api.github.com/users/Vincent950129/following{/other_user}",
"gists_url": "https://api.github.com/users/Vincent950129/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vincent950129/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vincent950129/subscriptions",
"organizations_url": "https://api.github.com/users/Vincent950129/orgs",
"repos_url": "https://api.github.com/users/Vincent950129/repos",
"events_url": "https://api.github.com/users/Vincent950129/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vincent950129/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,330,088,000 | 1,628,100,480,000 | 1,628,100,480,000 | NONE | null | null | null | Hi, it seems that loading the multi_woz_v22 dataset raises a `NonMatchingChecksumError`.
To reproduce:
`dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')`
This will give the following error:
```
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_003.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_004.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_005.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_006.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_007.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_008.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_009.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_010.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_012.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_013.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_014.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_015.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_016.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_017.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_002.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_002.json']
```
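As a temporary workaround until the checksums are updated, verification can be skipped — note this disables integrity checks entirely, so only use it when the upstream files are known to have changed:
```python
from datasets import load_dataset

# ignore_verifications skips the checksum comparison (use with care)
dataset = load_dataset('multi_woz_v22', 'v2.2_active_only', split='train', ignore_verifications=True)
```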
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1876/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1875/comments | https://api.github.com/repos/huggingface/datasets/issues/1875/events | https://github.com/huggingface/datasets/pull/1875 | 807,887,267 | MDExOlB1bGxSZXF1ZXN0NTczMDM2NzE0 | 1,875 | Adding sari metric | {
"login": "ddhruvkr",
"id": 6061911,
"node_id": "MDQ6VXNlcjYwNjE5MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6061911?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddhruvkr",
"html_url": "https://github.com/ddhruvkr",
"followers_url": "https://api.github.com/users/ddhruvkr/followers",
"following_url": "https://api.github.com/users/ddhruvkr/following{/other_user}",
"gists_url": "https://api.github.com/users/ddhruvkr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddhruvkr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddhruvkr/subscriptions",
"organizations_url": "https://api.github.com/users/ddhruvkr/orgs",
"repos_url": "https://api.github.com/users/ddhruvkr/repos",
"events_url": "https://api.github.com/users/ddhruvkr/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddhruvkr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,613,277,515,000 | 1,613,577,387,000 | 1,613,577,387,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1875",
"html_url": "https://github.com/huggingface/datasets/pull/1875",
"diff_url": "https://github.com/huggingface/datasets/pull/1875.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1875.patch",
"merged_at": 1613577386000
} | Adding the SARI metric, which is used in the evaluation of text simplification. This is required as part of the GEM benchmark. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1875/timeline | null | true |