url
stringlengths 58
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 72
75
| comments_url
stringlengths 67
70
| events_url
stringlengths 65
68
| html_url
stringlengths 46
51
| id
int64 599M
1.23B
| node_id
stringlengths 18
32
| number
int64 1
4.31k
| title
stringlengths 1
276
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
list | milestone
dict | comments
sequence | created_at
int64 1,587B
1,652B
| updated_at
int64 1,587B
1,652B
| closed_at
int64 1,587B
1,652B
⌀ | author_association
stringclasses 3
values | active_lock_reason
null | draft
bool 2
classes | pull_request
dict | body
stringlengths 0
228k
⌀ | reactions
dict | timeline_url
stringlengths 67
70
| performed_via_github_app
null | is_pull_request
bool 2
classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2388/comments | https://api.github.com/repos/huggingface/datasets/issues/2388/events | https://github.com/huggingface/datasets/issues/2388 | 897,767,470 | MDU6SXNzdWU4OTc3Njc0NzA= | 2,388 | Incorrect URLs for some datasets | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,581,755,000 | 1,622,828,385,000 | 1,622,828,385,000 | MEMBER | null | null | null | ## Describe the bug
It seems that the URLs for the following datasets are invalid:
- [ ] `bn_hate_speech` has been renamed: https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset/commit/c67ecfc4184911e12814f6b36901f9828df8a63a
- [ ] `covid_tweets_japanese` has been renamed: http://www.db.info.gifu-u.ac.jp/covid-19-twitter-dataset/
As a result we can no longer load these datasets using `load_dataset`. The simple fix is to rename the URL in the dataset script - will do this asap.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# pick one of the datasets from the list above
ds = load_dataset("bn_hate_speech")
```
## Expected results
Dataset loads without error.
## Actual results
```
Downloading: 3.36kB [00:00, 1.07MB/s]
Downloading: 2.03kB [00:00, 678kB/s]
Using custom data configuration default
Downloading and preparing dataset bn_hate_speech/default (download: 951.48 KiB, generated: 949.84 KiB, post-processed: Unknown size, total: 1.86 MiB) to /Users/lewtun/.cache/huggingface/datasets/bn_hate_speech/default/0.0.0/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/load.py", line 744, in load_dataset
builder_instance.download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/builder.py", line 574, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/builder.py", line 630, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/lewtun/.cache/huggingface/modules/datasets_modules/datasets/bn_hate_speech/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c/bn_hate_speech.py", line 76, in _split_generators
train_path = dl_manager.download_and_extract(_URL)
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 287, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 195, in download
downloaded_path_or_paths = map_nested(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 195, in map_nested
return function(data_struct)
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 218, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 281, in cached_path
output_path = get_from_cache(
File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/rezacsedu/Bengali-Hate-Speech-Dataset/main/Bengali_%20Hate_Speech_Dataset_Subset.csv
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.8
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2388/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2387/comments | https://api.github.com/repos/huggingface/datasets/issues/2387/events | https://github.com/huggingface/datasets/issues/2387 | 897,566,666 | MDU6SXNzdWU4OTc1NjY2NjY= | 2,387 | datasets 1.6 ignores cache | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,555,978,000 | 1,622,045,274,000 | 1,622,045,274,000 | MEMBER | null | null | null | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-c6aefe81ca4e5152.arrow'}], 'validation': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-97cf4c813e6469c6.arrow'}]}`
>
> while the same command with the latest version of datasets (actually starting at `1.6.0`) gives:
> > `{'train': [], 'validation': []}`
>
I also confirm that downgrading to `datasets==1.5.0` makes things fast again - i.e. cache is used.
to reproduce:
```
USE_TF=0 python examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 \
--dataset_name "stas/openwebtext-10k" \
--output_dir output_dir \
--overwrite_output_dir \
--do_train \
--do_eval \
--max_train_samples 1000 \
--max_eval_samples 200 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--num_train_epochs 1 \
--warmup_steps 8 \
--block_size 64 \
--fp16 \
--report_to none
```
the first time the startup is slow and some 5 tqdm bars. It shouldn't do it on consequent runs. but with `datasets>1.5.0` it rebuilds on every run.
@lhoestq
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2387/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2386/comments | https://api.github.com/repos/huggingface/datasets/issues/2386/events | https://github.com/huggingface/datasets/issues/2386 | 897,560,049 | MDU6SXNzdWU4OTc1NjAwNDk= | 2,386 | Accessing Arrow dataset cache_files | {
"login": "Mehrad0711",
"id": 28717374,
"node_id": "MDQ6VXNlcjI4NzE3Mzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehrad0711",
"html_url": "https://github.com/Mehrad0711",
"followers_url": "https://api.github.com/users/Mehrad0711/followers",
"following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions",
"organizations_url": "https://api.github.com/users/Mehrad0711/orgs",
"repos_url": "https://api.github.com/users/Mehrad0711/repos",
"events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehrad0711/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,555,063,000 | 1,621,624,683,000 | 1,621,624,683,000 | NONE | null | null | null | ## Describe the bug
In datasets 1.5.0 the following code snippet would have printed the cache_files:
```
train_data = load_dataset('conll2003', split='train', cache_dir='data')
print(train_data.cache_files[0]['filename'])
```
However, in the newest release (1.6.1), it prints an empty list.
I also tried loading the dataset with `keep_in_memory=True` argument but still `cache_files` is empty.
Was wondering if this is a bug or I need to pass additional arguments so I can access the cache_files.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2386/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2385/comments | https://api.github.com/repos/huggingface/datasets/issues/2385/events | https://github.com/huggingface/datasets/pull/2385 | 897,206,823 | MDExOlB1bGxSZXF1ZXN0NjQ5MjM1Mjcy | 2,385 | update citations | {
"login": "adeepH",
"id": 46108405,
"node_id": "MDQ6VXNlcjQ2MTA4NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/46108405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adeepH",
"html_url": "https://github.com/adeepH",
"followers_url": "https://api.github.com/users/adeepH/followers",
"following_url": "https://api.github.com/users/adeepH/following{/other_user}",
"gists_url": "https://api.github.com/users/adeepH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adeepH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adeepH/subscriptions",
"organizations_url": "https://api.github.com/users/adeepH/orgs",
"repos_url": "https://api.github.com/users/adeepH/repos",
"events_url": "https://api.github.com/users/adeepH/events{/privacy}",
"received_events_url": "https://api.github.com/users/adeepH/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,533,248,000 | 1,621,600,698,000 | 1,621,600,698,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2385",
"html_url": "https://github.com/huggingface/datasets/pull/2385",
"diff_url": "https://github.com/huggingface/datasets/pull/2385.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2385.patch",
"merged_at": 1621600698000
} | To update citations for [Offenseval_dravidiain](https://huggingface.co/datasets/offenseval_dravidian)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2385/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2384 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2384/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2384/comments | https://api.github.com/repos/huggingface/datasets/issues/2384/events | https://github.com/huggingface/datasets/pull/2384 | 896,866,461 | MDExOlB1bGxSZXF1ZXN0NjQ4OTI4NTQ0 | 2,384 | Add args description to DatasetInfo | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,518,790,000 | 1,621,675,576,000 | 1,621,675,574,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2384",
"html_url": "https://github.com/huggingface/datasets/pull/2384",
"diff_url": "https://github.com/huggingface/datasets/pull/2384.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2384.patch",
"merged_at": 1621675573000
} | Closes #2354
I am not sure what `post_processed` and `post_processing_size` correspond to, so have left them empty for now. I also took a guess at some of the other fields like `dataset_size` vs `size_in_bytes`, so might have misunderstood their meaning. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2384/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2383/comments | https://api.github.com/repos/huggingface/datasets/issues/2383/events | https://github.com/huggingface/datasets/pull/2383 | 895,779,723 | MDExOlB1bGxSZXF1ZXN0NjQ3OTU4MTQ0 | 2,383 | Improve example in rounding docs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,450,763,000 | 1,621,601,602,000 | 1,621,600,589,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2383",
"html_url": "https://github.com/huggingface/datasets/pull/2383",
"diff_url": "https://github.com/huggingface/datasets/pull/2383.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2383.patch",
"merged_at": 1621600589000
} | Improves the example in the rounding subsection of the Split API docs. With this change, it should more clear what's the difference between the `closest` and the `pct1_dropremainder` rounding. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2383/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2382/comments | https://api.github.com/repos/huggingface/datasets/issues/2382/events | https://github.com/huggingface/datasets/issues/2382 | 895,610,216 | MDU6SXNzdWU4OTU2MTAyMTY= | 2,382 | DuplicatedKeysError: FAILURE TO GENERATE DATASET ! load_dataset('head_qa', 'en') | {
"login": "helloworld123-lab",
"id": 75953751,
"node_id": "MDQ6VXNlcjc1OTUzNzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/75953751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/helloworld123-lab",
"html_url": "https://github.com/helloworld123-lab",
"followers_url": "https://api.github.com/users/helloworld123-lab/followers",
"following_url": "https://api.github.com/users/helloworld123-lab/following{/other_user}",
"gists_url": "https://api.github.com/users/helloworld123-lab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/helloworld123-lab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/helloworld123-lab/subscriptions",
"organizations_url": "https://api.github.com/users/helloworld123-lab/orgs",
"repos_url": "https://api.github.com/users/helloworld123-lab/repos",
"events_url": "https://api.github.com/users/helloworld123-lab/events{/privacy}",
"received_events_url": "https://api.github.com/users/helloworld123-lab/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,439,388,000 | 1,622,381,176,000 | 1,622,381,176,000 | NONE | null | null | null | Hello everyone,
I try to use head_qa dataset in [https://huggingface.co/datasets/viewer/?dataset=head_qa&config=en](url)
```
!pip install datasets
from datasets import load_dataset
dataset = load_dataset(
'head_qa', 'en')
```
When I write above load_dataset(.), it throws the following:
```
DuplicatedKeysError Traceback (most recent call last)
<ipython-input-6-ea87002d32f0> in <module>()
2 from datasets import load_dataset
3 dataset = load_dataset(
----> 4 'head_qa', 'en')
5 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
347 for hash, key in self.hkey_record:
348 if hash in tmp_record:
--> 349 raise DuplicatedKeysError(key)
350 else:
351 tmp_record.add(hash)
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 1
Keys should be unique and deterministic in nature
```
How can I fix the error? Thanks
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2382/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2381 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2381/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2381/comments | https://api.github.com/repos/huggingface/datasets/issues/2381/events | https://github.com/huggingface/datasets/pull/2381 | 895,588,844 | MDExOlB1bGxSZXF1ZXN0NjQ3NzkyNDcw | 2,381 | add dataset card title | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,438,203,000 | 1,621,536,700,000 | 1,621,536,700,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2381",
"html_url": "https://github.com/huggingface/datasets/pull/2381",
"diff_url": "https://github.com/huggingface/datasets/pull/2381.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2381.patch",
"merged_at": 1621536700000
} | few of them were missed by me earlier which I've added now | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2381/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2380 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2380/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2380/comments | https://api.github.com/repos/huggingface/datasets/issues/2380/events | https://github.com/huggingface/datasets/pull/2380 | 895,367,201 | MDExOlB1bGxSZXF1ZXN0NjQ3NTk3NTc3 | 2,380 | maintain YAML structure reading from README | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,426,327,000 | 1,621,429,718,000 | 1,621,429,718,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2380",
"html_url": "https://github.com/huggingface/datasets/pull/2380",
"diff_url": "https://github.com/huggingface/datasets/pull/2380.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2380.patch",
"merged_at": 1621429718000
} | How YAML used be loaded earlier in the string (structure of YAML was affected because of this and YAML for datasets with multiple configs was not being loaded correctly):
```
annotations_creators:
labeled_final:
- expert-generated
labeled_swap:
- expert-generated
unlabeled_final:
- machine-generated
language_creators:
- machine-generated
languages:
- en
licenses:
- other
multilinguality:
- monolingual
size_categories:
labeled_final:
- 10K<n<100K
labeled_swap:
- 10K<n<100K
unlabeled_final:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
- text-scoring
task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
- text-scoring-other-paraphrase-identification
```
How YAML is loaded in string now:
```
annotations_creators:
labeled_final:
- expert-generated
labeled_swap:
- expert-generated
unlabeled_final:
- machine-generated
language_creators:
- machine-generated
languages:
- en
licenses:
- other
multilinguality:
- monolingual
size_categories:
labeled_final:
- 10K<n<100K
labeled_swap:
- 10K<n<100K
unlabeled_final:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
- text-scoring
task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
- text-scoring-other-paraphrase-identification
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2380/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2379 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2379/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2379/comments | https://api.github.com/repos/huggingface/datasets/issues/2379/events | https://github.com/huggingface/datasets/pull/2379 | 895,252,597 | MDExOlB1bGxSZXF1ZXN0NjQ3NDk2ODUx | 2,379 | Disallow duplicate keys in yaml tags | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,419,007,000 | 1,621,421,132,000 | 1,621,421,131,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2379",
"html_url": "https://github.com/huggingface/datasets/pull/2379",
"diff_url": "https://github.com/huggingface/datasets/pull/2379.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2379.patch",
"merged_at": 1621421131000
} | Make sure that there's no duplidate keys in yaml tags.
I added the check in the yaml tree constructor's method, so that the verification is done at every level in the yaml structure.
cc @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2379/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2378 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2378/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2378/comments | https://api.github.com/repos/huggingface/datasets/issues/2378/events | https://github.com/huggingface/datasets/issues/2378 | 895,131,774 | MDU6SXNzdWU4OTUxMzE3NzQ= | 2,378 | Add missing dataset_infos.json files | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,411,872,000 | 1,621,411,872,000 | null | MEMBER | null | null | null | Some of the datasets in `datasets` are missing a `dataset_infos.json` file, e.g.
```
[PosixPath('datasets/chr_en/chr_en.py'), PosixPath('datasets/chr_en/README.md')]
[PosixPath('datasets/telugu_books/README.md'), PosixPath('datasets/telugu_books/telugu_books.py')]
[PosixPath('datasets/reclor/README.md'), PosixPath('datasets/reclor/reclor.py')]
[PosixPath('datasets/json/README.md')]
[PosixPath('datasets/csv/README.md')]
[PosixPath('datasets/wikihow/wikihow.py'), PosixPath('datasets/wikihow/README.md')]
[PosixPath('datasets/c4/c4.py'), PosixPath('datasets/c4/README.md')]
[PosixPath('datasets/text/README.md')]
[PosixPath('datasets/lm1b/README.md'), PosixPath('datasets/lm1b/lm1b.py')]
[PosixPath('datasets/pandas/README.md')]
```
For `json`, `text`, csv`, and `pandas` this is expected, but not for the others which should be fixed
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2378/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2377/comments | https://api.github.com/repos/huggingface/datasets/issues/2377/events | https://github.com/huggingface/datasets/issues/2377 | 894,918,927 | MDU6SXNzdWU4OTQ5MTg5Mjc= | 2,377 | ArrowDataset.save_to_disk produces files that cannot be read using pyarrow.feather | {
"login": "Ark-kun",
"id": 1829149,
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ark-kun",
"html_url": "https://github.com/Ark-kun",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,389,877,000 | 1,622,803,151,000 | null | NONE | null | null | null | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from pyarrow import feather
dataset = load_dataset('imdb', split='train')
dataset.save_to_disk('dataset_dir')
table = feather.read_table('dataset_dir/dataset.arrow')
```
## Expected results
I expect that the saved dataset can be read by the official Apache Arrow methods.
## Actual results
```
File "/usr/local/lib/python3.7/site-packages/pyarrow/feather.py", line 236, in read_table
reader.open(source, use_memory_map=memory_map)
File "pyarrow/feather.pxi", line 67, in pyarrow.lib.FeatherReader.open
File "pyarrow/error.pxi", line 123, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Not a Feather V1 or Arrow IPC file
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets-1.6.2
- Platform: Linux
- Python version: 3.7
- PyArrow version: 0.17.1, also 2.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2377/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2376/comments | https://api.github.com/repos/huggingface/datasets/issues/2376/events | https://github.com/huggingface/datasets/pull/2376 | 894,852,264 | MDExOlB1bGxSZXF1ZXN0NjQ3MTU1NDE4 | 2,376 | Improve task api code quality | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,379,620,000 | 1,622,666,397,000 | 1,621,956,654,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2376",
"html_url": "https://github.com/huggingface/datasets/pull/2376",
"diff_url": "https://github.com/huggingface/datasets/pull/2376.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2376.patch",
"merged_at": 1621956654000
} | Improves the code quality of the `TaskTemplate` dataclasses.
Changes:
* replaces `return NotImplemented` with raise `NotImplementedError`
* replaces `sorted` with `len` in the uniqueness check
* defines `label2id` and `id2label` in the `TextClassification` template as properties
* replaces the `object.__setattr__(self, attr, value)` syntax with (IMO nicer) `self.__dict__[attr] = value` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2376/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2375 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2375/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2375/comments | https://api.github.com/repos/huggingface/datasets/issues/2375/events | https://github.com/huggingface/datasets/pull/2375 | 894,655,157 | MDExOlB1bGxSZXF1ZXN0NjQ2OTg2NTcw | 2,375 | Dataset Streaming | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,362,000,000 | 1,624,466,102,000 | 1,624,466,101,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2375",
"html_url": "https://github.com/huggingface/datasets/pull/2375",
"diff_url": "https://github.com/huggingface/datasets/pull/2375.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2375.patch",
"merged_at": 1624466101000
} | # Dataset Streaming
## API
Current API is
```python
from datasets import load_dataset
# Load an IterableDataset without downloading data
snli = load_dataset("snli", streaming=True)
# Access examples by streaming data
print(next(iter(snli["train"])))
# {'premise': 'A person on a horse jumps over a broken down airplane.',
# 'hypothesis': 'A person is training his horse for a competition.',
# 'label': 1}
```
I already implemented a few methods:
- IterableDataset.map: apply transforms on-the-fly to the examples
- IterableDataset.shuffle: shuffle the data _a la_ TFDS, i.e. with a shuffling buffer
- IterableDataset.with_format: set the format to `"torch"` to get a `torch.utils.data.IterableDataset`
- merge_datasets: merge two iterable datasets by alternating one or the other (you can specify the probabilities)
I would love to have your opinion on the API design :)
## Implementation details
### Streaming
Data streaming is done using `fsspec` which has nice caching features.
To make dataset streaming work I extend the `open` function of dataset scripts to support opening remote files without downloading them entirely. It also works with remote compressed archives (currently only zip is supported):
```python
# Get a file-like object by streaming data from a remote file
open("https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt")
# Get a file-like object by streaming data from a remote compressed archive by using the hop separator "::"
open("zip://snli_1.0_train.txt::https://nlp.stanford.edu/projects/snli/snli_1.0.zip")
```
I also extend the `os.path.join` function to support navigation in remote compressed archives, since it has to deal with the `"::"` separator. This separator is used by `fsspec`.
Finally I also added a retry mechanism in case the connection fails during data streaming.
### Transforms
An IterableDataset wraps an ExamplesIterable instance. There are different subclasses depending on the transforms we want to apply:
- ExamplesIterable: the basic one
- MappedExamplesIterable: an iterable with a `map` function applied on the fly
- BufferShuffledExamplesIterable: an iterable with a shuffling buffer
- CyclingMultiSourcesExamplesIterable: alternates between several ExamplesIterable
- RandomlyCyclingMultiSourcesExamplesIterable: randomly alternates between several ExamplesIterable
### DatasetBuilder
I use the same builders as usual. I just added a new method `_get_examples_iterable_for_split` to get an ExamplesIterable for a given split. Currently only the GeneratorBasedBuilder and the ArrowBasedBuilder implement it.
The BeamBasedBuilder doesn't implement it yet.
It means that datasets like wikipedia and natural_questions can't be loaded as IterableDataset for now.
## Other details
<S>I may have to do some changes in many dataset script to use `download` instead of `download_and_extract` when extraction is not needed. This will avoid errors for streaming.</s>
EDIT: Actually I just check for the extension of the file to do extraction only if needed.
EDIT2: It's not possible to stream from .tar.gz files without downloading the file completely. For now I raise an error if one want to get a streaming dataset based on .tar.gz files.
## TODO
usual stuff:
- [x] make streaming dependency "aiohttp" optional: `pip install datasets[streaming]`
- [x] tests
- [x] docs | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2375/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 6,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2375/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2374/comments | https://api.github.com/repos/huggingface/datasets/issues/2374/events | https://github.com/huggingface/datasets/pull/2374 | 894,579,364 | MDExOlB1bGxSZXF1ZXN0NjQ2OTIyMjkw | 2,374 | add `desc` to `tqdm` in `Dataset.map()` | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,356,269,000 | 1,622,130,244,000 | 1,622,041,161,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2374",
"html_url": "https://github.com/huggingface/datasets/pull/2374",
"diff_url": "https://github.com/huggingface/datasets/pull/2374.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2374.patch",
"merged_at": 1622041161000
} | Fixes #2330. Please let me know if anything is also required in this | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2374/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2374/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2373 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2373/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2373/comments | https://api.github.com/repos/huggingface/datasets/issues/2373/events | https://github.com/huggingface/datasets/issues/2373 | 894,499,909 | MDU6SXNzdWU4OTQ0OTk5MDk= | 2,373 | Loading dataset from local path | {
"login": "kolakows",
"id": 34172905,
"node_id": "MDQ6VXNlcjM0MTcyOTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/34172905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kolakows",
"html_url": "https://github.com/kolakows",
"followers_url": "https://api.github.com/users/kolakows/followers",
"following_url": "https://api.github.com/users/kolakows/following{/other_user}",
"gists_url": "https://api.github.com/users/kolakows/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kolakows/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kolakows/subscriptions",
"organizations_url": "https://api.github.com/users/kolakows/orgs",
"repos_url": "https://api.github.com/users/kolakows/repos",
"events_url": "https://api.github.com/users/kolakows/events{/privacy}",
"received_events_url": "https://api.github.com/users/kolakows/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,351,250,000 | 1,621,352,196,000 | 1,621,352,195,000 | NONE | null | null | null | I'm trying to load a local dataset with the code below
```
ds = datasets.load_dataset('my_script.py',
data_files='corpus.txt',
data_dir='/data/dir',
cache_dir='.')
```
But internally a BuilderConfig is created, which tries to use getmtime on the data_files string, without using data_dir. Is this a bug or am I not using the load_dataset correctly?
https://github.com/huggingface/datasets/blob/bc61954083f74e6460688202e9f77dde2475319c/src/datasets/builder.py#L153 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2373/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2372 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2372/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2372/comments | https://api.github.com/repos/huggingface/datasets/issues/2372/events | https://github.com/huggingface/datasets/pull/2372 | 894,496,064 | MDExOlB1bGxSZXF1ZXN0NjQ2ODUxODc2 | 2,372 | ConvQuestions benchmark added | {
"login": "PhilippChr",
"id": 24608689,
"node_id": "MDQ6VXNlcjI0NjA4Njg5",
"avatar_url": "https://avatars.githubusercontent.com/u/24608689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilippChr",
"html_url": "https://github.com/PhilippChr",
"followers_url": "https://api.github.com/users/PhilippChr/followers",
"following_url": "https://api.github.com/users/PhilippChr/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilippChr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilippChr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilippChr/subscriptions",
"organizations_url": "https://api.github.com/users/PhilippChr/orgs",
"repos_url": "https://api.github.com/users/PhilippChr/repos",
"events_url": "https://api.github.com/users/PhilippChr/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilippChr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,351,010,000 | 1,622,025,105,000 | 1,622,025,105,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2372",
"html_url": "https://github.com/huggingface/datasets/pull/2372",
"diff_url": "https://github.com/huggingface/datasets/pull/2372.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2372.patch",
"merged_at": 1622025105000
} | Hello,
I would like to integrate our dataset on conversational QA. The answers are grounded in the KG.
The work was published in CIKM 2019 (https://dl.acm.org/doi/10.1145/3357384.3358016).
We hope for further research on how to deal with the challenges of factoid conversational QA.
Thanks! :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2372/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2372/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2371/comments | https://api.github.com/repos/huggingface/datasets/issues/2371/events | https://github.com/huggingface/datasets/issues/2371 | 894,193,403 | MDU6SXNzdWU4OTQxOTM0MDM= | 2,371 | Align question answering tasks with sub-domains | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,331,279,000 | 1,621,331,362,000 | null | MEMBER | null | null | null | As pointed out by @thomwolf in #2255 we should consider breaking with the pipeline taxonomy of `transformers` to account for the various types of question-answering domains:
> `question-answering` exists in two forms: abstractive and extractive question answering.
>
> we can keep a generic `question-answering` but then it will probably mean diferrent schema of input/output for both (abstractive will have text for both while extractive can use spans indication as well as text).
>
> Or we can also propose to use `abstractive-question-answering` and `extractive-question-answering` for instance.
> Maybe we could have `question-answering-abstractive` and `question-answering-extractive` if somehow we can use a for a completion or search in the future (detail).
> Actually I see that people are more organizing in terms of general and sub-tasks, for instance on paperwithcode: https://paperswithcode.com/area/natural-language-processing and on nlpprogress: https://github.com/sebastianruder/NLP-progress/blob/master/english/question_answering.md#squad
>
> Probably the best is to align with one of these in terms of denomination, PaperWithCode is probably the most active and maintained and we work with them as well.
> Maybe you want to check with a few QA datasets that this schema make sense. Typically NaturalQuestions, TriviaQA and can be good second datasets to compare to and be sure of the generality of the schema.
>
> A good recent list of QA datasets to compare the schemas among, is for instance in the UnitedQA paper: https://arxiv.org/abs/2101.00178
Investigate which grouping of QA is best suited for `datasets` and adapt / extend the QA task template accordingly. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2371/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2370 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2370/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2370/comments | https://api.github.com/repos/huggingface/datasets/issues/2370/events | https://github.com/huggingface/datasets/pull/2370 | 893,606,432 | MDExOlB1bGxSZXF1ZXN0NjQ2MDkyNDQy | 2,370 | Adding HendrycksTest dataset | {
"login": "andyzoujm",
"id": 43451571,
"node_id": "MDQ6VXNlcjQzNDUxNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/43451571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andyzoujm",
"html_url": "https://github.com/andyzoujm",
"followers_url": "https://api.github.com/users/andyzoujm/followers",
"following_url": "https://api.github.com/users/andyzoujm/following{/other_user}",
"gists_url": "https://api.github.com/users/andyzoujm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andyzoujm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andyzoujm/subscriptions",
"organizations_url": "https://api.github.com/users/andyzoujm/orgs",
"repos_url": "https://api.github.com/users/andyzoujm/repos",
"events_url": "https://api.github.com/users/andyzoujm/events{/privacy}",
"received_events_url": "https://api.github.com/users/andyzoujm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,277,585,000 | 1,622,479,033,000 | 1,622,479,033,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2370",
"html_url": "https://github.com/huggingface/datasets/pull/2370",
"diff_url": "https://github.com/huggingface/datasets/pull/2370.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2370.patch",
"merged_at": 1622479033000
} | Adding Hendrycks test from https://arxiv.org/abs/2009.03300.
I'm having a bit of trouble with dummy data creation because some lines in the csv files aren't being loaded properly (only the first entry loaded in a row of length 6). The dataset is loading just fine. Hope you can kindly help!
Thank you! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2370/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2369/comments | https://api.github.com/repos/huggingface/datasets/issues/2369/events | https://github.com/huggingface/datasets/pull/2369 | 893,554,153 | MDExOlB1bGxSZXF1ZXN0NjQ2MDQ5NDM1 | 2,369 | correct labels of conll2003 | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,273,074,000 | 1,621,326,462,000 | 1,621,326,462,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2369",
"html_url": "https://github.com/huggingface/datasets/pull/2369",
"diff_url": "https://github.com/huggingface/datasets/pull/2369.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2369.patch",
"merged_at": 1621326462000
} | # What does this PR
It fixes/extends the `ner_tags` for conll2003 to include all.
Paper reference https://arxiv.org/pdf/cs/0306050v1.pdf
Model reference https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/blob/main/config.json
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2369/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2368/comments | https://api.github.com/repos/huggingface/datasets/issues/2368/events | https://github.com/huggingface/datasets/pull/2368 | 893,411,076 | MDExOlB1bGxSZXF1ZXN0NjQ1OTI5NzM0 | 2,368 | Allow "other-X" in licenses | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,262,874,000 | 1,621,269,387,000 | 1,621,269,387,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2368",
"html_url": "https://github.com/huggingface/datasets/pull/2368",
"diff_url": "https://github.com/huggingface/datasets/pull/2368.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2368.patch",
"merged_at": 1621269387000
} | This PR allows "other-X" licenses during metadata validation.
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2368/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2367/comments | https://api.github.com/repos/huggingface/datasets/issues/2367/events | https://github.com/huggingface/datasets/pull/2367 | 893,317,427 | MDExOlB1bGxSZXF1ZXN0NjQ1ODUxNTE0 | 2,367 | Remove getchildren from hyperpartisan news detection | {
"login": "ghomasHudson",
"id": 13795113,
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghomasHudson",
"html_url": "https://github.com/ghomasHudson",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,257,037,000 | 1,621,260,433,000 | 1,621,260,433,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2367",
"html_url": "https://github.com/huggingface/datasets/pull/2367",
"diff_url": "https://github.com/huggingface/datasets/pull/2367.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2367.patch",
"merged_at": 1621260432000
} | `Element.getchildren()` is now deprecated in the ElementTree library (I think in python 3.9, so it still passes the automated tests which are using 3.6. But for those of us on bleeding-edge distros it now fails).
https://bugs.python.org/issue29209 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2367/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2366/comments | https://api.github.com/repos/huggingface/datasets/issues/2366/events | https://github.com/huggingface/datasets/issues/2366 | 893,185,266 | MDU6SXNzdWU4OTMxODUyNjY= | 2,366 | Json loader fails if user-specified features don't match the json data fields order | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,247,168,000 | 1,623,840,469,000 | 1,623,840,469,000 | MEMBER | null | null | null | If you do
```python
dataset = load_dataset("json", data_files=data_files, features=features)
```
Then depending on the order of the features in the json data field it fails:
```python
[...]
~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
94 if self.config.schema:
95 # Cast allows str <-> int/float, while parse_option explicit_schema does NOT
---> 96 pa_table = pa_table.cast(self.config.schema)
97 yield i, pa_table
[...]
ValueError: Target schema's field names are not matching the table's field names: ['tokens', 'ner_tags'], ['ner_tags', 'tokens']
```
This is because one must first re-order the columns of the table to match the `self.config.schema` before calling cast.
One way to fix the `cast` would be to replace it with:
```python
# reorder the arrays if necessary + cast to schema
# we can't simply use .cast here because we may need to change the order of the columns
pa_table = pa.Table.from_arrays([pa_table[name] for name in schema.names], schema=schema)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2366/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2365/comments | https://api.github.com/repos/huggingface/datasets/issues/2365/events | https://github.com/huggingface/datasets/issues/2365 | 893,179,697 | MDU6SXNzdWU4OTMxNzk2OTc= | 2,365 | Missing ClassLabel encoding in Json loader | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,246,750,000 | 1,624,892,734,000 | 1,624,892,734,000 | MEMBER | null | null | null | Currently if you want to load a json dataset this way
```python
dataset = load_dataset("json", data_files=data_files, features=features)
```
Then if your features has ClassLabel types and if your json data needs class label encoding (i.e. if the labels in the json files are strings and not integers), then it would fail:
```python
[...]
~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
94 if self.config.schema:
95 # Cast allows str <-> int/float, while parse_option explicit_schema does NOT
---> 96 pa_table = pa_table.cast(self.config.schema)
97 yield i, pa_table
[...]
ArrowInvalid: Failed to parse string: 'O' as a scalar of type int64
```
This is because it just tries to cast the string data to integers, without applying the mapping str->int first
The current workaround is to do instead
```python
dataset = load_dataset("json", data_files=data_files)
dataset = dataset.map(features.encode_example, features=features)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2365/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2365/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2364/comments | https://api.github.com/repos/huggingface/datasets/issues/2364/events | https://github.com/huggingface/datasets/pull/2364 | 892,420,500 | MDExOlB1bGxSZXF1ZXN0NjQ1MTI4MDYx | 2,364 | README updated for SNLI, MNLI | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,078,679,000 | 1,621,260,867,000 | 1,621,258,459,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2364",
"html_url": "https://github.com/huggingface/datasets/pull/2364",
"diff_url": "https://github.com/huggingface/datasets/pull/2364.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2364.patch",
"merged_at": 1621258458000
} | Closes #2275. Mentioned about -1 labels in MNLI, SNLI and how they should be removed before training. @lhoestq `check_code_quality` test might fail for MNLI as the license name `other-Open Portion of the American National Corpus` is not a registered tag for 'licenses' | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2364/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2363/comments | https://api.github.com/repos/huggingface/datasets/issues/2363/events | https://github.com/huggingface/datasets/issues/2363 | 892,391,232 | MDU6SXNzdWU4OTIzOTEyMzI= | 2,363 | Trying to use metric.compute but get OSError | {
"login": "hyusterr",
"id": 52968111,
"node_id": "MDQ6VXNlcjUyOTY4MTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/52968111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hyusterr",
"html_url": "https://github.com/hyusterr",
"followers_url": "https://api.github.com/users/hyusterr/followers",
"following_url": "https://api.github.com/users/hyusterr/following{/other_user}",
"gists_url": "https://api.github.com/users/hyusterr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hyusterr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hyusterr/subscriptions",
"organizations_url": "https://api.github.com/users/hyusterr/orgs",
"repos_url": "https://api.github.com/users/hyusterr/repos",
"events_url": "https://api.github.com/users/hyusterr/events{/privacy}",
"received_events_url": "https://api.github.com/users/hyusterr/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,067,946,000 | 1,630,936,866,000 | null | NONE | null | null | null | I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 model.train()
197 for step, batch in enumerate(train_loader):
198 # print(batch['input_ids'].shape)
199 outputs = model(**batch)
200
201 loss = outputs.loss
202 loss /= gradient_accumulation_steps
203 accelerator.backward(loss)
204
205 predictions = outputs.logits.argmax(dim=-1)
206 metric.add_batch(
207 predictions=accelerator.gather(predictions),
208 references=accelerator.gather(batch['labels'])
209 )
210 progress_bar.set_postfix({'loss': loss.item(), 'train batch acc.': train_metrics})
211
212 if (step + 1) % 50 == 0 or step == len(train_loader) - 1:
213 train_metrics = metric.compute()
```
the error message is as below:
```
Traceback (most recent call last):
File "run_multi.py", line 273, in <module>
main()
File "/home/yshuang/.local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/home/yshuang/.local/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/home/yshuang/.local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/yshuang/.local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "run_multi.py", line 213, in main
train_metrics = metric.compute()
File "/home/yshuang/.local/lib/python3.8/site-packages/datasets/metric.py", line 391, in compute
self._finalize()
File "/home/yshuang/.local/lib/python3.8/site-packages/datasets/metric.py", line 342, in _finalize
self.writer.finalize()
File "/home/yshuang/.local/lib/python3.8/site-packages/datasets/arrow_writer.py", line 370, in finalize
self.stream.close()
File "pyarrow/io.pxi", line 132, in pyarrow.lib.NativeFile.close
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
OSError: error closing file
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.1
- Platform: Linux NAME="Ubuntu" VERSION="20.04.1 LTS (Focal Fossa)"
- Python version: python3.8.5
- PyArrow version: 4.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2363/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2362/comments | https://api.github.com/repos/huggingface/datasets/issues/2362/events | https://github.com/huggingface/datasets/pull/2362 | 892,100,749 | MDExOlB1bGxSZXF1ZXN0NjQ0ODYzOTQw | 2,362 | Fix web_nlg metadata | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,012,507,000 | 1,621,259,057,000 | 1,621,258,948,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2362",
"html_url": "https://github.com/huggingface/datasets/pull/2362",
"diff_url": "https://github.com/huggingface/datasets/pull/2362.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2362.patch",
"merged_at": null
} | Our metadata storage system does not support `.` inside keys. cc @Pierrci
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2362/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2361 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2361/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2361/comments | https://api.github.com/repos/huggingface/datasets/issues/2361/events | https://github.com/huggingface/datasets/pull/2361 | 891,982,808 | MDExOlB1bGxSZXF1ZXN0NjQ0NzYzNTU4 | 2,361 | Preserve dtype for numpy/torch/tf/jax arrays | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,003,523,000 | 1,629,189,004,000 | 1,629,189,004,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2361",
"html_url": "https://github.com/huggingface/datasets/pull/2361",
"diff_url": "https://github.com/huggingface/datasets/pull/2361.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2361.patch",
"merged_at": 1629189004000
} | Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2361/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2361/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2360 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2360/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2360/comments | https://api.github.com/repos/huggingface/datasets/issues/2360/events | https://github.com/huggingface/datasets/issues/2360 | 891,965,964 | MDU6SXNzdWU4OTE5NjU5NjQ= | 2,360 | Automatically detect datasets with compatible task schemas | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,002,220,000 | 1,621,002,220,000 | null | MEMBER | null | null | null | See description of #2255 for details.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2360/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2359/comments | https://api.github.com/repos/huggingface/datasets/issues/2359/events | https://github.com/huggingface/datasets/issues/2359 | 891,946,017 | MDU6SXNzdWU4OTE5NDYwMTc= | 2,359 | Allow model labels to be passed during task preparation | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,621,000,708,000 | 1,621,000,708,000 | null | MEMBER | null | null | null | Models have a config with label2id. And we have the same for datasets with the ClassLabel feature type. At one point either the model or the dataset must sync with the other. It would be great to do that on the dataset side.
For example for sentiment classification on amazon reviews with you could have these labels:
- "1 star", "2 stars", "3 stars", "4 stars", "5 stars"
- "1", "2", "3", "4", "5"
Some models may use the first set, while other models use the second set.
Here in the `TextClassification` class, the user can only specify one set of labels, while many models could actually be compatible but have different sets of labels. Should we allow users to pass a list of compatible labels sets ?
Then in terms of API, users could use `dataset.prepare_for_task("text-classification", labels=model.labels)` or something like that.
The label set could also be the same but not in the same order. For NLI for example, some models use `["neutral", "entailment", "contradiction"]` and some others use `["neutral", "contradiction", "entailment"]`, so we should take care of updating the order of the labels in the dataset to match the labels order of the model.
Let me know what you think ! This can be done in a future PR
_Originally posted by @lhoestq in https://github.com/huggingface/datasets/pull/2255#discussion_r632412792_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2359/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2358/comments | https://api.github.com/repos/huggingface/datasets/issues/2358/events | https://github.com/huggingface/datasets/pull/2358 | 891,269,577 | MDExOlB1bGxSZXF1ZXN0NjQ0MTYyOTY2 | 2,358 | Roman Urdu Stopwords List | {
"login": "devzohaib",
"id": 58664161,
"node_id": "MDQ6VXNlcjU4NjY0MTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/58664161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devzohaib",
"html_url": "https://github.com/devzohaib",
"followers_url": "https://api.github.com/users/devzohaib/followers",
"following_url": "https://api.github.com/users/devzohaib/following{/other_user}",
"gists_url": "https://api.github.com/users/devzohaib/gists{/gist_id}",
"starred_url": "https://api.github.com/users/devzohaib/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/devzohaib/subscriptions",
"organizations_url": "https://api.github.com/users/devzohaib/orgs",
"repos_url": "https://api.github.com/users/devzohaib/repos",
"events_url": "https://api.github.com/users/devzohaib/events{/privacy}",
"received_events_url": "https://api.github.com/users/devzohaib/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,930,567,000 | 1,621,414,243,000 | 1,621,260,310,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2358",
"html_url": "https://github.com/huggingface/datasets/pull/2358",
"diff_url": "https://github.com/huggingface/datasets/pull/2358.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2358.patch",
"merged_at": null
} | A list of most frequently used Roman Urdu words with different spellings and usages.
This is a very basic effort to collect some basic stopwords for Roman Urdu to help efforts of analyzing text data in roman Urdu which makes up a huge part of daily internet interaction of Roman-Urdu users. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2358/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2357/comments | https://api.github.com/repos/huggingface/datasets/issues/2357/events | https://github.com/huggingface/datasets/pull/2357 | 890,595,693 | MDExOlB1bGxSZXF1ZXN0NjQzNTk0NDcz | 2,357 | Adding Microsoft CodeXGlue Datasets | {
"login": "ncoop57",
"id": 7613470,
"node_id": "MDQ6VXNlcjc2MTM0NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncoop57",
"html_url": "https://github.com/ncoop57",
"followers_url": "https://api.github.com/users/ncoop57/followers",
"following_url": "https://api.github.com/users/ncoop57/following{/other_user}",
"gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions",
"organizations_url": "https://api.github.com/users/ncoop57/orgs",
"repos_url": "https://api.github.com/users/ncoop57/repos",
"events_url": "https://api.github.com/users/ncoop57/events{/privacy}",
"received_events_url": "https://api.github.com/users/ncoop57/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,866,581,000 | 1,623,144,597,000 | 1,623,144,597,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2357",
"html_url": "https://github.com/huggingface/datasets/pull/2357",
"diff_url": "https://github.com/huggingface/datasets/pull/2357.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2357.patch",
"merged_at": 1623144597000
} | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2357/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2356 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2356/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2356/comments | https://api.github.com/repos/huggingface/datasets/issues/2356/events | https://github.com/huggingface/datasets/issues/2356 | 890,511,019 | MDU6SXNzdWU4OTA1MTEwMTk= | 2,356 | How to Add New Metrics Guide | {
"login": "ncoop57",
"id": 7613470,
"node_id": "MDQ6VXNlcjc2MTM0NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncoop57",
"html_url": "https://github.com/ncoop57",
"followers_url": "https://api.github.com/users/ncoop57/followers",
"following_url": "https://api.github.com/users/ncoop57/following{/other_user}",
"gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions",
"organizations_url": "https://api.github.com/users/ncoop57/orgs",
"repos_url": "https://api.github.com/users/ncoop57/repos",
"events_url": "https://api.github.com/users/ncoop57/events{/privacy}",
"received_events_url": "https://api.github.com/users/ncoop57/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,855,726,000 | 1,622,486,975,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Currently there is an absolutely fantastic guide for how to contribute a new dataset to the library. However, there isn't one for adding new metrics.
**Describe the solution you'd like**
I'd like for a guide in a similar style to the dataset guide for adding metrics. I believe many of the content in the dataset guide such as setup can be easily copied over with minimal changes. Also, from what I've seen with existing metrics, it shouldn't be as complicated, especially in documentation of the metric, mainly just citation and usage. The most complicated part I see would be in automated tests that run the new metrics, but y'all's test suite seem pretty comprehensive, so it might not be that hard.
**Describe alternatives you've considered**
One alternative would be just not having the metrics be community generated and so would not need a step by step guide. New metrics would just be proposed as issues and the internal team would take care of them. However, I think it makes more sense to have a step by step guide for contributors to follow.
**Additional context**
I'd be happy to help with creating this guide as I am very interested in adding software engineering metrics to the library :nerd_face:, the part I would need guidance on would be testing.
P.S. Love the library and community y'all have built! :hugs:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2356/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2355 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2355/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2355/comments | https://api.github.com/repos/huggingface/datasets/issues/2355/events | https://github.com/huggingface/datasets/pull/2355 | 890,484,408 | MDExOlB1bGxSZXF1ZXN0NjQzNDk5NTIz | 2,355 | normalized TOCs and titles in data cards | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,853,199,000 | 1,620,998,592,000 | 1,620,998,592,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2355",
"html_url": "https://github.com/huggingface/datasets/pull/2355",
"diff_url": "https://github.com/huggingface/datasets/pull/2355.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2355.patch",
"merged_at": 1620998592000
} | I started fixing some of the READMEs that were failing the tests introduced by @gchhablani but then realized that there were some consistent differences between earlier and newer versions of some of the titles (e.g. Data Splits vs Data Splits Sample Size, Supported Tasks vs Supported Tasks and Leaderboards). We also had different versions of the Table of Content
This PR normalizes all of them to the newer version | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2355/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2355/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2354 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2354/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2354/comments | https://api.github.com/repos/huggingface/datasets/issues/2354/events | https://github.com/huggingface/datasets/issues/2354 | 890,439,523 | MDU6SXNzdWU4OTA0Mzk1MjM= | 2,354 | Document DatasetInfo attributes | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,849,689,000 | 1,621,675,574,000 | 1,621,675,574,000 | MEMBER | null | null | null | **Is your feature request related to a problem? Please describe.**
As noted in PR #2255, the attributes of `DatasetInfo` are not documented in the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=datasetinfo#datasetinfo). It would be nice to do so :)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2354/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2353/comments | https://api.github.com/repos/huggingface/datasets/issues/2353/events | https://github.com/huggingface/datasets/pull/2353 | 890,296,262 | MDExOlB1bGxSZXF1ZXN0NjQzMzM4MDcz | 2,353 | Update README vallidation rules | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,838,646,000 | 1,620,982,566,000 | 1,620,982,566,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2353",
"html_url": "https://github.com/huggingface/datasets/pull/2353",
"diff_url": "https://github.com/huggingface/datasets/pull/2353.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2353.patch",
"merged_at": 1620982566000
} | This PR allows unexpected subsections under third-level headings. All except `Contributions`.
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2353/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2352 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2352/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2352/comments | https://api.github.com/repos/huggingface/datasets/issues/2352/events | https://github.com/huggingface/datasets/pull/2352 | 889,810,100 | MDExOlB1bGxSZXF1ZXN0NjQyOTI4NTgz | 2,352 | Set to_json default to JSON lines | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,807,565,000 | 1,621,587,674,000 | 1,621,587,673,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2352",
"html_url": "https://github.com/huggingface/datasets/pull/2352",
"diff_url": "https://github.com/huggingface/datasets/pull/2352.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2352.patch",
"merged_at": 1621587673000
} | With this PR, the method `Dataset.to_json`:
- is added to the docs
- defaults to JSON lines | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2352/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2352/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2351 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2351/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2351/comments | https://api.github.com/repos/huggingface/datasets/issues/2351/events | https://github.com/huggingface/datasets/pull/2351 | 889,584,953 | MDExOlB1bGxSZXF1ZXN0NjQyNzI5NDIz | 2,351 | simpllify faiss index save | {
"login": "Guitaricet",
"id": 2821124,
"node_id": "MDQ6VXNlcjI4MjExMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guitaricet",
"html_url": "https://github.com/Guitaricet",
"followers_url": "https://api.github.com/users/Guitaricet/followers",
"following_url": "https://api.github.com/users/Guitaricet/following{/other_user}",
"gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions",
"organizations_url": "https://api.github.com/users/Guitaricet/orgs",
"repos_url": "https://api.github.com/users/Guitaricet/repos",
"events_url": "https://api.github.com/users/Guitaricet/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guitaricet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,791,650,000 | 1,621,258,901,000 | 1,621,258,901,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2351",
"html_url": "https://github.com/huggingface/datasets/pull/2351",
"diff_url": "https://github.com/huggingface/datasets/pull/2351.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2351.patch",
"merged_at": 1621258901000
} | Fixes #2350
In some cases, Faiss GPU index objects do not have neither "device" nor "getDevice". Possibly this happens when some part of the index is computed on CPU.
In particular, this would happen with the index `OPQ16_128,IVF512,PQ32` (issue #2350). I did check it, but it is likely that `OPQ` or `PQ` transforms cause it.
I propose, instead of using the index object to get the device, to infer it form the `FaissIndex.device` field as it is done in `.add_vectors`. Here we assume that `.device` always corresponds to the index placement and it seems reasonable. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2351/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2350/comments | https://api.github.com/repos/huggingface/datasets/issues/2350/events | https://github.com/huggingface/datasets/issues/2350 | 889,580,247 | MDU6SXNzdWU4ODk1ODAyNDc= | 2,350 | `FaissIndex.save` throws error on GPU | {
"login": "Guitaricet",
"id": 2821124,
"node_id": "MDQ6VXNlcjI4MjExMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guitaricet",
"html_url": "https://github.com/Guitaricet",
"followers_url": "https://api.github.com/users/Guitaricet/followers",
"following_url": "https://api.github.com/users/Guitaricet/following{/other_user}",
"gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions",
"organizations_url": "https://api.github.com/users/Guitaricet/orgs",
"repos_url": "https://api.github.com/users/Guitaricet/repos",
"events_url": "https://api.github.com/users/Guitaricet/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guitaricet/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,790,916,000 | 1,621,258,901,000 | 1,621,258,901,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error.
```
File "index_wikipedia.py", line 119, in <module>
data["train"].save_faiss_index("text_emb", index_save_path)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 470, in save_faiss_index
index.save(file)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 334, in save
faiss.write_index(index, str(file))
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/faiss/swigfaiss_avx2.py", line 5654, in write_index
return _swigfaiss.write_index(*args)
RuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /root/miniconda3/conda-bld/faiss-pkg_1613235005464/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index
```
## Steps to reproduce the bug
Any dataset will do, I just selected a familiar one.
```python
import numpy as np
import datasets
INDEX_STR = "OPQ16_128,IVF512,PQ32"
INDEX_SAVE_PATH = "will_not_save.faiss"
data = datasets.load_dataset("Fraser/news-category-dataset", split=f"train[:10000]")
def encode(item):
return {"text_emb": np.random.randn(768).astype(np.float32)}
data = data.map(encode)
data.add_faiss_index(column="text_emb", string_factory=INDEX_STR, train_size=10_000, device=0)
data.save_faiss_index("text_emb", INDEX_SAVE_PATH)
```
## Expected results
Saving the index
## Actual results
Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) ... don't know how to serialize this type of index
## Environment info
- `datasets` version: 1.6.2
- Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
I will be proposing a fix in a couple of minutes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2350/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2349/comments | https://api.github.com/repos/huggingface/datasets/issues/2349/events | https://github.com/huggingface/datasets/pull/2349 | 888,586,018 | MDExOlB1bGxSZXF1ZXN0NjQxNzYzNzg3 | 2,349 | Update task_ids for Ascent KB | {
"login": "phongnt570",
"id": 6749421,
"node_id": "MDQ6VXNlcjY3NDk0MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6749421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phongnt570",
"html_url": "https://github.com/phongnt570",
"followers_url": "https://api.github.com/users/phongnt570/followers",
"following_url": "https://api.github.com/users/phongnt570/following{/other_user}",
"gists_url": "https://api.github.com/users/phongnt570/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phongnt570/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phongnt570/subscriptions",
"organizations_url": "https://api.github.com/users/phongnt570/orgs",
"repos_url": "https://api.github.com/users/phongnt570/repos",
"events_url": "https://api.github.com/users/phongnt570/events{/privacy}",
"received_events_url": "https://api.github.com/users/phongnt570/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,765,873,000 | 1,621,248,794,000 | 1,621,248,514,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2349",
"html_url": "https://github.com/huggingface/datasets/pull/2349",
"diff_url": "https://github.com/huggingface/datasets/pull/2349.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2349.patch",
"merged_at": 1621248514000
} | This "other-other-knowledge-base" task is better suited for the dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2349/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2348/comments | https://api.github.com/repos/huggingface/datasets/issues/2348/events | https://github.com/huggingface/datasets/pull/2348 | 887,927,737 | MDExOlB1bGxSZXF1ZXN0NjQxMTMwOTM4 | 2,348 | Add tests for dataset cards | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,753,267,000 | 1,621,599,047,000 | 1,621,599,047,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2348",
"html_url": "https://github.com/huggingface/datasets/pull/2348",
"diff_url": "https://github.com/huggingface/datasets/pull/2348.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2348.patch",
"merged_at": 1621599047000
} | Adding tests for dataset cards
This PR will potentially remove the scripts being used for dataset tags and readme validation.
Additionally, this will allow testing dataset readmes by providing the name as follows:
```bash
pytest tests/test_dataset_cards.py::test_dataset_tags[fashion_mnist]
```
and
```bash
pytest tests/test_dataset_cards.py::test_readme_content[fashion_mnist]
```
or a combined test as:
```bash
pytest tests/test_dataset_cards.py::test_dataset_card[fashion_mnist]
```
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2348/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2348/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2347/comments | https://api.github.com/repos/huggingface/datasets/issues/2347/events | https://github.com/huggingface/datasets/issues/2347 | 887,404,868 | MDU6SXNzdWU4ODc0MDQ4Njg= | 2,347 | Add an API to access the language and pretty name of a dataset | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,742,208,000 | 1,621,589,206,000 | null | MEMBER | null | null | null | It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2347/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2347/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2346/comments | https://api.github.com/repos/huggingface/datasets/issues/2346/events | https://github.com/huggingface/datasets/pull/2346 | 886,632,114 | MDExOlB1bGxSZXF1ZXN0NjM5OTAzMjk3 | 2,346 | Add Qasper Dataset | {
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,725,144,000 | 1,621,340,908,000 | 1,621,340,908,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2346",
"html_url": "https://github.com/huggingface/datasets/pull/2346",
"diff_url": "https://github.com/huggingface/datasets/pull/2346.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2346.patch",
"merged_at": 1621340907000
} | [Question Answering on Scientific Research Papers](https://allenai.org/project/qasper/home)
Doing NLP on NLP papers to do NLP ♻️ I had to add it~
- [x] Add README (just gotta fill out some more )
- [x] Dataloader code
- [x] Make dummy dataset
- [x] generate dataset infos
- [x] Tests
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2346/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2346/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2345/comments | https://api.github.com/repos/huggingface/datasets/issues/2345/events | https://github.com/huggingface/datasets/issues/2345 | 886,586,872 | MDU6SXNzdWU4ODY1ODY4NzI= | 2,345 | [Question] How to move and reuse preprocessed dataset? | {
"login": "AtmaHou",
"id": 15045402,
"node_id": "MDQ6VXNlcjE1MDQ1NDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/15045402?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AtmaHou",
"html_url": "https://github.com/AtmaHou",
"followers_url": "https://api.github.com/users/AtmaHou/followers",
"following_url": "https://api.github.com/users/AtmaHou/following{/other_user}",
"gists_url": "https://api.github.com/users/AtmaHou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AtmaHou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AtmaHou/subscriptions",
"organizations_url": "https://api.github.com/users/AtmaHou/orgs",
"repos_url": "https://api.github.com/users/AtmaHou/repos",
"events_url": "https://api.github.com/users/AtmaHou/events{/privacy}",
"received_events_url": "https://api.github.com/users/AtmaHou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,724,157,000 | 1,623,386,351,000 | 1,623,386,351,000 | NONE | null | null | null | Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (It take 2 hour to preprocess),
I tried to :
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_dir/"
but the program still re-preprocess the whole dataset without loading cache.
I also tried to torch.save(lm_datasets, fw), but the saved file is only 14M.
What is the proper way to do this? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2345/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2344/comments | https://api.github.com/repos/huggingface/datasets/issues/2344/events | https://github.com/huggingface/datasets/issues/2344 | 885,331,505 | MDU6SXNzdWU4ODUzMzE1MDU= | 2,344 | Is there a way to join multiple datasets in one? | {
"login": "alexvaca0",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexvaca0",
"html_url": "https://github.com/alexvaca0",
"followers_url": "https://api.github.com/users/alexvaca0/followers",
"following_url": "https://api.github.com/users/alexvaca0/following{/other_user}",
"gists_url": "https://api.github.com/users/alexvaca0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexvaca0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexvaca0/subscriptions",
"organizations_url": "https://api.github.com/users/alexvaca0/orgs",
"repos_url": "https://api.github.com/users/alexvaca0/repos",
"events_url": "https://api.github.com/users/alexvaca0/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexvaca0/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,688,570,000 | 1,620,721,488,000 | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
I need to join 2 datasets, one that is in the hub and another I've created from my files. Is there an easy way to join these 2?
**Describe the solution you'd like**
Id like to join them with a merge or join method, just like pandas dataframes.
**Additional context**
If you want to extend an existing dataset with more data, for example for training a language model, you need that functionality. I've not found it in the documentation. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2344/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2343/comments | https://api.github.com/repos/huggingface/datasets/issues/2343/events | https://github.com/huggingface/datasets/issues/2343 | 883,208,539 | MDU6SXNzdWU4ODMyMDg1Mzk= | 2,343 | Columns are removed before or after map function applied? | {
"login": "taghizad3h",
"id": 8199406,
"node_id": "MDQ6VXNlcjgxOTk0MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8199406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taghizad3h",
"html_url": "https://github.com/taghizad3h",
"followers_url": "https://api.github.com/users/taghizad3h/followers",
"following_url": "https://api.github.com/users/taghizad3h/following{/other_user}",
"gists_url": "https://api.github.com/users/taghizad3h/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taghizad3h/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taghizad3h/subscriptions",
"organizations_url": "https://api.github.com/users/taghizad3h/orgs",
"repos_url": "https://api.github.com/users/taghizad3h/repos",
"events_url": "https://api.github.com/users/taghizad3h/events{/privacy}",
"received_events_url": "https://api.github.com/users/taghizad3h/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,614,180,000 | 1,620,614,180,000 | null | NONE | null | null | null | ## Describe the bug
According to the documentation when applying map function the [remove_columns ](https://huggingface.co/docs/datasets/processing.html#removing-columns) will be removed after they are passed to the function, but in the [source code](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) it's documented that they are removed before applying function. I thinks the source code doc is more accurate, right?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2343/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2342/comments | https://api.github.com/repos/huggingface/datasets/issues/2342/events | https://github.com/huggingface/datasets/pull/2342 | 882,981,420 | MDExOlB1bGxSZXF1ZXN0NjM2NDg0MzM3 | 2,342 | Docs - CER above 1 | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,603,660,000 | 1,620,653,640,000 | 1,620,653,640,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2342",
"html_url": "https://github.com/huggingface/datasets/pull/2342",
"diff_url": "https://github.com/huggingface/datasets/pull/2342.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2342.patch",
"merged_at": 1620653640000
} | CER can actually be greater than 1. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2342/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2341/comments | https://api.github.com/repos/huggingface/datasets/issues/2341/events | https://github.com/huggingface/datasets/pull/2341 | 882,370,933 | MDExOlB1bGxSZXF1ZXN0NjM1OTExODI2 | 2,341 | Added the Ascent KB | {
"login": "phongnt570",
"id": 6749421,
"node_id": "MDQ6VXNlcjY3NDk0MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6749421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phongnt570",
"html_url": "https://github.com/phongnt570",
"followers_url": "https://api.github.com/users/phongnt570/followers",
"following_url": "https://api.github.com/users/phongnt570/following{/other_user}",
"gists_url": "https://api.github.com/users/phongnt570/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phongnt570/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phongnt570/subscriptions",
"organizations_url": "https://api.github.com/users/phongnt570/orgs",
"repos_url": "https://api.github.com/users/phongnt570/repos",
"events_url": "https://api.github.com/users/phongnt570/events{/privacy}",
"received_events_url": "https://api.github.com/users/phongnt570/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,569,859,000 | 1,620,724,619,000 | 1,620,724,619,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2341",
"html_url": "https://github.com/huggingface/datasets/pull/2341",
"diff_url": "https://github.com/huggingface/datasets/pull/2341.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2341.patch",
"merged_at": 1620724618000
} | Added the Ascent Commonsense KB of 8.9M assertions.
- Paper: [Advanced Semantics for Commonsense Knowledge Extraction (WWW'21)](https://arxiv.org/abs/2011.00905)
- Website: https://ascent.mpi-inf.mpg.de/
(I am the author of the dataset) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2341/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2340/comments | https://api.github.com/repos/huggingface/datasets/issues/2340/events | https://github.com/huggingface/datasets/pull/2340 | 882,370,824 | MDExOlB1bGxSZXF1ZXN0NjM1OTExNzIx | 2,340 | More consistent copy logic | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,569,853,000 | 1,620,723,513,000 | 1,620,723,513,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2340",
"html_url": "https://github.com/huggingface/datasets/pull/2340",
"diff_url": "https://github.com/huggingface/datasets/pull/2340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2340.patch",
"merged_at": 1620723513000
} | Use `info.copy()` instead of `copy.deepcopy(info)`.
`Features.copy` now creates a deep copy. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2340/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2338/comments | https://api.github.com/repos/huggingface/datasets/issues/2338/events | https://github.com/huggingface/datasets/pull/2338 | 882,046,077 | MDExOlB1bGxSZXF1ZXN0NjM1NjA3NzQx | 2,338 | fixed download link for web_science | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,551,540,000 | 1,620,653,753,000 | 1,620,653,753,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2338",
"html_url": "https://github.com/huggingface/datasets/pull/2338",
"diff_url": "https://github.com/huggingface/datasets/pull/2338.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2338.patch",
"merged_at": 1620653753000
} | Fixes #2337. Should work with:
`dataset = load_dataset("web_of_science", "WOS11967", ignore_verifications=True)` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2338/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2337/comments | https://api.github.com/repos/huggingface/datasets/issues/2337/events | https://github.com/huggingface/datasets/issues/2337 | 881,610,567 | MDU6SXNzdWU4ODE2MTA1Njc= | 2,337 | NonMatchingChecksumError for web_of_science dataset | {
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,525,722,000 | 1,620,653,753,000 | 1,620,653,753,000 | NONE | null | null | null | NonMatchingChecksumError when trying to download the web_of_science dataset.
>NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zip?dl=1']
Setting `ignore_verfications=True` results in OSError.
>OSError: Cannot find data file.
Original error:
[Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/37ab2c42f50d553c1d0ea432baca3e9e11fedea4aeec63a81e6b7e25dd10d4e7/WOS5736/X.txt'
```python
dataset = load_dataset('web_of_science', 'WOS5736')
```
There are 3 data instances and they all don't work. 'WOS5736', 'WOS11967', 'WOS46985'
datasets 1.6.2
python 3.7.10
Ubuntu 18.04.5 LTS | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2337/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2336/comments | https://api.github.com/repos/huggingface/datasets/issues/2336/events | https://github.com/huggingface/datasets/pull/2336 | 881,298,783 | MDExOlB1bGxSZXF1ZXN0NjM0ODk1OTU5 | 2,336 | Fix overflow issue in interpolation search | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,507,096,000 | 1,620,653,347,000 | 1,620,653,172,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2336",
"html_url": "https://github.com/huggingface/datasets/pull/2336",
"diff_url": "https://github.com/huggingface/datasets/pull/2336.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2336.patch",
"merged_at": 1620653172000
} | Fixes #2335
More info about this error can be found [here](https://stackoverflow.com/questions/53239890/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc/53240100). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2336/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2335/comments | https://api.github.com/repos/huggingface/datasets/issues/2335/events | https://github.com/huggingface/datasets/issues/2335 | 881,291,887 | MDU6SXNzdWU4ODEyOTE4ODc= | 2,335 | Index error in Dataset.map | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,506,697,000 | 1,620,653,172,000 | 1,620,653,172,000 | CONTRIBUTOR | null | null | null | The following code, if executed on master, raises an IndexError (due to overflow):
```python
>>> from datasets import *
>>> d = load_dataset("bookcorpus", split="train")
Reusing dataset bookcorpus (C:\Users\Mario\.cache\huggingface\datasets\bookcorpus\plain_text\1.0.0\44662c4a114441c35200992bea923b170e6f13f2f0beb7c14e43759cec498700)
2021-05-08 21:23:46.859818: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
>>> d.map(lambda ex: ex)
0%|▎ | 289430/74004228 [00:13<58:41, 20935.33ex/s]c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py:84: RuntimeWarning: overflow encountered in int_scalars
k = i + ((j - i) * (x - arr[i]) // (arr[j] - arr[i]))
0%|▎ | 290162/74004228 [00:13<59:11, 20757.23ex/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1498, in map
new_fingerprint=new_fingerprint,
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 174, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\fingerprint.py", line 340, in wrapper
out = func(self, *args, **kwargs)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1799, in _map_single
for i, example in enumerate(pbar):
File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\site-packages\tqdm\std.py", line 1133, in __iter__
for obj in iterable:
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1145, in __iter__
format_kwargs=format_kwargs,
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1337, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 368, in query_table
pa_subtable = _query_table(table, key)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 79, in _query_table
return table.fast_slice(key % table.num_rows, 1)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 128, in fast_slice
i = _interpolation_search(self._offsets, offset)
File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 91, in _interpolation_search
raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.")
IndexError: Invalid query '290162' for size 74004228.
```
Tested on Windows, can run on Linux if needed.
EDIT:
It seems like for this to happen, the default NumPy dtype has to be np.int32. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2335/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2334/comments | https://api.github.com/repos/huggingface/datasets/issues/2334/events | https://github.com/huggingface/datasets/pull/2334 | 879,810,107 | MDExOlB1bGxSZXF1ZXN0NjMzNTAzNTEw | 2,334 | Updating the DART file checksums in GEM | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,424,424,000 | 1,620,425,890,000 | 1,620,425,890,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2334",
"html_url": "https://github.com/huggingface/datasets/pull/2334",
"diff_url": "https://github.com/huggingface/datasets/pull/2334.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2334.patch",
"merged_at": 1620425890000
} | The DART files were just updated on the source GitHub
https://github.com/Yale-LILY/dart/commit/34b3c872da4811523e334f1631e54ca8105dffab | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2334/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2333 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2333/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2333/comments | https://api.github.com/repos/huggingface/datasets/issues/2333/events | https://github.com/huggingface/datasets/pull/2333 | 879,214,067 | MDExOlB1bGxSZXF1ZXN0NjMyOTUwNzIy | 2,333 | Fix duplicate keys | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,401,288,000 | 1,620,510,451,000 | 1,620,403,028,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2333",
"html_url": "https://github.com/huggingface/datasets/pull/2333",
"diff_url": "https://github.com/huggingface/datasets/pull/2333.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2333.patch",
"merged_at": 1620403028000
} | As noticed in https://github.com/huggingface/datasets/pull/2245, many datasets yield duplicate keys.
Most of the time it was because the counter used for ids were reset at each new data file. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2333/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2333/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2332/comments | https://api.github.com/repos/huggingface/datasets/issues/2332/events | https://github.com/huggingface/datasets/pull/2332 | 879,041,608 | MDExOlB1bGxSZXF1ZXN0NjMyNzk1NDE4 | 2,332 | Add note about indices mapping in save_to_disk docstring | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,395,382,000 | 1,620,408,048,000 | 1,620,408,048,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2332",
"html_url": "https://github.com/huggingface/datasets/pull/2332",
"diff_url": "https://github.com/huggingface/datasets/pull/2332.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2332.patch",
"merged_at": 1620408048000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2332/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2331/comments | https://api.github.com/repos/huggingface/datasets/issues/2331/events | https://github.com/huggingface/datasets/issues/2331 | 879,031,427 | MDU6SXNzdWU4NzkwMzE0Mjc= | 2,331 | Add Topical-Chat | {
"login": "ktangri",
"id": 22266659,
"node_id": "MDQ6VXNlcjIyMjY2NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/22266659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ktangri",
"html_url": "https://github.com/ktangri",
"followers_url": "https://api.github.com/users/ktangri/followers",
"following_url": "https://api.github.com/users/ktangri/following{/other_user}",
"gists_url": "https://api.github.com/users/ktangri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ktangri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ktangri/subscriptions",
"organizations_url": "https://api.github.com/users/ktangri/orgs",
"repos_url": "https://api.github.com/users/ktangri/repos",
"events_url": "https://api.github.com/users/ktangri/events{/privacy}",
"received_events_url": "https://api.github.com/users/ktangri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,395,039,000 | 1,620,395,039,000 | null | NONE | null | null | null | ## Adding a Dataset
- **Name:** Topical-Chat
- **Description:** a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don’t have explicitly defined roles
- **Paper:** https://www.isca-speech.org/archive/Interspeech_2019/pdfs/3079.pdf
- **Data:** https://github.com/alexa/Topical-Chat
- **Motivation:** Good quality, knowledge-grounded dataset that spans a broad range of topics
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2331/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2330 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2330/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2330/comments | https://api.github.com/repos/huggingface/datasets/issues/2330/events | https://github.com/huggingface/datasets/issues/2330 | 878,490,927 | MDU6SXNzdWU4Nzg0OTA5Mjc= | 2,330 | Allow passing `desc` to `tqdm` in `Dataset.map()` | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,366,774,000 | 1,622,041,161,000 | 1,622,041,161,000 | CONTRIBUTOR | null | null | null | It's normal to have many `map()` calls, and some of them can take a few minutes,
it would be nice to have a description on the progress bar.
Alternative solution:
Print the description before/after the `map()` call. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2330/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2330/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2329/comments | https://api.github.com/repos/huggingface/datasets/issues/2329/events | https://github.com/huggingface/datasets/pull/2329 | 877,924,198 | MDExOlB1bGxSZXF1ZXN0NjMxODA3MTk0 | 2,329 | Add cache dir for in-memory datasets | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,329,732,000 | 1,623,181,608,000 | 1,623,179,206,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2329",
"html_url": "https://github.com/huggingface/datasets/pull/2329",
"diff_url": "https://github.com/huggingface/datasets/pull/2329.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2329.patch",
"merged_at": null
} | Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.
Should fix #2322 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2329/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2328/comments | https://api.github.com/repos/huggingface/datasets/issues/2328/events | https://github.com/huggingface/datasets/pull/2328 | 877,673,896 | MDExOlB1bGxSZXF1ZXN0NjMxNTg2MzU2 | 2,328 | Add Matthews/Pearson/Spearman correlation metrics | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,317,367,000 | 1,620,320,290,000 | 1,620,320,290,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2328",
"html_url": "https://github.com/huggingface/datasets/pull/2328",
"diff_url": "https://github.com/huggingface/datasets/pull/2328.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2328.patch",
"merged_at": 1620320290000
} | Added three metrics:
- The Matthews correlation coefficient (from sklearn)
- The Pearson correlation coefficient (from scipy)
- The Spearman correlation coefficient (from scipy)
cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2328/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2327/comments | https://api.github.com/repos/huggingface/datasets/issues/2327/events | https://github.com/huggingface/datasets/issues/2327 | 877,565,831 | MDU6SXNzdWU4Nzc1NjU4MzE= | 2,327 | A syntax error in example | {
"login": "mymusise",
"id": 6883957,
"node_id": "MDQ6VXNlcjY4ODM5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mymusise",
"html_url": "https://github.com/mymusise",
"followers_url": "https://api.github.com/users/mymusise/followers",
"following_url": "https://api.github.com/users/mymusise/following{/other_user}",
"gists_url": "https://api.github.com/users/mymusise/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mymusise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mymusise/subscriptions",
"organizations_url": "https://api.github.com/users/mymusise/orgs",
"repos_url": "https://api.github.com/users/mymusise/repos",
"events_url": "https://api.github.com/users/mymusise/events{/privacy}",
"received_events_url": "https://api.github.com/users/mymusise/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,311,684,000 | 1,621,479,859,000 | 1,621,479,859,000 | NONE | null | null | null | 
Sorry to report with an image, I can't find the template source code of this snippet. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2327/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2326/comments | https://api.github.com/repos/huggingface/datasets/issues/2326/events | https://github.com/huggingface/datasets/pull/2326 | 876,829,254 | MDExOlB1bGxSZXF1ZXN0NjMwODk3MjI4 | 2,326 | Enable auto-download for PAN-X / Wikiann domain in XTREME | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,248,318,000 | 1,620,376,870,000 | 1,620,376,870,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2326",
"html_url": "https://github.com/huggingface/datasets/pull/2326",
"diff_url": "https://github.com/huggingface/datasets/pull/2326.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2326.patch",
"merged_at": 1620376870000
} | This PR replaces the manual download of the `PAN-X.lang` domains with an auto-download from a Dropbox link provided by the Wikiann author. We also add the relevant dummy data for these domains.
While re-generating `dataset_infos.json` I ran into a `KeyError` in the `udpos.Arabic` domain so have included a fix for this as well. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2326/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2326/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2325/comments | https://api.github.com/repos/huggingface/datasets/issues/2325/events | https://github.com/huggingface/datasets/pull/2325 | 876,653,121 | MDExOlB1bGxSZXF1ZXN0NjMwNzU1MzIx | 2,325 | Added the HLGD dataset | {
"login": "tingofurro",
"id": 2609265,
"node_id": "MDQ6VXNlcjI2MDkyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2609265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tingofurro",
"html_url": "https://github.com/tingofurro",
"followers_url": "https://api.github.com/users/tingofurro/followers",
"following_url": "https://api.github.com/users/tingofurro/following{/other_user}",
"gists_url": "https://api.github.com/users/tingofurro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tingofurro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tingofurro/subscriptions",
"organizations_url": "https://api.github.com/users/tingofurro/orgs",
"repos_url": "https://api.github.com/users/tingofurro/repos",
"events_url": "https://api.github.com/users/tingofurro/events{/privacy}",
"received_events_url": "https://api.github.com/users/tingofurro/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,233,609,000 | 1,620,831,313,000 | 1,620,828,998,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2325",
"html_url": "https://github.com/huggingface/datasets/pull/2325",
"diff_url": "https://github.com/huggingface/datasets/pull/2325.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2325.patch",
"merged_at": 1620828998000
} | Added the Headline Grouping Dataset (HLGD), from the NAACL2021 paper: News Headline Grouping as a Challenging NLU Task
Dataset Link: https://github.com/tingofurro/headline_grouping
Paper link: https://people.eecs.berkeley.edu/~phillab/pdfs/NAACL2021_HLG.pdf | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2325/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2324 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2324/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2324/comments | https://api.github.com/repos/huggingface/datasets/issues/2324/events | https://github.com/huggingface/datasets/pull/2324 | 876,602,064 | MDExOlB1bGxSZXF1ZXN0NjMwNzE1NTQz | 2,324 | Create Audio feature | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/8",
"html_url": "https://github.com/huggingface/datasets/milestone/8",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels",
"id": 6968069,
"node_id": "MI_kwDODunzps4AalMF",
"number": 8,
"title": "1.12",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 4,
"closed_issues": 2,
"state": "open",
"created_at": 1626881696000,
"updated_at": 1634120793000,
"due_on": 1630306800000,
"closed_at": null
} | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,230,122,000 | 1,634,120,793,000 | 1,634,120,793,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2324",
"html_url": "https://github.com/huggingface/datasets/pull/2324",
"diff_url": "https://github.com/huggingface/datasets/pull/2324.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2324.patch",
"merged_at": 1634120793000
} | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user uses only text datasets, and there is no need for them for additional package requirements for audio/image if they do not need them.
- For tests, I require audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, which require an additional library to be installed with the distribution package manager
- I also require `pytest-datadir`, which allow to have (audio) data files for tests
- The audio data contain: array and sample_rate.
- The array is reshaped as 1D array (expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`.
## Requirements Specification
- Access example with audio loading and resampling:
```python
ds[0]["audio"]
```
- Map with audio loading & resampling:
```python
def preprocess(batch):
batch["input_values"] = processor(batch["audio"]).input_values
return batch
ds = ds.map(preprocess)
```
- Map without audio loading and resampling:
```python
def preprocess(batch):
batch["labels"] = processor(batch["target_text"]).input_values
return batch
ds = ds.map(preprocess)
```
- Additional requirement specification (see https://github.com/huggingface/datasets/pull/2324#pullrequestreview-768864998): Cast audio column to change sampling sate:
```python
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2324/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2324/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2323/comments | https://api.github.com/repos/huggingface/datasets/issues/2323/events | https://github.com/huggingface/datasets/issues/2323 | 876,438,507 | MDU6SXNzdWU4NzY0Mzg1MDc= | 2,323 | load_dataset("timit_asr") gives back duplicates of just one sample text | {
"login": "ekeleshian",
"id": 33647474,
"node_id": "MDQ6VXNlcjMzNjQ3NDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/33647474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekeleshian",
"html_url": "https://github.com/ekeleshian",
"followers_url": "https://api.github.com/users/ekeleshian/followers",
"following_url": "https://api.github.com/users/ekeleshian/following{/other_user}",
"gists_url": "https://api.github.com/users/ekeleshian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekeleshian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekeleshian/subscriptions",
"organizations_url": "https://api.github.com/users/ekeleshian/orgs",
"repos_url": "https://api.github.com/users/ekeleshian/repos",
"events_url": "https://api.github.com/users/ekeleshian/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekeleshian/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,220,488,000 | 1,620,383,550,000 | 1,620,383,550,000 | NONE | null | null | null | ## Describe the bug
When you look up on key ["train"] and then ['text'], you get back a list with just one sentence duplicated 4620 times. Namely, the sentence "Would such an act of refusal be useful?". Similarly when you look up ['test'] and then ['text'], the list is one sentence repeated "The bungalow was pleasantly situated near the shore." 1680 times.
I tried to work around the issue by downgrading to datasets version 1.3.0, inspired by [this post](https://www.gitmemory.com/issue/huggingface/datasets/2052/798904836) and removing the entire huggingface directory from ~/.cache, but I still get the same issue.
## Steps to reproduce the bug
```python
from datasets import load_dataset
timit = load_dataset("timit_asr")
print(timit['train']['text'])
print(timit['test']['text'])
```
## Expected Result
Rows of diverse text, like how it is shown in the [wav2vec2.0 tutorial](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb)
<img width="485" alt="Screen Shot 2021-05-05 at 9 09 57 AM" src="https://user-images.githubusercontent.com/33647474/117146094-d9b77f00-ad81-11eb-8306-f281850c127a.png">
## Actual results
Rows of repeated text.
<img width="319" alt="Screen Shot 2021-05-05 at 9 11 53 AM" src="https://user-images.githubusercontent.com/33647474/117146231-f8b61100-ad81-11eb-834a-fc10410b0c9c.png">
## Versions
- Datasets: 1.3.0
- Python: 3.9.1
- Platform: macOS-11.2.1-x86_64-i386-64bit}
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2323/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2322/comments | https://api.github.com/repos/huggingface/datasets/issues/2322/events | https://github.com/huggingface/datasets/issues/2322 | 876,383,853 | MDU6SXNzdWU4NzYzODM4NTM= | 2,322 | Calls to map are not cached. | {
"login": "villmow",
"id": 2743060,
"node_id": "MDQ6VXNlcjI3NDMwNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/villmow",
"html_url": "https://github.com/villmow",
"followers_url": "https://api.github.com/users/villmow/followers",
"following_url": "https://api.github.com/users/villmow/following{/other_user}",
"gists_url": "https://api.github.com/users/villmow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/villmow/subscriptions",
"organizations_url": "https://api.github.com/users/villmow/orgs",
"repos_url": "https://api.github.com/users/villmow/repos",
"events_url": "https://api.github.com/users/villmow/events{/privacy}",
"received_events_url": "https://api.github.com/users/villmow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,216,687,000 | 1,623,179,402,000 | 1,623,179,301,000 | NONE | null | null | null | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])
return samples
# first call
x = sst.map(foo, batched=True, with_indices=True, num_proc=2)
print('\n'*3, "#" * 30, '\n'*3)
# second call
y = sst.map(foo, batched=True, with_indices=True, num_proc=2)
# print version
import sys
import platform
print(f"""
- Datasets: {datasets.__version__}
- Python: {sys.version}
- Platform: {platform.platform()}
""")
```
## Actual results
This code prints the following output for me:
```bash
No config specified, defaulting to: sst/default
Reusing dataset sst (/home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/b8a7889ef01c5d3ae8c379b84cc4080f8aad3ac2bc538701cbe0ac6416fb76ff)
#0: 0%| | 0/5 [00:00<?, ?ba/s]
#1: 0%| | 0/5 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281]
executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]
executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281]
executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009]
executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281]
executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009]
#0: 100%|██████████| 5/5 [00:00<00:00, 59.85ba/s]
executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281]
#1: 100%|██████████| 5/5 [00:00<00:00, 60.85ba/s]
#0: 0%| | 0/1 [00:00<?, ?ba/s]
#1: 0%| | 0/1 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
#0: 100%|██████████| 1/1 [00:00<00:00, 69.32ba/s]
executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560]
#1: 100%|██████████| 1/1 [00:00<00:00, 70.93ba/s]
#0: 0%| | 0/2 [00:00<?, ?ba/s]
#1: 0%| | 0/2 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
#0: 100%|██████████| 2/2 [00:00<00:00, 63.25ba/s]
executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114]
executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114]
#1: 100%|██████████| 2/2 [00:00<00:00, 57.69ba/s]
##############################
#0: 0%| | 0/5 [00:00<?, ?ba/s]
#1: 0%| | 0/5 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281]
executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]
executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281]
executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009]
executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009]
#0: 100%|██████████| 5/5 [00:00<00:00, 58.10ba/s]
executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281]
executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281]
#1: 100%|██████████| 5/5 [00:00<00:00, 57.19ba/s]
#0: 0%| | 0/1 [00:00<?, ?ba/s]
#1: 0%| | 0/1 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
#0: 100%|██████████| 1/1 [00:00<00:00, 60.10ba/s]
executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560]
#1: 100%|██████████| 1/1 [00:00<00:00, 53.82ba/s]
#0: 0%| | 0/2 [00:00<?, ?ba/s]
#1: 0%| | 0/2 [00:00<?, ?ba/s]
executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]
executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114]
#0: 100%|██████████| 2/2 [00:00<00:00, 72.76ba/s]
executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114]
#1: 100%|██████████| 2/2 [00:00<00:00, 71.55ba/s]
- Datasets: 1.6.1
- Python: 3.8.3 (default, May 19 2020, 18:47:26)
[GCC 7.3.0]
- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10
```
## Expected results
Caching should work.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2322/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2321/comments | https://api.github.com/repos/huggingface/datasets/issues/2321/events | https://github.com/huggingface/datasets/pull/2321 | 876,304,364 | MDExOlB1bGxSZXF1ZXN0NjMwNDc3NDUy | 2,321 | Set encoding in OSCAR dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,210,423,000 | 1,620,211,855,000 | 1,620,211,855,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2321",
"html_url": "https://github.com/huggingface/datasets/pull/2321",
"diff_url": "https://github.com/huggingface/datasets/pull/2321.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2321.patch",
"merged_at": 1620211854000
} | Set explicit `utf-8` encoding in OSCAR dataset, to avoid using the system default `cp1252` on Windows platforms.
Fix #2319. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2321/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2320/comments | https://api.github.com/repos/huggingface/datasets/issues/2320/events | https://github.com/huggingface/datasets/pull/2320 | 876,257,026 | MDExOlB1bGxSZXF1ZXN0NjMwNDM5NjI5 | 2,320 | Set default name in init_dynamic_modules | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,207,003,000 | 1,620,287,874,000 | 1,620,287,874,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2320",
"html_url": "https://github.com/huggingface/datasets/pull/2320",
"diff_url": "https://github.com/huggingface/datasets/pull/2320.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2320.patch",
"merged_at": 1620287874000
} | Set default value for the name of dynamic modules.
Close #2318. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2320/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2319/comments | https://api.github.com/repos/huggingface/datasets/issues/2319/events | https://github.com/huggingface/datasets/issues/2319 | 876,251,376 | MDU6SXNzdWU4NzYyNTEzNzY= | 2,319 | UnicodeDecodeError for OSCAR (Afrikaans) | {
"login": "sgraaf",
"id": 8904453,
"node_id": "MDQ6VXNlcjg5MDQ0NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8904453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgraaf",
"html_url": "https://github.com/sgraaf",
"followers_url": "https://api.github.com/users/sgraaf/followers",
"following_url": "https://api.github.com/users/sgraaf/following{/other_user}",
"gists_url": "https://api.github.com/users/sgraaf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgraaf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgraaf/subscriptions",
"organizations_url": "https://api.github.com/users/sgraaf/orgs",
"repos_url": "https://api.github.com/users/sgraaf/repos",
"events_url": "https://api.github.com/users/sgraaf/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgraaf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,206,572,000 | 1,620,212,251,000 | 1,620,211,855,000 | NONE | null | null | null | ## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
```
## Expected results
Anything but an error, really.
## Actual results
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
Downloading: 14.7kB [00:00, 4.91MB/s]
Downloading: 3.07MB [00:00, 32.6MB/s]
Downloading and preparing dataset oscar/unshuffled_deduplicated_af (download: 62.93 MiB, generated: 163.38 MiB, post-processed: Unknown size, total: 226.32 MiB) to C:\Users\sgraaf\.cache\huggingface\datasets\oscar\unshuffled_deduplicated_af\1.0.0\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464...
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81.0/81.0 [00:00<00:00, 40.5kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 66.0M/66.0M [00:18<00:00, 3.50MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\load.py", line 745, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 574, in download_and_prepare
self._download_and_prepare(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 652, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 979, in _prepare_split
for key, record in utils.tqdm(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\tqdm\std.py", line 1133, in __iter__
for obj in iterable:
File "C:\Users\sgraaf\.cache\huggingface\modules\datasets_modules\datasets\oscar\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464\oscar.py", line 359, in _generate_examples
for line in f:
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 7454: character maps to <undefined>
```
## Versions
Paste the output of the following code:
```python
import datasets
import sys
import platform
print(f"""
- Datasets: {datasets.__version__}
- Python: {sys.version}
- Platform: {platform.platform()}
""")
```
- Datasets: 1.6.2
- Python: 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)]
- Platform: Windows-10-10.0.19041-SP0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2319/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2318/comments | https://api.github.com/repos/huggingface/datasets/issues/2318/events | https://github.com/huggingface/datasets/issues/2318 | 876,212,460 | MDU6SXNzdWU4NzYyMTI0NjA= | 2,318 | [api request] API to obtain "dataset_module" dynamic path? | {
"login": "richardliaw",
"id": 4529381,
"node_id": "MDQ6VXNlcjQ1MjkzODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4529381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richardliaw",
"html_url": "https://github.com/richardliaw",
"followers_url": "https://api.github.com/users/richardliaw/followers",
"following_url": "https://api.github.com/users/richardliaw/following{/other_user}",
"gists_url": "https://api.github.com/users/richardliaw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richardliaw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richardliaw/subscriptions",
"organizations_url": "https://api.github.com/users/richardliaw/orgs",
"repos_url": "https://api.github.com/users/richardliaw/repos",
"events_url": "https://api.github.com/users/richardliaw/events{/privacy}",
"received_events_url": "https://api.github.com/users/richardliaw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,204,048,000 | 1,620,290,745,000 | 1,620,287,874,000 | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
This is an awesome library.
It seems like the dynamic module path in this library has broken some of hyperparameter tuning functionality: https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34
This is because Ray will spawn new processes, and each process will load modules by path. However, we need to explicitly inform Ray to load the right modules, or else it will error upon import.
I'd like an API to obtain the dynamic paths. This will allow us to support this functionality in this awesome library while being future proof.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
`datasets.get_dynamic_paths -> List[str]` will be sufficient for my use case.
By offering this API, we will be able to address the following issues (by patching the ray integration sufficiently):
https://github.com/huggingface/blog/issues/106
https://github.com/huggingface/transformers/issues/11565
https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34
https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/35
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2318/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2317/comments | https://api.github.com/repos/huggingface/datasets/issues/2317/events | https://github.com/huggingface/datasets/pull/2317 | 875,767,318 | MDExOlB1bGxSZXF1ZXN0NjMwMDQxNzc4 | 2,317 | Fix incorrect version specification for the pyarrow package | {
"login": "cemilcengiz",
"id": 32267027,
"node_id": "MDQ6VXNlcjMyMjY3MDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/32267027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cemilcengiz",
"html_url": "https://github.com/cemilcengiz",
"followers_url": "https://api.github.com/users/cemilcengiz/followers",
"following_url": "https://api.github.com/users/cemilcengiz/following{/other_user}",
"gists_url": "https://api.github.com/users/cemilcengiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cemilcengiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cemilcengiz/subscriptions",
"organizations_url": "https://api.github.com/users/cemilcengiz/orgs",
"repos_url": "https://api.github.com/users/cemilcengiz/repos",
"events_url": "https://api.github.com/users/cemilcengiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/cemilcengiz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,156,620,000 | 1,620,209,356,000 | 1,620,206,518,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2317",
"html_url": "https://github.com/huggingface/datasets/pull/2317",
"diff_url": "https://github.com/huggingface/datasets/pull/2317.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2317.patch",
"merged_at": 1620206518000
} | This PR addresses the bug in the pyarrow version specification, which is detailed in #2316 .
Simply, I put a comma between the version bounds.
Fix #2316. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2317/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2317/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2316/comments | https://api.github.com/repos/huggingface/datasets/issues/2316/events | https://github.com/huggingface/datasets/issues/2316 | 875,756,353 | MDU6SXNzdWU4NzU3NTYzNTM= | 2,316 | Incorrect version specification for pyarrow | {
"login": "cemilcengiz",
"id": 32267027,
"node_id": "MDQ6VXNlcjMyMjY3MDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/32267027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cemilcengiz",
"html_url": "https://github.com/cemilcengiz",
"followers_url": "https://api.github.com/users/cemilcengiz/followers",
"following_url": "https://api.github.com/users/cemilcengiz/following{/other_user}",
"gists_url": "https://api.github.com/users/cemilcengiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cemilcengiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cemilcengiz/subscriptions",
"organizations_url": "https://api.github.com/users/cemilcengiz/orgs",
"repos_url": "https://api.github.com/users/cemilcengiz/repos",
"events_url": "https://api.github.com/users/cemilcengiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/cemilcengiz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,155,711,000 | 1,620,209,403,000 | 1,620,209,403,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
The pyarrow dependency is incorrectly specified in setup.py file, in [this line](https://github.com/huggingface/datasets/blob/3a3e5a4da20bfcd75f8b6a6869b240af8feccc12/setup.py#L77).
Also as a snippet:
```python
"pyarrow>=1.0.0<4.0.0",
```
## Steps to reproduce the bug
```bash
pip install "pyarrow>=1.0.0<4.0.0"
```
## Expected results
It is expected to get a pyarrow version between 1.0.0 (inclusive) and 4.0.0 (exclusive).
## Actual results
pip ignores the specified versions since there is a missing comma between the lower and upper limits. Therefore, pip installs the latest pyarrow version from PYPI, which is 4.0.0.
This is especially problematic since "conda env export" fails due to incorrect version specification. Here is the conda error as well:
```bash
conda env export
InvalidVersionSpec: Invalid version '1.0.0<4.0.0': invalid character(s)
```
## Fix suggestion
Put a comma between the version limits which means replacing the line in setup.py file with the following:
```python
"pyarrow>=1.0.0,<4.0.0",
```
## Versions
Paste the output of the following code:
```python
- Datasets: 1.6.2
- Python: 3.7.10 (default, Feb 26 2021, 18:47:35)
[GCC 7.3.0]
- Platform: Linux-5.4.0-42-generic-x86_64-with-debian-buster-sid
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2316/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2316/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2315/comments | https://api.github.com/repos/huggingface/datasets/issues/2315/events | https://github.com/huggingface/datasets/pull/2315 | 875,742,200 | MDExOlB1bGxSZXF1ZXN0NjMwMDIyMDYy | 2,315 | Datasets cli improvements | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,154,511,000 | 1,620,664,611,000 | 1,620,664,610,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2315",
"html_url": "https://github.com/huggingface/datasets/pull/2315",
"diff_url": "https://github.com/huggingface/datasets/pull/2315.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2315.patch",
"merged_at": 1620664610000
} | This PR:
* replaces the code from the `bug_report.md` that was used to get relevant system info with a dedicated command (a more elegant approach than copy-pasting the code IMO)
* removes the `download` command (copied from the transformers repo?)
* adds missing help messages to the cli commands
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2315/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2314/comments | https://api.github.com/repos/huggingface/datasets/issues/2314/events | https://github.com/huggingface/datasets/pull/2314 | 875,729,271 | MDExOlB1bGxSZXF1ZXN0NjMwMDExODc4 | 2,314 | Minor refactor prepare_module | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,153,446,000 | 1,634,116,054,000 | 1,634,116,054,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2314",
"html_url": "https://github.com/huggingface/datasets/pull/2314",
"diff_url": "https://github.com/huggingface/datasets/pull/2314.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2314.patch",
"merged_at": null
} | Start to refactor `prepare_module` to try to decouple functionality.
This PR does:
- extract function `_initialize_dynamic_modules_namespace_package`
- extract function `_find_module_in_github_or_s3`
- some renaming of variables
- use of f-strings | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2314/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2313/comments | https://api.github.com/repos/huggingface/datasets/issues/2313/events | https://github.com/huggingface/datasets/pull/2313 | 875,475,367 | MDExOlB1bGxSZXF1ZXN0NjI5ODEwNTc4 | 2,313 | Remove unused head_hf_s3 function | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,135,726,000 | 1,620,379,902,000 | 1,620,379,902,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2313",
"html_url": "https://github.com/huggingface/datasets/pull/2313",
"diff_url": "https://github.com/huggingface/datasets/pull/2313.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2313.patch",
"merged_at": null
} | Currently, the function `head_hf_s3` is not used:
- neither its returned result is used
- nor it raises any exception, as exceptions are catched and returned (not raised)
This PR removes it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2313/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2312/comments | https://api.github.com/repos/huggingface/datasets/issues/2312/events | https://github.com/huggingface/datasets/pull/2312 | 875,435,726 | MDExOlB1bGxSZXF1ZXN0NjI5Nzc4NjUz | 2,312 | Add rename_columnS method | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,133,073,000 | 1,620,135,793,000 | 1,620,135,792,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2312",
"html_url": "https://github.com/huggingface/datasets/pull/2312",
"diff_url": "https://github.com/huggingface/datasets/pull/2312.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2312.patch",
"merged_at": 1620135792000
} | Cherry-picked from #2255 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2312/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2311/comments | https://api.github.com/repos/huggingface/datasets/issues/2311/events | https://github.com/huggingface/datasets/pull/2311 | 875,262,208 | MDExOlB1bGxSZXF1ZXN0NjI5NjQwNTMx | 2,311 | Add SLR52, SLR53 and SLR54 to OpenSLR | {
"login": "cahya-wirawan",
"id": 7669893,
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cahya-wirawan",
"html_url": "https://github.com/cahya-wirawan",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,119,283,000 | 1,620,381,055,000 | 1,620,381,055,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2311",
"html_url": "https://github.com/huggingface/datasets/pull/2311",
"diff_url": "https://github.com/huggingface/datasets/pull/2311.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2311.patch",
"merged_at": 1620381055000
} | Add large speech datasets for Sinhala, Bengali and Nepali. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2311/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2310/comments | https://api.github.com/repos/huggingface/datasets/issues/2310/events | https://github.com/huggingface/datasets/pull/2310 | 875,096,051 | MDExOlB1bGxSZXF1ZXN0NjI5NTEwNTg5 | 2,310 | Update README.md | {
"login": "cryoff",
"id": 15029054,
"node_id": "MDQ6VXNlcjE1MDI5MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/15029054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cryoff",
"html_url": "https://github.com/cryoff",
"followers_url": "https://api.github.com/users/cryoff/followers",
"following_url": "https://api.github.com/users/cryoff/following{/other_user}",
"gists_url": "https://api.github.com/users/cryoff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cryoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cryoff/subscriptions",
"organizations_url": "https://api.github.com/users/cryoff/orgs",
"repos_url": "https://api.github.com/users/cryoff/repos",
"events_url": "https://api.github.com/users/cryoff/events{/privacy}",
"received_events_url": "https://api.github.com/users/cryoff/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,103,081,000 | 1,620,110,159,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2310",
"html_url": "https://github.com/huggingface/datasets/pull/2310",
"diff_url": "https://github.com/huggingface/datasets/pull/2310.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2310.patch",
"merged_at": null
} | Provides description of data instances and dataset features | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2310/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2309/comments | https://api.github.com/repos/huggingface/datasets/issues/2309/events | https://github.com/huggingface/datasets/pull/2309 | 874,644,990 | MDExOlB1bGxSZXF1ZXN0NjI5MTU4NjQx | 2,309 | Fix conda release | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,053,579,000 | 1,620,057,677,000 | 1,620,057,677,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2309",
"html_url": "https://github.com/huggingface/datasets/pull/2309",
"diff_url": "https://github.com/huggingface/datasets/pull/2309.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2309.patch",
"merged_at": 1620057677000
} | There were a few issues with conda releases (they've been failing for a while now).
To fix this I had to:
- add the --single-version-externally-managed tag to the build stage (suggestion from [here](https://stackoverflow.com/a/64825075))
- set the python version of the conda build stage to 3.8 since 3.9 isn't supported
- sync the evrsion requirement of `huggingface_hub`
With these changes I'm working on uploading all missing versions until 1.6.2 to conda
EDIT: I managed to build and upload all missing versions until 1.6.2 to conda :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2309/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2308 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2308/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2308/comments | https://api.github.com/repos/huggingface/datasets/issues/2308/events | https://github.com/huggingface/datasets/issues/2308 | 874,559,846 | MDU6SXNzdWU4NzQ1NTk4NDY= | 2,308 | Add COCO evaluation metrics | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,620,047,285,000 | 1,622,790,687,000 | null | CONTRIBUTOR | null | null | null | I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in the original repository ([here](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/datasets/coco_eval.py#L22) and [here](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/datasets/panoptic_eval.py#L13) respectively).
Running these in a notebook gives you nice summaries like this:

It would be great if we could import these metrics from the Datasets library, something like this:
```
import datasets
metric = datasets.load_metric('coco')
for model_input, gold_references in evaluation_dataset:
model_predictions = model(model_inputs)
metric.add_batch(predictions=model_predictions, references=gold_references)
final_score = metric.compute()
```
I think this would be great for object detection and semantic/panoptic segmentation in general, not just for DETR. Reproducing results of object detection papers would be way easier.
However, object detection and panoptic segmentation evaluation is a bit more complex than accuracy (it's more like a summary of metrics at different thresholds rather than a single one). I'm not sure how to proceed here, but happy to help making this possible.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2308/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2308/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2302 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2302/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2302/comments | https://api.github.com/repos/huggingface/datasets/issues/2302/events | https://github.com/huggingface/datasets/pull/2302 | 873,961,435 | MDExOlB1bGxSZXF1ZXN0NjI4NjIzMDQ3 | 2,302 | Add SubjQA dataset | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,967,080,000 | 1,620,638,479,000 | 1,620,638,479,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2302",
"html_url": "https://github.com/huggingface/datasets/pull/2302",
"diff_url": "https://github.com/huggingface/datasets/pull/2302.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2302.patch",
"merged_at": 1620638479000
} | Hello datasetters 🙂!
Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance).
I found a bug in the start/end indices that I've proposed a fix for here: https://github.com/megagonlabs/SubjQA/pull/2
Unfortunately, the dataset creators are unresponsive, so for now I am using my fork as the source. Will update the URL if/when the creators respond. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2302/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2302/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2301/comments | https://api.github.com/repos/huggingface/datasets/issues/2301/events | https://github.com/huggingface/datasets/issues/2301 | 873,941,266 | MDU6SXNzdWU4NzM5NDEyNjY= | 2,301 | Unable to setup dev env on Windows | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,961,642,000 | 1,620,055,081,000 | 1,620,055,054,000 | CONTRIBUTOR | null | null | null | Hi
I tried installing the `".[dev]"` version on Windows 10 after cloning.
Here is the error I'm facing:
```bat
(env) C:\testing\datasets>pip install -e ".[dev]"
Obtaining file:///C:/testing/datasets
Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.19.5)
Collecting pyarrow>=0.17.1
Using cached pyarrow-4.0.0-cp37-cp37m-win_amd64.whl (13.3 MB)
Requirement already satisfied: dill in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.3.1.1)
Collecting pandas
Using cached pandas-1.2.4-cp37-cp37m-win_amd64.whl (9.1 MB)
Requirement already satisfied: requests>=2.19.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2.25.1)
Requirement already satisfied: tqdm<4.50.0,>=4.27 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.49.0)
Requirement already satisfied: xxhash in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2.0.2)
Collecting multiprocess
Using cached multiprocess-0.70.11.1-py37-none-any.whl (108 kB)
Requirement already satisfied: fsspec in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2021.4.0)
Collecting huggingface_hub<0.1.0
Using cached huggingface_hub-0.0.8-py3-none-any.whl (34 kB)
Requirement already satisfied: importlib_metadata in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.0.1)
Requirement already satisfied: absl-py in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.12.0)
Requirement already satisfied: pytest in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (6.2.3)
Collecting pytest-xdist
Using cached pytest_xdist-2.2.1-py3-none-any.whl (37 kB)
Collecting apache-beam>=2.24.0
Using cached apache_beam-2.29.0-cp37-cp37m-win_amd64.whl (3.7 MB)
Collecting elasticsearch
Using cached elasticsearch-7.12.1-py2.py3-none-any.whl (339 kB)
Requirement already satisfied: boto3==1.16.43 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.16.43)
Requirement already satisfied: botocore==1.19.43 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.19.43)
Collecting moto[s3]==1.3.16
Using cached moto-1.3.16-py2.py3-none-any.whl (879 kB)
Collecting rarfile>=4.0
Using cached rarfile-4.0-py3-none-any.whl (28 kB)
Collecting tensorflow>=2.3
Using cached tensorflow-2.4.1-cp37-cp37m-win_amd64.whl (370.7 MB)
Requirement already satisfied: torch in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.8.1)
Requirement already satisfied: transformers in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.5.1)
Collecting bs4
Using cached bs4-0.0.1-py3-none-any.whl
Collecting conllu
Using cached conllu-4.4-py2.py3-none-any.whl (15 kB)
Collecting langdetect
Using cached langdetect-1.0.8-py3-none-any.whl
Collecting lxml
Using cached lxml-4.6.3-cp37-cp37m-win_amd64.whl (3.5 MB)
Collecting mwparserfromhell
Using cached mwparserfromhell-0.6-cp37-cp37m-win_amd64.whl (101 kB)
Collecting nltk
Using cached nltk-3.6.2-py3-none-any.whl (1.5 MB)
Collecting openpyxl
Using cached openpyxl-3.0.7-py2.py3-none-any.whl (243 kB)
Collecting py7zr
Using cached py7zr-0.15.2-py3-none-any.whl (66 kB)
Collecting tldextract
Using cached tldextract-3.1.0-py2.py3-none-any.whl (87 kB)
Collecting zstandard
Using cached zstandard-0.15.2-cp37-cp37m-win_amd64.whl (582 kB)
Collecting bert_score>=0.3.6
Using cached bert_score-0.3.9-py3-none-any.whl (59 kB)
Collecting rouge_score
Using cached rouge_score-0.0.4-py2.py3-none-any.whl (22 kB)
Collecting sacrebleu
Using cached sacrebleu-1.5.1-py3-none-any.whl (54 kB)
Requirement already satisfied: scipy in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.6.3)
Collecting seqeval
Using cached seqeval-1.2.2-py3-none-any.whl
Collecting sklearn
Using cached sklearn-0.0-py2.py3-none-any.whl
Collecting jiwer
Using cached jiwer-2.2.0-py3-none-any.whl (13 kB)
Requirement already satisfied: toml>=0.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.10.2)
Requirement already satisfied: requests_file>=1.5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.5.1)
Requirement already satisfied: texttable>=1.6.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.6.3)
Requirement already satisfied: s3fs>=0.4.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.4.2)
Requirement already satisfied: Werkzeug>=1.0.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.0.1)
Collecting black
Using cached black-21.4b2-py3-none-any.whl (130 kB)
Collecting isort
Using cached isort-5.8.0-py3-none-any.whl (103 kB)
Collecting flake8==3.7.9
Using cached flake8-3.7.9-py2.py3-none-any.whl (69 kB)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.10.0)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.3.7)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (1.26.4)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (2.8.1)
Collecting entrypoints<0.4.0,>=0.3.0
Using cached entrypoints-0.3-py2.py3-none-any.whl (11 kB)
Collecting pyflakes<2.2.0,>=2.1.0
Using cached pyflakes-2.1.1-py2.py3-none-any.whl (59 kB)
Collecting pycodestyle<2.6.0,>=2.5.0
Using cached pycodestyle-2.5.0-py2.py3-none-any.whl (51 kB)
Collecting mccabe<0.7.0,>=0.6.0
Using cached mccabe-0.6.1-py2.py3-none-any.whl (8.6 kB)
Requirement already satisfied: jsondiff>=1.1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.3.0)
Requirement already satisfied: pytz in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2021.1)
Requirement already satisfied: mock in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.0.3)
Requirement already satisfied: MarkupSafe<2.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.1.1)
Requirement already satisfied: python-jose[cryptography]<4.0.0,>=3.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0)
Requirement already satisfied: aws-xray-sdk!=0.96,>=0.93 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.8.0)
Requirement already satisfied: cryptography>=2.3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.7)
Requirement already satisfied: more-itertools in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (8.7.0)
Requirement already satisfied: PyYAML>=5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.4.1)
Requirement already satisfied: boto>=2.36.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.49.0)
Requirement already satisfied: idna<3,>=2.5 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.10)
Requirement already satisfied: sshpubkeys>=3.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.3.1)
Requirement already satisfied: responses>=0.9.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.13.3)
Requirement already satisfied: xmltodict in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.12.0)
Requirement already satisfied: setuptools in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (52.0.0.post20210125)
Requirement already satisfied: Jinja2>=2.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.11.3)
Requirement already satisfied: zipp in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.1)
Requirement already satisfied: six>1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.15.0)
Requirement already satisfied: ecdsa<0.15 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.14.1)
Requirement already satisfied: docker>=2.5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.0.0)
Requirement already satisfied: cfn-lint>=0.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.49.0)
Requirement already satisfied: grpcio<2,>=1.29.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (1.32.0)
Collecting hdfs<3.0.0,>=2.1.0
Using cached hdfs-2.6.0-py3-none-any.whl (33 kB)
Collecting pyarrow>=0.17.1
Using cached pyarrow-3.0.0-cp37-cp37m-win_amd64.whl (12.6 MB)
Collecting fastavro<2,>=0.21.4
Using cached fastavro-1.4.0-cp37-cp37m-win_amd64.whl (394 kB)
Requirement already satisfied: httplib2<0.18.0,>=0.8 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.17.4)
Collecting pymongo<4.0.0,>=3.8.0
Using cached pymongo-3.11.3-cp37-cp37m-win_amd64.whl (382 kB)
Collecting crcmod<2.0,>=1.7
Using cached crcmod-1.7-py3-none-any.whl
Collecting avro-python3!=1.9.2,<1.10.0,>=1.8.1
Using cached avro_python3-1.9.2.1-py3-none-any.whl
Requirement already satisfied: typing-extensions<3.8.0,>=3.7.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.7.4.3)
Requirement already satisfied: future<1.0.0,>=0.18.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.18.2)
Collecting oauth2client<5,>=2.0.1
Using cached oauth2client-4.1.3-py2.py3-none-any.whl (98 kB)
Collecting pydot<2,>=1.2.0
Using cached pydot-1.4.2-py2.py3-none-any.whl (21 kB)
Requirement already satisfied: protobuf<4,>=3.12.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.15.8)
Requirement already satisfied: wrapt in c:\programdata\anaconda3\envs\env\lib\site-packages (from aws-xray-sdk!=0.96,>=0.93->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.12.1)
Collecting matplotlib
Using cached matplotlib-3.4.1-cp37-cp37m-win_amd64.whl (7.1 MB)
Requirement already satisfied: junit-xml~=1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.9)
Requirement already satisfied: jsonpatch in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.32)
Requirement already satisfied: jsonschema~=3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0)
Requirement already satisfied: networkx~=2.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.5.1)
Requirement already satisfied: aws-sam-translator>=1.35.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.35.0)
Requirement already satisfied: cffi>=1.12 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.14.5)
Requirement already satisfied: pycparser in c:\programdata\anaconda3\envs\env\lib\site-packages (from cffi>=1.12->cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.20)
Requirement already satisfied: pywin32==227 in c:\programdata\anaconda3\envs\env\lib\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (227)
Requirement already satisfied: websocket-client>=0.32.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.58.0)
Requirement already satisfied: docopt in c:\programdata\anaconda3\envs\env\lib\site-packages (from hdfs<3.0.0,>=2.1.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.6.2)
Requirement already satisfied: filelock in c:\programdata\anaconda3\envs\env\lib\site-packages (from huggingface_hub<0.1.0->datasets==1.5.0.dev0) (3.0.12)
Requirement already satisfied: pyrsistent>=0.14.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.17.3)
Requirement already satisfied: attrs>=17.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (20.3.0)
Requirement already satisfied: decorator<5,>=4.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from networkx~=2.4->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.4.2)
Requirement already satisfied: rsa>=3.1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (4.7.2)
Requirement already satisfied: pyasn1-modules>=0.0.5 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.2.8)
Requirement already satisfied: pyasn1>=0.1.7 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.4.8)
Requirement already satisfied: pyparsing>=2.1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pydot<2,>=1.2.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (2.4.7)
Requirement already satisfied: certifi>=2017.4.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (2020.12.5)
Requirement already satisfied: chardet<5,>=3.0.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (4.0.0)
Collecting keras-preprocessing~=1.1.2
Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
Requirement already satisfied: termcolor~=1.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (1.1.0)
Requirement already satisfied: tensorboard~=2.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.5.0)
Requirement already satisfied: wheel~=0.35 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (0.36.2)
Collecting opt-einsum~=3.3.0
Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB)
Collecting gast==0.3.3
Using cached gast-0.3.3-py2.py3-none-any.whl (9.7 kB)
Collecting google-pasta~=0.2
Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)
Requirement already satisfied: tensorflow-estimator<2.5.0,>=2.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.4.0)
Collecting astunparse~=1.6.3
Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting flatbuffers~=1.12.0
Using cached flatbuffers-1.12-py2.py3-none-any.whl (15 kB)
Collecting h5py~=2.10.0
Using cached h5py-2.10.0-cp37-cp37m-win_amd64.whl (2.5 MB)
Requirement already satisfied: markdown>=2.6.8 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.3.4)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.8.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.4.4)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.6.0)
Requirement already satisfied: google-auth<2,>=1.6.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.30.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (4.2.2)
Requirement already satisfied: requests-oauthlib>=0.7.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.3.0)
Requirement already satisfied: oauthlib>=3.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.1.0)
Requirement already satisfied: regex!=2019.12.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (2021.4.4)
Requirement already satisfied: tokenizers<0.11,>=0.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (0.10.2)
Requirement already satisfied: sacremoses in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (0.0.45)
Requirement already satisfied: packaging in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (20.9)
Collecting pathspec<1,>=0.8.1
Using cached pathspec-0.8.1-py2.py3-none-any.whl (28 kB)
Requirement already satisfied: click>=7.1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from black->datasets==1.5.0.dev0) (7.1.2)
Collecting appdirs
Using cached appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB)
Collecting mypy-extensions>=0.4.3
Using cached mypy_extensions-0.4.3-py2.py3-none-any.whl (4.5 kB)
Requirement already satisfied: typed-ast>=1.4.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from black->datasets==1.5.0.dev0) (1.4.3)
Collecting beautifulsoup4
Using cached beautifulsoup4-4.9.3-py3-none-any.whl (115 kB)
Requirement already satisfied: soupsieve>1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from beautifulsoup4->bs4->datasets==1.5.0.dev0) (2.2.1)
Collecting python-Levenshtein
Using cached python-Levenshtein-0.12.2.tar.gz (50 kB)
Requirement already satisfied: jsonpointer>=1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonpatch->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.1)
Requirement already satisfied: pillow>=6.2.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (8.2.0)
Requirement already satisfied: cycler>=0.10 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (1.3.1)
Collecting multiprocess
Using cached multiprocess-0.70.11-py3-none-any.whl (98 kB)
Using cached multiprocess-0.70.10.zip (2.4 MB)
Using cached multiprocess-0.70.9-py3-none-any.whl
Requirement already satisfied: joblib in c:\programdata\anaconda3\envs\env\lib\site-packages (from nltk->datasets==1.5.0.dev0) (1.0.1)
Collecting et-xmlfile
Using cached et_xmlfile-1.1.0-py3-none-any.whl (4.7 kB)
Requirement already satisfied: pyzstd<0.15.0,>=0.14.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from py7zr->datasets==1.5.0.dev0) (0.14.4)
Collecting pyppmd<0.13.0,>=0.12.1
Using cached pyppmd-0.12.1-cp37-cp37m-win_amd64.whl (32 kB)
Collecting pycryptodome>=3.6.6
Using cached pycryptodome-3.10.1-cp35-abi3-win_amd64.whl (1.6 MB)
Collecting bcj-cffi<0.6.0,>=0.5.1
Using cached bcj_cffi-0.5.1-cp37-cp37m-win_amd64.whl (21 kB)
Collecting multivolumefile<0.3.0,>=0.2.0
Using cached multivolumefile-0.2.3-py3-none-any.whl (17 kB)
Requirement already satisfied: iniconfig in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.1.1)
Requirement already satisfied: py>=1.8.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.10.0)
Requirement already satisfied: pluggy<1.0.0a1,>=0.12 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (0.13.1)
Requirement already satisfied: atomicwrites>=1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.4.0)
Requirement already satisfied: colorama in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (0.4.4)
Collecting pytest-forked
Using cached pytest_forked-1.3.0-py2.py3-none-any.whl (4.7 kB)
Collecting execnet>=1.1
Using cached execnet-1.8.0-py2.py3-none-any.whl (39 kB)
Requirement already satisfied: apipkg>=1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from execnet>=1.1->pytest-xdist->datasets==1.5.0.dev0) (1.5)
Collecting portalocker==2.0.0
Using cached portalocker-2.0.0-py2.py3-none-any.whl (11 kB)
Requirement already satisfied: scikit-learn>=0.21.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from seqeval->datasets==1.5.0.dev0) (0.24.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from scikit-learn>=0.21.3->seqeval->datasets==1.5.0.dev0) (2.1.0)
Building wheels for collected packages: python-Levenshtein
Building wheel for python-Levenshtein (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\VKC~1\AppData\Local\Temp\pip-wheel-8jh7fm18'
cwd: C:\Users\VKC~1\AppData\Local\Temp\pip-install-ynt_dbm4\python-levenshtein_c02e7e6f9def4629a475349654670ae9\
Complete output (27 lines):
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\StringMatcher.py -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\__init__.py -> build\lib.win-amd64-3.7\Levenshtein
running egg_info
writing python_Levenshtein.egg-info\PKG-INFO
writing dependency_links to python_Levenshtein.egg-info\dependency_links.txt
writing entry points to python_Levenshtein.egg-info\entry_points.txt
writing namespace_packages to python_Levenshtein.egg-info\namespace_packages.txt
writing requirements to python_Levenshtein.egg-info\requires.txt
writing top-level names to python_Levenshtein.egg-info\top_level.txt
reading manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '*pyc' found anywhere in distribution
warning: no previously-included files matching '*so' found anywhere in distribution
warning: no previously-included files matching '.project' found anywhere in distribution
warning: no previously-included files matching '.pydevproject' found anywhere in distribution
writing manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
copying Levenshtein\_levenshtein.c -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\_levenshtein.h -> build\lib.win-amd64-3.7\Levenshtein
running build_ext
building 'Levenshtein._levenshtein' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Failed building wheel for python-Levenshtein
Running setup.py clean for python-Levenshtein
Failed to build python-Levenshtein
Installing collected packages: python-Levenshtein, pytest-forked, pyppmd, pymongo, pyflakes, pydot, pycryptodome, pycodestyle, pyarrow, portalocker, pathspec, pandas, opt-einsum, oauth2client, nltk, mypy-extensions, multivolumefile, multiprocess, moto, mccabe, matplotlib, keras-preprocessing, huggingface-hub, hdfs, h5py, google-pasta, gast, flatbuffers, fastavro, execnet, et-xmlfile, entrypoints, crcmod, beautifulsoup4, bcj-cffi, avro-python3, astunparse, appdirs, zstandard, tldextract, tensorflow, sklearn, seqeval, sacrebleu, rouge-score, rarfile, pytest-xdist, py7zr, openpyxl, mwparserfromhell, lxml, langdetect, jiwer, isort, flake8, elasticsearch, datasets, conllu, bs4, black, bert-score, apache-beam
Running setup.py install for python-Levenshtein ... error
ERROR: Command errored out with exit status 1:
command: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\VKC~1\AppData\Local\Temp\pip-record-v7l7zitb\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\envs\env\Include\python-Levenshtein'
cwd: C:\Users\VKC~1\AppData\Local\Temp\pip-install-ynt_dbm4\python-levenshtein_c02e7e6f9def4629a475349654670ae9\
Complete output (27 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\StringMatcher.py -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\__init__.py -> build\lib.win-amd64-3.7\Levenshtein
running egg_info
writing python_Levenshtein.egg-info\PKG-INFO
writing dependency_links to python_Levenshtein.egg-info\dependency_links.txt
writing entry points to python_Levenshtein.egg-info\entry_points.txt
writing namespace_packages to python_Levenshtein.egg-info\namespace_packages.txt
writing requirements to python_Levenshtein.egg-info\requires.txt
writing top-level names to python_Levenshtein.egg-info\top_level.txt
reading manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '*pyc' found anywhere in distribution
warning: no previously-included files matching '*so' found anywhere in distribution
warning: no previously-included files matching '.project' found anywhere in distribution
warning: no previously-included files matching '.pydevproject' found anywhere in distribution
writing manifest file 'python_Levenshtein.egg-info\SOURCES.txt'
copying Levenshtein\_levenshtein.c -> build\lib.win-amd64-3.7\Levenshtein
copying Levenshtein\_levenshtein.h -> build\lib.win-amd64-3.7\Levenshtein
running build_ext
building 'Levenshtein._levenshtein' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\VKC~1\AppData\Local\Temp\pip-record-v7l7zitb\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\envs\env\Include\python-Levenshtein' Check the logs for full command output.
```
Here are conda and python versions:
```bat
(env) C:\testing\datasets>conda --version
conda 4.9.2
(env) C:\testing\datasets>python --version
Python 3.7.10
```
Please help me out. Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2301/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2300 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2300/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2300/comments | https://api.github.com/repos/huggingface/datasets/issues/2300/events | https://github.com/huggingface/datasets/issues/2300 | 873,928,169 | MDU6SXNzdWU4NzM5MjgxNjk= | 2,300 | Add VoxPopuli | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | open | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,957,860,000 | 1,645,038,817,000 | null | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** Voxpopuli
- **Description:** VoxPopuli is raw data is collected from 2009-2020 European Parliament event recordings
- **Paper:** https://arxiv.org/abs/2101.00390
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** biggest unlabeled speech dataset
**Note**: Since the dataset is so huge, we should only add the config `10k` in the beginning.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2300/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2300/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2299/comments | https://api.github.com/repos/huggingface/datasets/issues/2299/events | https://github.com/huggingface/datasets/issues/2299 | 873,914,717 | MDU6SXNzdWU4NzM5MTQ3MTc= | 2,299 | My iPhone | {
"login": "Jasonbuchanan1983",
"id": 82856229,
"node_id": "MDQ6VXNlcjgyODU2MjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/82856229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jasonbuchanan1983",
"html_url": "https://github.com/Jasonbuchanan1983",
"followers_url": "https://api.github.com/users/Jasonbuchanan1983/followers",
"following_url": "https://api.github.com/users/Jasonbuchanan1983/following{/other_user}",
"gists_url": "https://api.github.com/users/Jasonbuchanan1983/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jasonbuchanan1983/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jasonbuchanan1983/subscriptions",
"organizations_url": "https://api.github.com/users/Jasonbuchanan1983/orgs",
"repos_url": "https://api.github.com/users/Jasonbuchanan1983/repos",
"events_url": "https://api.github.com/users/Jasonbuchanan1983/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jasonbuchanan1983/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,953,871,000 | 1,627,032,256,000 | 1,620,029,858,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2299/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2298 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2298/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2298/comments | https://api.github.com/repos/huggingface/datasets/issues/2298/events | https://github.com/huggingface/datasets/pull/2298 | 873,771,942 | MDExOlB1bGxSZXF1ZXN0NjI4NDk2NjM2 | 2,298 | Mapping in the distributed setting | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,904,185,000 | 1,620,050,093,000 | 1,620,050,093,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2298",
"html_url": "https://github.com/huggingface/datasets/pull/2298",
"diff_url": "https://github.com/huggingface/datasets/pull/2298.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2298.patch",
"merged_at": 1620050093000
} | The barrier trick for distributed mapping as discussed on Thursday with @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2298/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2296 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2296/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2296/comments | https://api.github.com/repos/huggingface/datasets/issues/2296/events | https://github.com/huggingface/datasets/issues/2296 | 872,974,907 | MDU6SXNzdWU4NzI5NzQ5MDc= | 2,296 | 1 | {
"login": "zinnyi",
"id": 82880142,
"node_id": "MDQ6VXNlcjgyODgwMTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/82880142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zinnyi",
"html_url": "https://github.com/zinnyi",
"followers_url": "https://api.github.com/users/zinnyi/followers",
"following_url": "https://api.github.com/users/zinnyi/following{/other_user}",
"gists_url": "https://api.github.com/users/zinnyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zinnyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zinnyi/subscriptions",
"organizations_url": "https://api.github.com/users/zinnyi/orgs",
"repos_url": "https://api.github.com/users/zinnyi/repos",
"events_url": "https://api.github.com/users/zinnyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/zinnyi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,805,229,000 | 1,620,029,851,000 | 1,620,029,851,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2296/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2295/comments | https://api.github.com/repos/huggingface/datasets/issues/2295/events | https://github.com/huggingface/datasets/pull/2295 | 872,902,867 | MDExOlB1bGxSZXF1ZXN0NjI3NzY0NDk3 | 2,295 | Create ExtractManager | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2851292821,
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring",
"name": "refactoring",
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior"
}
] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"id": 6836458,
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"title": "1.10",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 29,
"state": "closed",
"created_at": 1623178113000,
"updated_at": 1626881809000,
"due_on": 1628146800000,
"closed_at": 1626881809000
} | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,802,814,000 | 1,626,099,123,000 | 1,625,731,909,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2295",
"html_url": "https://github.com/huggingface/datasets/pull/2295",
"diff_url": "https://github.com/huggingface/datasets/pull/2295.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2295.patch",
"merged_at": 1625731909000
} | Perform refactoring to decouple extract functionality. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2295/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2295/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2294/comments | https://api.github.com/repos/huggingface/datasets/issues/2294/events | https://github.com/huggingface/datasets/issues/2294 | 872,136,075 | MDU6SXNzdWU4NzIxMzYwNzU= | 2,294 | Slow #0 when using map to tokenize. | {
"login": "VerdureChen",
"id": 31714566,
"node_id": "MDQ6VXNlcjMxNzE0NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/31714566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VerdureChen",
"html_url": "https://github.com/VerdureChen",
"followers_url": "https://api.github.com/users/VerdureChen/followers",
"following_url": "https://api.github.com/users/VerdureChen/following{/other_user}",
"gists_url": "https://api.github.com/users/VerdureChen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VerdureChen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VerdureChen/subscriptions",
"organizations_url": "https://api.github.com/users/VerdureChen/orgs",
"repos_url": "https://api.github.com/users/VerdureChen/repos",
"events_url": "https://api.github.com/users/VerdureChen/events{/privacy}",
"received_events_url": "https://api.github.com/users/VerdureChen/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,769,633,000 | 1,620,126,011,000 | null | NONE | null | null | null | Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not args.overwrite_cache,
)` to tokenize by multiprocessing. However, I have found that when `num_proc`>1,the process _#0_ is much slower than others.
It looks like this:

It takes more than 12 hours for #0, while others just about half an hour. Could anyone tell me it is normal or not, and is there any methods to speed up it?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2294/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2293 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2293/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2293/comments | https://api.github.com/repos/huggingface/datasets/issues/2293/events | https://github.com/huggingface/datasets/pull/2293 | 872,079,385 | MDExOlB1bGxSZXF1ZXN0NjI3MDQzNzQ3 | 2,293 | imdb dataset from Don't Stop Pretraining Paper | {
"login": "BobbyManion",
"id": 52530809,
"node_id": "MDQ6VXNlcjUyNTMwODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BobbyManion",
"html_url": "https://github.com/BobbyManion",
"followers_url": "https://api.github.com/users/BobbyManion/followers",
"following_url": "https://api.github.com/users/BobbyManion/following{/other_user}",
"gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions",
"organizations_url": "https://api.github.com/users/BobbyManion/orgs",
"repos_url": "https://api.github.com/users/BobbyManion/repos",
"events_url": "https://api.github.com/users/BobbyManion/events{/privacy}",
"received_events_url": "https://api.github.com/users/BobbyManion/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,764,848,000 | 1,619,765,665,000 | 1,619,765,665,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2293",
"html_url": "https://github.com/huggingface/datasets/pull/2293",
"diff_url": "https://github.com/huggingface/datasets/pull/2293.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2293.patch",
"merged_at": null
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2293/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2292/comments | https://api.github.com/repos/huggingface/datasets/issues/2292/events | https://github.com/huggingface/datasets/pull/2292 | 871,230,183 | MDExOlB1bGxSZXF1ZXN0NjI2MjgzNTYy | 2,292 | Fixed typo seperate->separate | {
"login": "laksh9950",
"id": 32505743,
"node_id": "MDQ6VXNlcjMyNTA1NzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/32505743?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laksh9950",
"html_url": "https://github.com/laksh9950",
"followers_url": "https://api.github.com/users/laksh9950/followers",
"following_url": "https://api.github.com/users/laksh9950/following{/other_user}",
"gists_url": "https://api.github.com/users/laksh9950/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laksh9950/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laksh9950/subscriptions",
"organizations_url": "https://api.github.com/users/laksh9950/orgs",
"repos_url": "https://api.github.com/users/laksh9950/repos",
"events_url": "https://api.github.com/users/laksh9950/events{/privacy}",
"received_events_url": "https://api.github.com/users/laksh9950/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,714,453,000 | 1,619,789,358,000 | 1,619,787,792,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2292",
"html_url": "https://github.com/huggingface/datasets/pull/2292",
"diff_url": "https://github.com/huggingface/datasets/pull/2292.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2292.patch",
"merged_at": 1619787792000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2292/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2291/comments | https://api.github.com/repos/huggingface/datasets/issues/2291/events | https://github.com/huggingface/datasets/pull/2291 | 871,216,757 | MDExOlB1bGxSZXF1ZXN0NjI2MjcyNzE5 | 2,291 | Don't copy recordbatches in memory during a table deepcopy | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,713,565,000 | 1,619,714,075,000 | 1,619,714,074,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2291",
"html_url": "https://github.com/huggingface/datasets/pull/2291",
"diff_url": "https://github.com/huggingface/datasets/pull/2291.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2291.patch",
"merged_at": 1619714073000
} | Fix issue #2276 and hopefully #2134
The recordbatches of the `IndexedTableMixin` used to speed up queries to the table were copied in memory during a table deepcopy.
This resulted in `concatenate_datasets`, `load_from_disk` and other methods to always bring the data in memory.
I fixed the copy similarly to #2287 and updated the test to make sure it doesn't happen again (added a test for deepcopy + make sure that the immutable arrow objects are passed to the copied table without being copied).
The issue was not caught by our tests because the total allocated bytes value in PyArrow isn't updated when deepcopying recordbatches: the copy in memory wasn't detected. This behavior looks like a bug in PyArrow, I'll open a ticket on JIRA.
Thanks @samsontmr , @TaskManager91 and @mariosasko for the help
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2291/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2291/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2290/comments | https://api.github.com/repos/huggingface/datasets/issues/2290/events | https://github.com/huggingface/datasets/pull/2290 | 871,145,817 | MDExOlB1bGxSZXF1ZXN0NjI2MjEyNTIz | 2,290 | Bbaw egyptian | {
"login": "phiwi",
"id": 54144149,
"node_id": "MDQ6VXNlcjU0MTQ0MTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/54144149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phiwi",
"html_url": "https://github.com/phiwi",
"followers_url": "https://api.github.com/users/phiwi/followers",
"following_url": "https://api.github.com/users/phiwi/following{/other_user}",
"gists_url": "https://api.github.com/users/phiwi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phiwi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phiwi/subscriptions",
"organizations_url": "https://api.github.com/users/phiwi/orgs",
"repos_url": "https://api.github.com/users/phiwi/repos",
"events_url": "https://api.github.com/users/phiwi/events{/privacy}",
"received_events_url": "https://api.github.com/users/phiwi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,710,078,000 | 1,620,321,925,000 | 1,620,321,925,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2290",
"html_url": "https://github.com/huggingface/datasets/pull/2290",
"diff_url": "https://github.com/huggingface/datasets/pull/2290.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2290.patch",
"merged_at": 1620321925000
} | This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it again now, so that it is in the state as used in my paper (seee documentation). I hope it satiesfies your requirements and wish every scientist out their loads of fun deciphering a 5.000 years old language :-) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2290/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2290/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2289 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2289/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2289/comments | https://api.github.com/repos/huggingface/datasets/issues/2289/events | https://github.com/huggingface/datasets/pull/2289 | 871,118,573 | MDExOlB1bGxSZXF1ZXN0NjI2MTg5MDU3 | 2,289 | Allow collaborators to self-assign issues | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,708,826,000 | 1,619,807,296,000 | 1,619,807,296,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2289",
"html_url": "https://github.com/huggingface/datasets/pull/2289",
"diff_url": "https://github.com/huggingface/datasets/pull/2289.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2289.patch",
"merged_at": 1619807296000
} | Allow collaborators (without write access to the repository) to self-assign issues.
In order to self-assign an issue, they have to comment it with the word: `#take` or `#self-assign`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2289/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2288/comments | https://api.github.com/repos/huggingface/datasets/issues/2288/events | https://github.com/huggingface/datasets/issues/2288 | 871,111,235 | MDU6SXNzdWU4NzExMTEyMzU= | 2,288 | Load_dataset for local CSV files | {
"login": "sstojanoska",
"id": 17052700,
"node_id": "MDQ6VXNlcjE3MDUyNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/17052700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sstojanoska",
"html_url": "https://github.com/sstojanoska",
"followers_url": "https://api.github.com/users/sstojanoska/followers",
"following_url": "https://api.github.com/users/sstojanoska/following{/other_user}",
"gists_url": "https://api.github.com/users/sstojanoska/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sstojanoska/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sstojanoska/subscriptions",
"organizations_url": "https://api.github.com/users/sstojanoska/orgs",
"repos_url": "https://api.github.com/users/sstojanoska/repos",
"events_url": "https://api.github.com/users/sstojanoska/events{/privacy}",
"received_events_url": "https://api.github.com/users/sstojanoska/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,708,470,000 | 1,623,764,966,000 | 1,623,764,966,000 | NONE | null | null | null | The method load_dataset fails to correctly load a dataset from csv.
Moreover, I am working on a token-classification task ( POS tagging) , where each row in my CSV contains two columns each of them having a list of strings.
row example:
```tokens | labels
['I' , 'am', 'John'] | ['PRON', 'AUX', 'PROPN' ]
```
The method, loads each list as a string: (i.g "['I' , 'am', 'John']").
To solve this issue, I copied the Datasets.Features, created Sequence types ( instead of Value) and tried to cast the features type
```
new_features['tokens'] = Sequence(feature=Value(dtype='string', id=None))
new_features['labels'] = Sequence(feature=ClassLabel(num_classes=len(tag2idx), names=list(unique_tags)))
dataset = dataset.cast(new_features)
```
but I got the following error
```
ArrowNotImplementedError: Unsupported cast from string to list using function cast_list
```
Moreover, I tried to set feature parameter in load_dataset method, to my new_features, but this fails as well.
How can this be solved ? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2288/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2287/comments | https://api.github.com/repos/huggingface/datasets/issues/2287/events | https://github.com/huggingface/datasets/pull/2287 | 871,063,374 | MDExOlB1bGxSZXF1ZXN0NjI2MTQ0MTQ3 | 2,287 | Avoid copying table's record batches | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,705,701,000 | 1,619,714,063,000 | 1,619,714,062,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2287",
"html_url": "https://github.com/huggingface/datasets/pull/2287",
"diff_url": "https://github.com/huggingface/datasets/pull/2287.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2287.patch",
"merged_at": null
} | Fixes #2276 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2287/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2286 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2286/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2286/comments | https://api.github.com/repos/huggingface/datasets/issues/2286/events | https://github.com/huggingface/datasets/pull/2286 | 871,032,393 | MDExOlB1bGxSZXF1ZXN0NjI2MTE5MTE2 | 2,286 | Fix metadata validation with config names | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,703,872,000 | 1,619,705,249,000 | 1,619,705,248,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2286",
"html_url": "https://github.com/huggingface/datasets/pull/2286",
"diff_url": "https://github.com/huggingface/datasets/pull/2286.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2286.patch",
"merged_at": 1619705248000
} | I noticed in https://github.com/huggingface/datasets/pull/2280 that the metadata validator doesn't parse the tags in the readme properly when then contain the tags per config. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2286/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2285/comments | https://api.github.com/repos/huggingface/datasets/issues/2285/events | https://github.com/huggingface/datasets/issues/2285 | 871,005,236 | MDU6SXNzdWU4NzEwMDUyMzY= | 2,285 | Help understanding how to build a dataset for language modeling as with the old TextDataset | {
"login": "danieldiezmallo",
"id": 46021411,
"node_id": "MDQ6VXNlcjQ2MDIxNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/46021411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danieldiezmallo",
"html_url": "https://github.com/danieldiezmallo",
"followers_url": "https://api.github.com/users/danieldiezmallo/followers",
"following_url": "https://api.github.com/users/danieldiezmallo/following{/other_user}",
"gists_url": "https://api.github.com/users/danieldiezmallo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danieldiezmallo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danieldiezmallo/subscriptions",
"organizations_url": "https://api.github.com/users/danieldiezmallo/orgs",
"repos_url": "https://api.github.com/users/danieldiezmallo/repos",
"events_url": "https://api.github.com/users/danieldiezmallo/events{/privacy}",
"received_events_url": "https://api.github.com/users/danieldiezmallo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,702,205,000 | 1,621,408,965,000 | 1,621,408,959,000 | NONE | null | null | null | Hello,
I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line overpasses the normal 512 tokens limit of most tokenizers.
I would like to understand what is the process to build a text dataset that tokenizes each line, having previously split the documents in the dataset into lines of a "tokenizable" size, as the old TextDataset class would do, where you only had to do the following, and a tokenized dataset without text loss would be available to pass to a DataCollator:
```
model_checkpoint = 'distilbert-base-uncased'
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
from transformers import TextDataset
dataset = TextDataset(
tokenizer=tokenizer,
file_path="path/to/text_file.txt",
block_size=512,
)
```
For now, what I have is the following, which, of course, throws an error because each line is longer than the maximum block size in the tokenizer:
```
import datasets
dataset = datasets.load_dataset('path/to/text_file.txt')
model_checkpoint = 'distilbert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
def tokenize_function(examples):
return tokenizer(examples["text"])
tokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
tokenized_datasets
```
So what would be the "standard" way of creating a dataset in the way it was done before?
Thank you very much for the help :)) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2285/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2284 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2284/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2284/comments | https://api.github.com/repos/huggingface/datasets/issues/2284/events | https://github.com/huggingface/datasets/pull/2284 | 870,932,710 | MDExOlB1bGxSZXF1ZXN0NjI2MDM5MDc5 | 2,284 | Initialize Imdb dataset as used in Don't Stop Pretraining Paper | {
"login": "BobbyManion",
"id": 52530809,
"node_id": "MDQ6VXNlcjUyNTMwODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BobbyManion",
"html_url": "https://github.com/BobbyManion",
"followers_url": "https://api.github.com/users/BobbyManion/followers",
"following_url": "https://api.github.com/users/BobbyManion/following{/other_user}",
"gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions",
"organizations_url": "https://api.github.com/users/BobbyManion/orgs",
"repos_url": "https://api.github.com/users/BobbyManion/repos",
"events_url": "https://api.github.com/users/BobbyManion/events{/privacy}",
"received_events_url": "https://api.github.com/users/BobbyManion/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,697,158,000 | 1,619,700,874,000 | 1,619,700,874,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2284",
"html_url": "https://github.com/huggingface/datasets/pull/2284",
"diff_url": "https://github.com/huggingface/datasets/pull/2284.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2284.patch",
"merged_at": null
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2284/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2283/comments | https://api.github.com/repos/huggingface/datasets/issues/2283/events | https://github.com/huggingface/datasets/pull/2283 | 870,926,475 | MDExOlB1bGxSZXF1ZXN0NjI2MDM0MDk5 | 2,283 | Initialize imdb dataset from don't stop pretraining paper | {
"login": "BobbyManion",
"id": 52530809,
"node_id": "MDQ6VXNlcjUyNTMwODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BobbyManion",
"html_url": "https://github.com/BobbyManion",
"followers_url": "https://api.github.com/users/BobbyManion/followers",
"following_url": "https://api.github.com/users/BobbyManion/following{/other_user}",
"gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions",
"organizations_url": "https://api.github.com/users/BobbyManion/orgs",
"repos_url": "https://api.github.com/users/BobbyManion/repos",
"events_url": "https://api.github.com/users/BobbyManion/events{/privacy}",
"received_events_url": "https://api.github.com/users/BobbyManion/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,696,694,000 | 1,619,697,024,000 | 1,619,697,024,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2283",
"html_url": "https://github.com/huggingface/datasets/pull/2283",
"diff_url": "https://github.com/huggingface/datasets/pull/2283.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2283.patch",
"merged_at": null
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2283/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2282/comments | https://api.github.com/repos/huggingface/datasets/issues/2282/events | https://github.com/huggingface/datasets/pull/2282 | 870,900,332 | MDExOlB1bGxSZXF1ZXN0NjI2MDEyMzM3 | 2,282 | Initialize imdb dataset from don't stop pretraining paper | {
"login": "BobbyManion",
"id": 52530809,
"node_id": "MDQ6VXNlcjUyNTMwODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BobbyManion",
"html_url": "https://github.com/BobbyManion",
"followers_url": "https://api.github.com/users/BobbyManion/followers",
"following_url": "https://api.github.com/users/BobbyManion/following{/other_user}",
"gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions",
"organizations_url": "https://api.github.com/users/BobbyManion/orgs",
"repos_url": "https://api.github.com/users/BobbyManion/repos",
"events_url": "https://api.github.com/users/BobbyManion/events{/privacy}",
"received_events_url": "https://api.github.com/users/BobbyManion/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,619,695,076,000 | 1,619,696,631,000 | 1,619,696,631,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2282",
"html_url": "https://github.com/huggingface/datasets/pull/2282",
"diff_url": "https://github.com/huggingface/datasets/pull/2282.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2282.patch",
"merged_at": null
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2282/timeline | null | true |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.