url (string, 58-61 chars) | repository_url (string, 1 class, 1 value) | labels_url (string, 72-75 chars) | comments_url (string, 67-70 chars) | events_url (string, 65-68 chars) | html_url (string, 46-51 chars) | id (int64, 599M-1.23B) | node_id (string, 18-32 chars) | number (int64, 1-4.31k) | title (string, 1-276 chars) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64, 1,587B-1,652B) | updated_at (int64, 1,587B-1,652B) | closed_at (int64, 1,587B-1,652B, nullable ⌀) | author_association (string, 3 classes) | active_lock_reason (null) | draft (bool, 2 classes) | pull_request (dict) | body (string, 0-228k chars, nullable ⌀) | reactions (dict) | timeline_url (string, 67-70 chars) | performed_via_github_app (null) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1774/comments | https://api.github.com/repos/huggingface/datasets/issues/1774/events | https://github.com/huggingface/datasets/issues/1774 | 792,730,559 | MDU6SXNzdWU3OTI3MzA1NTk= | 1,774 | is it possible to make slice to be more compatible like python list and numpy? | {
"login": "world2vec",
"id": 7607120,
"node_id": "MDQ6VXNlcjc2MDcxMjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/world2vec",
"html_url": "https://github.com/world2vec",
"followers_url": "https://api.github.com/users/world2vec/followers",
"following_url": "https://api.github.com/users/world2vec/following{/other_user}",
"gists_url": "https://api.github.com/users/world2vec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/world2vec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/world2vec/subscriptions",
"organizations_url": "https://api.github.com/users/world2vec/orgs",
"repos_url": "https://api.github.com/users/world2vec/repos",
"events_url": "https://api.github.com/users/world2vec/events{/privacy}",
"received_events_url": "https://api.github.com/users/world2vec/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,468,952,000 | 1,611,531,378,000 | null | NONE | null | null | null | Hi,
I am seeing the error below when slicing past the end of the dataset:
```
AssertionError: Requested slice [:10000000000000000] incompatible with 20 examples.
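```

A sketch of the list/numpy-style clamping being asked for, emulated in user code (the 20-example size matches the error above):

```python
from datasets import Dataset

dataset = Dataset.from_dict({"x": list(range(20))})

# clamp the requested end to the dataset length, as list/numpy slicing would
end = min(10_000_000_000_000_000, len(dataset))
batch = dataset[:end]  # returns all 20 examples instead of raising
```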
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1774/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1773 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1773/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1773/comments | https://api.github.com/repos/huggingface/datasets/issues/1773/events | https://github.com/huggingface/datasets/issues/1773 | 792,708,160 | MDU6SXNzdWU3OTI3MDgxNjA= | 1,773 | bug in loading datasets | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,456,825,000 | 1,630,918,486,000 | 1,628,100,781,000 | NONE | null | null | null | Hi,
I need to load a dataset, and I use these commands:
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files={'train': 'sick/train.csv',
'test': 'sick/test.csv',
'validation': 'sick/validation.csv'})
print(dataset['validation'])
```
The files such as sick/train.csv are simple csv files representing the data. I am getting the error below; do you have an idea how I can solve this? Thank you @lhoestq
```
Using custom data configuration default
Downloading and preparing dataset csv/default-61468fc71a743ec1 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /julia/cache_home_2/datasets/csv/default-61468fc71a743ec1/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2...
Traceback (most recent call last):
File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets-1.2.0-py3.7.egg/datasets/builder.py", line 485, in incomplete_dir
yield tmp_dir
File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets-1.2.0-py3.7.egg/datasets/builder.py", line 527, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets-1.2.0-py3.7.egg/datasets/builder.py", line 604, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets-1.2.0-py3.7.egg/datasets/builder.py", line 959, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/tqdm-4.49.0-py3.7.egg/tqdm/std.py", line 1133, in __iter__
for obj in iterable:
File "/julia/cache_home_2/modules/datasets_modules/datasets/csv/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2/csv.py", line 129, in _generate_tables
for batch_idx, df in enumerate(csv_file_reader):
File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/pandas-1.2.0-py3.7-linux-x86_64.egg/pandas/io/parsers.py", line 1029, in __next__
return self.get_chunk()
File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/pandas-1.2.0-py3.7-linux-x86_64.egg/pandas/io/parsers.py", line 1079, in get_chunk
return self.read(nrows=size)
File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/pandas-1.2.0-py3.7-linux-x86_64.egg/pandas/io/parsers.py", line 1052, in read
index, columns, col_dict = self._engine.read(nrows)
File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/pandas-1.2.0-py3.7-linux-x86_64.egg/pandas/io/parsers.py", line 2056, in read
data = self._reader.read(nrows)
File "pandas/_libs/parsers.pyx", line 756, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 783, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 827, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 814, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 1951, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 37, saw 2
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "write_sick.py", line 19, in <module>
'validation': 'sick/validation.csv'})
File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets-1.2.0-py3.7.egg/datasets/load.py", line 612, in load_dataset
ignore_verifications=ignore_verifications,
File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets-1.2.0-py3.7.egg/datasets/builder.py", line 534, in download_and_prepare
self._save_info()
File "/julia/libs/anaconda3/envs/success/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/julia/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets-1.2.0-py3.7.egg/datasets/builder.py", line 491, in incomplete_dir
shutil.rmtree(tmp_dir)
File "/julia/libs/anaconda3/envs/success/lib/python3.7/shutil.py", line 498, in rmtree
onerror(os.rmdir, path, sys.exc_info())
File "/julia/libs/anaconda3/envs/success/lib/python3.7/shutil.py", line 496, in rmtree
os.rmdir(path)
OSError: [Errno 39] Directory not empty: '/julia/cache_home_2/datasets/csv/default-61468fc71a743ec1/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2.incomplete'
```
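The pandas error above ("Expected 1 fields in line 37, saw 2") usually points to a delimiter or quoting mismatch rather than a bug in `datasets` itself. A hedged sketch, assuming the SICK files are tab-separated and that the csv builder forwards these `pandas.read_csv`-style options (the exact parameter names are my assumption; check the csv config of your version):

```python
import csv
from datasets import load_dataset

dataset = load_dataset(
    "csv",
    data_files={"train": "sick/train.csv"},
    delimiter="\t",          # assumption: the SICK files are tab-separated
    quoting=csv.QUOTE_NONE,  # keep stray quote characters from merging fields
)
```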
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1773/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1772/comments | https://api.github.com/repos/huggingface/datasets/issues/1772/events | https://github.com/huggingface/datasets/issues/1772 | 792,703,797 | MDU6SXNzdWU3OTI3MDM3OTc= | 1,772 | Adding SICK dataset | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,454,531,000 | 1,612,540,165,000 | 1,612,540,165,000 | NONE | null | null | null | Hi
It would be great to include the SICK dataset.
## Adding a Dataset
- **Name:** SICK
- **Description:** a well-known entailment dataset
- **Paper:** http://marcobaroni.org/composes/sick.html
- **Data:** http://marcobaroni.org/composes/sick.html
- **Motivation:** this is an important NLI benchmark
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
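A minimal sketch of what the core of such a loading script might look like (the column and label names here are my assumptions from the SICK release, not a final implementation):

```python
import datasets

class Sick(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            description="SICK: Sentences Involving Compositional Knowledge.",
            features=datasets.Features(
                {
                    "sentence_A": datasets.Value("string"),
                    "sentence_B": datasets.Value("string"),
                    "entailment_label": datasets.ClassLabel(
                        names=["NEUTRAL", "ENTAILMENT", "CONTRADICTION"]
                    ),
                    "relatedness_score": datasets.Value("float32"),
                }
            ),
            homepage="http://marcobaroni.org/composes/sick.html",
        )

    def _split_generators(self, dl_manager):
        # download URL and TSV parsing omitted; see the linked instructions
        raise NotImplementedError

    def _generate_examples(self, filepath):
        raise NotImplementedError
```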
thanks | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1772/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1771/comments | https://api.github.com/repos/huggingface/datasets/issues/1771/events | https://github.com/huggingface/datasets/issues/1771 | 792,701,276 | MDU6SXNzdWU3OTI3MDEyNzY= | 1,771 | Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/csv/csv.py | {
"login": "world2vec",
"id": 7607120,
"node_id": "MDQ6VXNlcjc2MDcxMjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/world2vec",
"html_url": "https://github.com/world2vec",
"followers_url": "https://api.github.com/users/world2vec/followers",
"following_url": "https://api.github.com/users/world2vec/following{/other_user}",
"gists_url": "https://api.github.com/users/world2vec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/world2vec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/world2vec/subscriptions",
"organizations_url": "https://api.github.com/users/world2vec/orgs",
"repos_url": "https://api.github.com/users/world2vec/repos",
"events_url": "https://api.github.com/users/world2vec/events{/privacy}",
"received_events_url": "https://api.github.com/users/world2vec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,453,232,000 | 1,611,529,589,000 | 1,611,529,589,000 | NONE | null | null | null | Hi,
When I call load_dataset on local csv files, the error below occurs; it looks like raw.githubusercontent.com is blocked by the Chinese government. But why does it need to download csv.py at all? Shouldn't that file be included when the package is installed with pip?
```
Traceback (most recent call last):
File "/home/tom/pyenv/pystory/lib/python3.6/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/tom/pyenv/pystory/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 343, in cached_path
max_retries=download_config.max_retries,
File "/home/tom/pyenv/pystory/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 617, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/csv/csv.py
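```

While the domain stays blocked, one workaround sketch is to obtain `csv.py` some other way and point `load_dataset` at the local copy, so nothing needs to be fetched (passing a local script path is supported):

```python
from datasets import load_dataset

# "./csv.py" is a locally saved copy of the processing script
dataset = load_dataset("./csv.py", data_files={"train": "train.csv"})
```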
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1771/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1771/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1770/comments | https://api.github.com/repos/huggingface/datasets/issues/1770/events | https://github.com/huggingface/datasets/issues/1770 | 792,698,148 | MDU6SXNzdWU3OTI2OTgxNDg= | 1,770 | how can I combine 2 dataset with different/same features? | {
"login": "world2vec",
"id": 7607120,
"node_id": "MDQ6VXNlcjc2MDcxMjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/world2vec",
"html_url": "https://github.com/world2vec",
"followers_url": "https://api.github.com/users/world2vec/followers",
"following_url": "https://api.github.com/users/world2vec/following{/other_user}",
"gists_url": "https://api.github.com/users/world2vec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/world2vec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/world2vec/subscriptions",
"organizations_url": "https://api.github.com/users/world2vec/orgs",
"repos_url": "https://api.github.com/users/world2vec/repos",
"events_url": "https://api.github.com/users/world2vec/events{/privacy}",
"received_events_url": "https://api.github.com/users/world2vec/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,451,566,000 | 1,611,531,834,000 | null | NONE | null | null | null | I would like to combine 2 datasets with a one-to-one mapping, like ds = zip(ds1, ds2):
ds1: {'text'}, ds2: {'text'}, combine ds:{'src', 'tgt'}
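A sketch of one way to get that zip-like behaviour (`axis=1` concatenation is a suggestion of mine and needs a reasonably recent `datasets` release); for the second case below, the rename step can simply be dropped:

```python
from datasets import Dataset, concatenate_datasets

ds1 = Dataset.from_dict({"text": ["hello", "world"]})
ds2 = Dataset.from_dict({"text": ["bonjour", "monde"]})

# give the equal-length datasets distinct column names, then zip column-wise
ds1 = ds1.rename_column("text", "src")
ds2 = ds2.rename_column("text", "tgt")
combined = concatenate_datasets([ds1, ds2], axis=1)
print(combined[0])  # {'src': 'hello', 'tgt': 'bonjour'}
```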
or with different features:
ds1: {'src'}, ds2: {'tgt'}, combine ds:{'src', 'tgt'} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1770/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1769/comments | https://api.github.com/repos/huggingface/datasets/issues/1769/events | https://github.com/huggingface/datasets/issues/1769 | 792,523,284 | MDU6SXNzdWU3OTI1MjMyODQ= | 1,769 | _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2 | {
"login": "shuaihuaiyi",
"id": 14048129,
"node_id": "MDQ6VXNlcjE0MDQ4MTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/14048129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuaihuaiyi",
"html_url": "https://github.com/shuaihuaiyi",
"followers_url": "https://api.github.com/users/shuaihuaiyi/followers",
"following_url": "https://api.github.com/users/shuaihuaiyi/following{/other_user}",
"gists_url": "https://api.github.com/users/shuaihuaiyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuaihuaiyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuaihuaiyi/subscriptions",
"organizations_url": "https://api.github.com/users/shuaihuaiyi/orgs",
"repos_url": "https://api.github.com/users/shuaihuaiyi/repos",
"events_url": "https://api.github.com/users/shuaihuaiyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuaihuaiyi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,396,780,000 | 1,611,570,237,000 | null | NONE | null | null | null | It may be a bug in how Datasets uses multiprocessing: when I disable multiprocessing by setting num_proc to None, everything works fine.
The script I use is https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py
Script args:
```
--model_name_or_path
../../../model/chinese-roberta-wwm-ext
--train_file
/nfs/volume-377-2/bert/data/test/train.txt
--output_dir
test
--do_train
--per_device_train_batch_size
2
--gradient_accumulation_steps
2
--learning_rate
1e-4
--max_steps
1000
--warmup_steps
10
--save_steps
1000
--save_total_limit
1
--seed
23333
--max_seq_length
512
--preprocessing_num_workers
2
--cache_dir
/nfs/volume-377-2/bert/data/test/cache
```
Here, `/nfs/volume-377-2/bert/data/test/train.txt` is just a toy example with 10000 lines of random strings; you should be able to reproduce this error easily.
Full Traceback:
```
Traceback (most recent call last):
File "/nfs/volume-377-2/bert/transformers/examples/language-modeling/run_mlm_wwm.py", line 398, in <module>
main()
File "/nfs/volume-377-2/bert/transformers/examples/language-modeling/run_mlm_wwm.py", line 325, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1318, in map
transformed_shards = [r.get() for r in results]
File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1318, in <listcomp>
transformed_shards = [r.get() for r in results]
File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/multiprocess/pool.py", line 644, in get
raise self._value
File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/multiprocess/pool.py", line 424, in _handle_tasks
put(task)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/multiprocess/connection.py", line 209, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/dill/_dill.py", line 446, in dump
StockPickler.dump(self, obj)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 409, in dump
self.save(obj)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/dill/_dill.py", line 1438, in save_function
obj.__dict__, fkwdefaults), obj=obj)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/dill/_dill.py", line 1170, in save_cell
pickler.save_reduce(_create_cell, (f,), obj=obj)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 605, in save_reduce
save(cls)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/dill/_dill.py", line 1365, in save_type
obj.__bases__, _dict), obj=obj)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/home/luban/miniconda3/envs/py36/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 507, in save
self.save_global(obj, rv)
File "/home/luban/miniconda3/envs/py36/lib/python3.6/pickle.py", line 927, in save_global
(obj, module_name, name))
_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union
```
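A workaround sketch based on my reading of the traceback (dill chokes while pickling the function handed to `Dataset.map`): keep that function at module level and avoid capturing `typing`-annotated objects in a closure, for example:

```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("hfl/chinese-roberta-wwm-ext")

def tokenize_function(examples):
    # defined at module scope, so the multiprocess workers can pickle it
    return tokenizer(examples["text"], truncation=True, max_length=512)

# raw_datasets is assumed to be the output of load_dataset(...):
# tokenized = raw_datasets.map(tokenize_function, batched=True, num_proc=2)
```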
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1769/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1768 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1768/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1768/comments | https://api.github.com/repos/huggingface/datasets/issues/1768/events | https://github.com/huggingface/datasets/pull/1768 | 792,150,745 | MDExOlB1bGxSZXF1ZXN0NTYwMDgyNzIx | 1,768 | Mention kwargs in the Dataset Formatting docs | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,333,800,000 | 1,612,096,390,000 | 1,611,566,099,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1768",
"html_url": "https://github.com/huggingface/datasets/pull/1768",
"diff_url": "https://github.com/huggingface/datasets/pull/1768.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1768.patch",
"merged_at": 1611566099000
} | Hi,
This was discussed in Issue #1762 where the docs didn't mention that keyword arguments to `datasets.Dataset.set_format()` are allowed.
To prevent people from having to check the code/method docs, I just added a couple of lines in the docs.
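For illustration, `output_all_columns` is one keyword argument the method already accepts beyond `type` and `columns` (a minimal sketch):

```python
from datasets import Dataset

dataset = Dataset.from_dict({"input_ids": [[0, 1], [2, 3]], "label": [0, 1]})
# extra keyword arguments are accepted; output_all_columns keeps the
# unformatted columns accessible alongside the formatted ones
dataset.set_format(type="numpy", columns=["input_ids"], output_all_columns=True)
```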
Please let me know your thoughts on this.
Thanks,
Gunjan
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1768/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1767/comments | https://api.github.com/repos/huggingface/datasets/issues/1767/events | https://github.com/huggingface/datasets/pull/1767 | 792,068,497 | MDExOlB1bGxSZXF1ZXN0NTYwMDE2MzE2 | 1,767 | Add Librispeech ASR | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,327,277,000 | 1,611,607,087,000 | 1,611,607,062,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1767",
"html_url": "https://github.com/huggingface/datasets/pull/1767",
"diff_url": "https://github.com/huggingface/datasets/pull/1767.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1767.patch",
"merged_at": 1611607062000
} | This PR adds the librispeech asr dataset: https://www.tensorflow.org/datasets/catalog/librispeech
There are 2 configs, "clean" and "other"; "clean" has two "train" splits, hence the names "train.100" and "train.360".
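Once merged, usage would presumably look like this (a sketch based on the config and split names above, not a tested call; the column name `file` is an assumption):

```python
from datasets import load_dataset

ds = load_dataset("librispeech_asr", "clean", split="train.100")
print(ds[0]["file"])  # path to the audio file rather than a float32 array
```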
As suggested by @lhoestq, due to the enormous size of the dataset in `.arrow` format, the speech files are not directly prepared to a float32-array, but instead just the path to the array file is stored. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1767/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1766/comments | https://api.github.com/repos/huggingface/datasets/issues/1766/events | https://github.com/huggingface/datasets/issues/1766 | 792,044,105 | MDU6SXNzdWU3OTIwNDQxMDU= | 1,766 | Issues when run two programs compute the same metrics | {
"login": "lamthuy",
"id": 8089862,
"node_id": "MDQ6VXNlcjgwODk4NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8089862?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lamthuy",
"html_url": "https://github.com/lamthuy",
"followers_url": "https://api.github.com/users/lamthuy/followers",
"following_url": "https://api.github.com/users/lamthuy/following{/other_user}",
"gists_url": "https://api.github.com/users/lamthuy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lamthuy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lamthuy/subscriptions",
"organizations_url": "https://api.github.com/users/lamthuy/orgs",
"repos_url": "https://api.github.com/users/lamthuy/repos",
"events_url": "https://api.github.com/users/lamthuy/events{/privacy}",
"received_events_url": "https://api.github.com/users/lamthuy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,325,375,000 | 1,612,262,286,000 | 1,612,262,286,000 | NONE | null | null | null | I got the following error when running two different programs that both compute SacreBLEU metrics. It seems that both read and write to the same location (.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow) where the batches are cached. A workaround sketch and the full traceback follow:
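`load_metric` accepts an `experiment_id` argument, so giving each concurrently running program its own id should keep their cache files separate (my reading of the `default_experiment` name above):

```python
from datasets import load_metric

# each concurrent program gets its own metric cache namespace
metric = load_metric("sacrebleu", experiment_id="program_1")
```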
```
File "train_matching_min.py", line 160, in <module>ch_9_label
avg_loss = valid(epoch, args.batch, args.validation, args.with_label)
File "train_matching_min.py", line 93, in valid
bleu += eval.compute()
File "/u/tlhoang/projects/seal/match/models/eval.py", line 23, in compute
return self.metric.compute()['score']
File "/dccstor/know/anaconda3/lib/python3.7/site-packages/datasets/metric.py", line 387, in compute
self._finalize()
File "/dccstor/know/anaconda3/lib/python3.7/site-packages/datasets/metric.py", line 355, in _finalize
self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths]))
File "/dccstor/know/anaconda3/lib/python3.7/site-packages/datasets/arrow_reader.py", line 231, in read_files
pa_table = self._read_files(files)
File "/dccstor/know/anaconda3/lib/python3.7/site-packages/datasets/arrow_reader.py", line 170, in _read_files
pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
File "/dccstor/know/anaconda3/lib/python3.7/site-packages/datasets/arrow_reader.py", line 299, in _get_dataset_from_filename
pa_table = f.read_all()
File "pyarrow/ipc.pxi", line 481, in pyarrow.lib.RecordBatchReader.read_all
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Expected to read 1819307375 metadata bytes, but only read 454396
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1766/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1765/comments | https://api.github.com/repos/huggingface/datasets/issues/1765/events | https://github.com/huggingface/datasets/issues/1765 | 791,553,065 | MDU6SXNzdWU3OTE1NTMwNjU= | 1,765 | Error iterating over Dataset with DataLoader | {
"login": "EvanZ",
"id": 1295082,
"node_id": "MDQ6VXNlcjEyOTUwODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1295082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EvanZ",
"html_url": "https://github.com/EvanZ",
"followers_url": "https://api.github.com/users/EvanZ/followers",
"following_url": "https://api.github.com/users/EvanZ/following{/other_user}",
"gists_url": "https://api.github.com/users/EvanZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EvanZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EvanZ/subscriptions",
"organizations_url": "https://api.github.com/users/EvanZ/orgs",
"repos_url": "https://api.github.com/users/EvanZ/repos",
"events_url": "https://api.github.com/users/EvanZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/EvanZ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,269,805,000 | 1,638,879,753,000 | 1,611,373,454,000 | NONE | null | null | null | I have a Dataset that I've mapped a tokenizer over:
```
encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids'])
encoded_dataset[:1]
```
```
{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),
'input_ids': tensor([[ 101, 178, 1198, 1400, 1714, 22233, 21365, 4515, 8618, 1113,
102]]),
'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])}
```
When I try to iterate as in the docs, I get errors:
```
dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_sampler=32)
next(iter(dataloader))
```
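For comparison, a sketch of the call I would expect to work if `batch_size` rather than `batch_sampler` is the intended argument (`batch_sampler` expects a sampler object, not an int):

```python
dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_size=32)
batch = next(iter(dataloader))
```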
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-45-05180ba8aa35> in <module>()
1 dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_sampler=32)
----> 2 next(iter(dataloader))
3 frames
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __init__(self, loader)
411 self._timeout = loader.timeout
412 self._collate_fn = loader.collate_fn
--> 413 self._sampler_iter = iter(self._index_sampler)
414 self._base_seed = torch.empty((), dtype=torch.int64).random_(generator=loader.generator).item()
415 self._persistent_workers = loader.persistent_workers
TypeError: 'int' object is not iterable
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1765/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1764/comments | https://api.github.com/repos/huggingface/datasets/issues/1764/events | https://github.com/huggingface/datasets/issues/1764 | 791,486,860 | MDU6SXNzdWU3OTE0ODY4NjA= | 1,764 | Connection Issues | {
"login": "SaeedNajafi",
"id": 12455298,
"node_id": "MDQ6VXNlcjEyNDU1Mjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/12455298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaeedNajafi",
"html_url": "https://github.com/SaeedNajafi",
"followers_url": "https://api.github.com/users/SaeedNajafi/followers",
"following_url": "https://api.github.com/users/SaeedNajafi/following{/other_user}",
"gists_url": "https://api.github.com/users/SaeedNajafi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaeedNajafi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaeedNajafi/subscriptions",
"organizations_url": "https://api.github.com/users/SaeedNajafi/orgs",
"repos_url": "https://api.github.com/users/SaeedNajafi/repos",
"events_url": "https://api.github.com/users/SaeedNajafi/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaeedNajafi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,262,569,000 | 1,611,262,819,000 | 1,611,262,802,000 | NONE | null | null | null | Today, I am getting connection issues while loading a dataset and the metric.
```
Traceback (most recent call last):
File "src/train.py", line 180, in <module>
train_dataset, dev_dataset, test_dataset = create_race_dataset()
File "src/train.py", line 130, in create_race_dataset
train_dataset = load_dataset("race", "all", split="train")
File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/load.py", line 591, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 343, in cached_path
max_retries=download_config.max_retries,
File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 617, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/race/race.py
```
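In the meantime, one mitigation sketch: the traceback shows a `max_retries` field on the download config, so retries can at least be raised (the import path below is my assumption from the traceback; this will not help if the host is blocked outright):

```python
from datasets import load_dataset
from datasets.utils.file_utils import DownloadConfig

train_dataset = load_dataset(
    "race", "all", split="train",
    download_config=DownloadConfig(max_retries=5),
)
```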
Similarly, when loading a metric:
```
Traceback (most recent call last):
File "src/train.py", line 105, in <module>
rouge = datasets.load_metric("rouge")
File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/load.py", line 500, in load_metric
dataset=False,
File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 343, in cached_path
max_retries=download_config.max_retries,
File "/Users/saeed/Desktop/codes/repos/dreamscape-qa/env/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 617, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/metrics/rouge/rouge.py
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1764/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1763/comments | https://api.github.com/repos/huggingface/datasets/issues/1763/events | https://github.com/huggingface/datasets/pull/1763 | 791,389,763 | MDExOlB1bGxSZXF1ZXN0NTU5NDU3MTY1 | 1,763 | PAWS-X: Fix csv Dictreader splitting data on quotes | {
"login": "gowtham1997",
"id": 9641196,
"node_id": "MDQ6VXNlcjk2NDExOTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9641196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gowtham1997",
"html_url": "https://github.com/gowtham1997",
"followers_url": "https://api.github.com/users/gowtham1997/followers",
"following_url": "https://api.github.com/users/gowtham1997/following{/other_user}",
"gists_url": "https://api.github.com/users/gowtham1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gowtham1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gowtham1997/subscriptions",
"organizations_url": "https://api.github.com/users/gowtham1997/orgs",
"repos_url": "https://api.github.com/users/gowtham1997/repos",
"events_url": "https://api.github.com/users/gowtham1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/gowtham1997/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,253,261,000 | 1,611,310,473,000 | 1,611,310,425,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1763",
"html_url": "https://github.com/huggingface/datasets/pull/1763",
"diff_url": "https://github.com/huggingface/datasets/pull/1763.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1763.patch",
"merged_at": 1611310425000
} |
```python
from datasets import load_dataset
# load the English PAWS-X dataset
datasets = load_dataset('paws-x', 'en')
print(len(datasets['train'])) # outputs 49202 but official dataset has 49401 pairs
print(datasets['train'].unique('label')) # outputs [1, 0, -1] but labels are binary [0,1]
```
Changed `data = csv.DictReader(f, delimiter="\t")` to `data = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)` in the data loader so the csv module no longer splits fields on quote characters.
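After the change, re-running the check above gives the expected counts (a sketch; 49401 is the official figure quoted earlier):

```python
datasets = load_dataset('paws-x', 'en')
print(len(datasets['train']))             # 49401
print(datasets['train'].unique('label'))  # only the binary labels 0 and 1
```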
The results are as expected for all languages after the change. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1763/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1762/comments | https://api.github.com/repos/huggingface/datasets/issues/1762/events | https://github.com/huggingface/datasets/issues/1762 | 791,226,007 | MDU6SXNzdWU3OTEyMjYwMDc= | 1,762 | Unable to format dataset to CUDA Tensors | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,243,083,000 | 1,612,250,002,000 | 1,612,250,002,000 | CONTRIBUTOR | null | null | null | Hi,
I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.
I tried this, but Dataset doesn't support assignment:
```
columns=['input_ids', 'token_type_ids', 'attention_mask', 'start_positions','end_positions']
samples.set_format(type='torch', columns = columns)
for column in columns:
samples[column].to(torch.device(self.config.device))
```
There should be an option to do so, or if there is already a way to do this, please let me know.
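A sketch of what would presumably work if `set_format`'s extra keyword arguments are forwarded to the torch formatter (an assumption on my part; the docs change in #1768 is related):

```python
columns = ['input_ids', 'token_type_ids', 'attention_mask',
           'start_positions', 'end_positions']
samples.set_format(type='torch', columns=columns, device='cuda')
```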
Thanks,
Gunjan | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1762/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1761 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1761/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1761/comments | https://api.github.com/repos/huggingface/datasets/issues/1761/events | https://github.com/huggingface/datasets/pull/1761 | 791,150,858 | MDExOlB1bGxSZXF1ZXN0NTU5MjUyMzEw | 1,761 | Add SILICONE benchmark | {
"login": "eusip",
"id": 1551356,
"node_id": "MDQ6VXNlcjE1NTEzNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eusip",
"html_url": "https://github.com/eusip",
"followers_url": "https://api.github.com/users/eusip/followers",
"following_url": "https://api.github.com/users/eusip/following{/other_user}",
"gists_url": "https://api.github.com/users/eusip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eusip/subscriptions",
"organizations_url": "https://api.github.com/users/eusip/orgs",
"repos_url": "https://api.github.com/users/eusip/repos",
"events_url": "https://api.github.com/users/eusip/events{/privacy}",
"received_events_url": "https://api.github.com/users/eusip/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,239,352,000 | 1,612,449,168,000 | 1,611,669,031,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1761",
"html_url": "https://github.com/huggingface/datasets/pull/1761",
"diff_url": "https://github.com/huggingface/datasets/pull/1761.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1761.patch",
"merged_at": 1611669031000
} | My collaborators and I within the Affective Computing team at Telecom Paris would like to re-submit our spoken dialogue dataset for publication.
This is a new pull request relative to the [previously closed request](https://github.com/huggingface/datasets/pull/1712), which was reviewed by @lhoestq.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1761/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1760/comments | https://api.github.com/repos/huggingface/datasets/issues/1760/events | https://github.com/huggingface/datasets/pull/1760 | 791,110,857 | MDExOlB1bGxSZXF1ZXN0NTU5MjE3MjY0 | 1,760 | More tags | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,237,010,000 | 1,611,308,401,000 | 1,611,308,400,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1760",
"html_url": "https://github.com/huggingface/datasets/pull/1760",
"diff_url": "https://github.com/huggingface/datasets/pull/1760.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1760.patch",
"merged_at": 1611308400000
} | Since the hub v2 is going to be released soon, I figured it would be great to add the missing tags, at least for some of the reference datasets listed [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#write-the-loadingprocessing-code) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1760/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1759/comments | https://api.github.com/repos/huggingface/datasets/issues/1759/events | https://github.com/huggingface/datasets/issues/1759 | 790,992,226 | MDU6SXNzdWU3OTA5OTIyMjY= | 1,759 | wikipedia dataset incomplete | {
"login": "ChrisDelClea",
"id": 19912393,
"node_id": "MDQ6VXNlcjE5OTEyMzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/19912393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChrisDelClea",
"html_url": "https://github.com/ChrisDelClea",
"followers_url": "https://api.github.com/users/ChrisDelClea/followers",
"following_url": "https://api.github.com/users/ChrisDelClea/following{/other_user}",
"gists_url": "https://api.github.com/users/ChrisDelClea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChrisDelClea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChrisDelClea/subscriptions",
"organizations_url": "https://api.github.com/users/ChrisDelClea/orgs",
"repos_url": "https://api.github.com/users/ChrisDelClea/repos",
"events_url": "https://api.github.com/users/ChrisDelClea/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChrisDelClea/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,229,635,000 | 1,611,249,731,000 | 1,611,249,666,000 | NONE | null | null | null | Hey guys,
I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset.
Unfortunately, I found out that the German dataset is incomplete.
For reasons unknown to me, the number of inhabitants has been removed from many pages:
Thorey-sur-Ouche has 128 inhabitants according to the webpage (https://de.wikipedia.org/wiki/Thorey-sur-Ouche).
The pickle file, however, shows: "französische Gemeinde mit Einwohnern (Stand)" ("French commune with inhabitants (as of)").
Is it possible to fix this?
Best regards
Chris
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1759/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1758/comments | https://api.github.com/repos/huggingface/datasets/issues/1758/events | https://github.com/huggingface/datasets/issues/1758 | 790,626,116 | MDU6SXNzdWU3OTA2MjYxMTY= | 1,758 | dataset.search() (elastic) cannot reliably retrieve search results | {
"login": "afogarty85",
"id": 49048309,
"node_id": "MDQ6VXNlcjQ5MDQ4MzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/afogarty85",
"html_url": "https://github.com/afogarty85",
"followers_url": "https://api.github.com/users/afogarty85/followers",
"following_url": "https://api.github.com/users/afogarty85/following{/other_user}",
"gists_url": "https://api.github.com/users/afogarty85/gists{/gist_id}",
"starred_url": "https://api.github.com/users/afogarty85/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/afogarty85/subscriptions",
"organizations_url": "https://api.github.com/users/afogarty85/orgs",
"repos_url": "https://api.github.com/users/afogarty85/repos",
"events_url": "https://api.github.com/users/afogarty85/events{/privacy}",
"received_events_url": "https://api.github.com/users/afogarty85/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,195,997,000 | 1,611,275,150,000 | 1,611,275,150,000 | NONE | null | null | null | I am trying to use elastic search to retrieve the indices of items in the dataset in their precise order, given shuffled training indices.
The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run the search **twice** to get the right answer.
I am indexing data that looks like the following from the HF SQuAD 2.0 data set:
```
['57318658e6313a140071d02b',
'56f7165e3d8e2e1400e3733a',
'570e2f6e0b85d914000d7d21',
'5727e58aff5b5019007d97d0',
'5a3b5a503ff257001ab8441f',
'57262fab271a42140099d725']
```
To reproduce the issue, try:
```
from datasets import load_dataset, load_metric
from transformers import BertTokenizerFast, BertForQuestionAnswering
from elasticsearch import Elasticsearch
import numpy as np
import collections
from tqdm.auto import tqdm
import torch
import pandas as pd  # used by pd.read_csv below; missing from the original snippet
# from https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=941LPhDWeYv-
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
max_length = 384 # The maximum length of a feature (question and context)
doc_stride = 128 # The allowed overlap between two parts of the context when splitting is needed.
pad_on_right = tokenizer.padding_side == "right"
squad_v2 = True
# from https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=941LPhDWeYv-
def prepare_validation_features(examples):
# Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
# in one example possibly giving several features when a context is long, each of those features having a
# context that overlaps a bit the context of the previous feature.
tokenized_examples = tokenizer(
examples["question" if pad_on_right else "context"],
examples["context" if pad_on_right else "question"],
truncation="only_second" if pad_on_right else "only_first",
max_length=max_length,
stride=doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
# Since one example might give us several features if it has a long context, we need a map from a feature to
# its corresponding example. This key gives us just that.
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
# We keep the example_id that gave us this feature and we will store the offset mappings.
tokenized_examples["example_id"] = []
for i in range(len(tokenized_examples["input_ids"])):
# Grab the sequence corresponding to that example (to know what is the context and what is the question).
sequence_ids = tokenized_examples.sequence_ids(i)
context_index = 1 if pad_on_right else 0
# One example can give several spans, this is the index of the example containing this span of text.
sample_index = sample_mapping[i]
tokenized_examples["example_id"].append(examples["id"][sample_index])
# Set to None the offset_mapping that are not part of the context so it's easy to determine if a token
# position is part of the context or not.
tokenized_examples["offset_mapping"][i] = [
(list(o) if sequence_ids[k] == context_index else None)
for k, o in enumerate(tokenized_examples["offset_mapping"][i])
]
return tokenized_examples
# build base examples, features set of training data
shuffled_idx = pd.read_csv('https://raw.githubusercontent.com/afogarty85/temp/main/idx.csv')['idx'].to_list()
examples = load_dataset("squad_v2").shuffle(seed=1)['train']
features = load_dataset("squad_v2").shuffle(seed=1)['train'].map(
prepare_validation_features,
batched=True,
remove_columns=['answers', 'context', 'id', 'question', 'title'])
# reorder features by the training process
features = features.select(indices=shuffled_idx)
# get the example ids to match with the "example" data; get unique entries
id_list = list(dict.fromkeys(features['example_id']))
# now search for their index positions in the examples data set; load elastic search
es = Elasticsearch([{'host': 'localhost'}])
assert es.ping()  # fail fast if the local Elasticsearch instance is unreachable
# add an index to the id column for the examples
examples.add_elasticsearch_index(column='id')
# retrieve the example index
example_idx_k1 = [examples.search(index_name='id', query=i, k=1).indices for i in id_list]
example_idx_k1 = [item for sublist in example_idx_k1 for item in sublist]
example_idx_k2 = [examples.search(index_name='id', query=i, k=3).indices for i in id_list]
example_idx_k2 = [item for sublist in example_idx_k2 for item in sublist]
len(example_idx_k1) # should be 130319
len(example_idx_k2) # should be 130319
#trial 1 lengths:
# k=1: 130314
# k=3: 130319
# trial 2:
# just run k=3 first: 130310
# try k=1 after k=3: 130319
```
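One possible explanation worth testing (an assumption on my part): Elasticsearch indexing is near-real-time, so freshly indexed documents may not be visible to queries until the index has refreshed, which would explain why only the first search comes up short. Forcing a refresh right after indexing might make the first search reliable:
```python
from elasticsearch import Elasticsearch

es = Elasticsearch([{'host': 'localhost'}])
examples.add_elasticsearch_index(column='id')

# Force a refresh of all indices so newly indexed documents are immediately
# searchable; "_all" avoids having to know which index name was created.
es.indices.refresh(index="_all")
```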
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1758/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1757 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1757/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1757/comments | https://api.github.com/repos/huggingface/datasets/issues/1757/events | https://github.com/huggingface/datasets/issues/1757 | 790,466,509 | MDU6SXNzdWU3OTA0NjY1MDk= | 1,757 | FewRel | {
"login": "dspoka",
"id": 6183050,
"node_id": "MDQ6VXNlcjYxODMwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6183050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dspoka",
"html_url": "https://github.com/dspoka",
"followers_url": "https://api.github.com/users/dspoka/followers",
"following_url": "https://api.github.com/users/dspoka/following{/other_user}",
"gists_url": "https://api.github.com/users/dspoka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dspoka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dspoka/subscriptions",
"organizations_url": "https://api.github.com/users/dspoka/orgs",
"repos_url": "https://api.github.com/users/dspoka/repos",
"events_url": "https://api.github.com/users/dspoka/events{/privacy}",
"received_events_url": "https://api.github.com/users/dspoka/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,186,963,000 | 1,615,258,325,000 | 1,615,214,092,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** FewRel
- **Description:** Large-Scale Supervised Few-Shot Relation Classification Dataset
- **Paper:** @inproceedings{han2018fewrel,
title={FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation},
author={Han, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong},
booktitle={EMNLP},
year={2018}}
- **Data:** https://github.com/ProKil/FewRel
- **Motivation:** a relation extraction dataset that has been used by several state-of-the-art systems and should be incorporated.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1757/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1756 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1756/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1756/comments | https://api.github.com/repos/huggingface/datasets/issues/1756/events | https://github.com/huggingface/datasets/issues/1756 | 790,380,028 | MDU6SXNzdWU3OTAzODAwMjg= | 1,756 | Ccaligned multilingual translation dataset | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,181,124,000 | 1,614,594,981,000 | 1,614,594,981,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web documents and ensuring that corresponding language codes appeared in the URLs of the web documents. This pattern-matching approach yielded more than 100 million aligned documents paired with English. Recognizing that each English document was often aligned to multiple documents in different target languages, we can join on English documents to obtain aligned documents that directly pair two non-English documents (e.g., Arabic-French); a toy sketch of this join appears at the end of this issue.
- **Paper:** *link to the dataset paper if available*
- https://www.aclweb.org/anthology/2020.emnlp-main.480.pdf
- **Data:** *link to the Github repository or current dataset location*
- http://www.statmt.org/cc-aligned/
- **Motivation:** *what are some good reasons to have this dataset*
- The authors say it is a high-quality dataset.
- It is pretty large and includes many language pairs. It could be interesting to train mT5 on this dataset.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
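As a toy illustration (with hypothetical document IDs) of the join described in the description above:
```python
# Hypothetical alignment tables: each non-English document is keyed by the
# English document it was aligned with.
en_to_fr = {"en/doc-1": "fr/doc-7", "en/doc-2": "fr/doc-3"}
en_to_ar = {"en/doc-1": "ar/doc-4", "en/doc-3": "ar/doc-9"}

# Joining on the shared English keys yields direct Arabic-French pairs.
fr_ar_pairs = [(en_to_fr[k], en_to_ar[k]) for k in en_to_fr.keys() & en_to_ar.keys()]
print(fr_ar_pairs)  # [('fr/doc-7', 'ar/doc-4')]
```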
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1756/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1755/comments | https://api.github.com/repos/huggingface/datasets/issues/1755/events | https://github.com/huggingface/datasets/issues/1755 | 790,324,734 | MDU6SXNzdWU3OTAzMjQ3MzQ= | 1,755 | Using select/reordering datasets slows operations down immensely | {
"login": "afogarty85",
"id": 49048309,
"node_id": "MDQ6VXNlcjQ5MDQ4MzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/afogarty85",
"html_url": "https://github.com/afogarty85",
"followers_url": "https://api.github.com/users/afogarty85/followers",
"following_url": "https://api.github.com/users/afogarty85/following{/other_user}",
"gists_url": "https://api.github.com/users/afogarty85/gists{/gist_id}",
"starred_url": "https://api.github.com/users/afogarty85/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/afogarty85/subscriptions",
"organizations_url": "https://api.github.com/users/afogarty85/orgs",
"repos_url": "https://api.github.com/users/afogarty85/repos",
"events_url": "https://api.github.com/users/afogarty85/events{/privacy}",
"received_events_url": "https://api.github.com/users/afogarty85/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,177,132,000 | 1,611,180,219,000 | 1,611,180,219,000 | NONE | null | null | null | I am using portions of HF's helpful work in preparing / scoring the SQuAD 2.0 data. The problem I have is that after using `select` to re-order the dataset, computations slow down immensely: the total scoring process on 131k training examples, which would otherwise take maybe 3 minutes, now takes over an hour.
The below example should be reproducible and I have run myself down this path because I want to use HF's scoring functions and helpful data preparation, but use my own trainer. The training process uses shuffle and therefore the order I trained on no longer matches the original data set order. So, to score my results correctly, the original data set needs to match the order of the training. This requires that I: (1) collect the index for each row of data emitted during training, and (2) use this index information to re-order the datasets correctly so the orders match when I go to score.
The problem is that the dataset class starts performing very poorly as soon as you start manipulating its row order at scale.
```
from datasets import load_dataset, load_metric
from transformers import BertTokenizerFast, BertForQuestionAnswering
from elasticsearch import Elasticsearch
import numpy as np
import collections
from tqdm.auto import tqdm
import torch
# from https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=941LPhDWeYv-
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
max_length = 384 # The maximum length of a feature (question and context)
doc_stride = 128 # The allowed overlap between two parts of the context when splitting is needed.
pad_on_right = tokenizer.padding_side == "right"
squad_v2 = True
# from https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=941LPhDWeYv-
def prepare_validation_features(examples):
# Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
# in one example possibly giving several features when a context is long, each of those features having a
# context that overlaps a bit the context of the previous feature.
tokenized_examples = tokenizer(
examples["question" if pad_on_right else "context"],
examples["context" if pad_on_right else "question"],
truncation="only_second" if pad_on_right else "only_first",
max_length=max_length,
stride=doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
# Since one example might give us several features if it has a long context, we need a map from a feature to
# its corresponding example. This key gives us just that.
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
# We keep the example_id that gave us this feature and we will store the offset mappings.
tokenized_examples["example_id"] = []
for i in range(len(tokenized_examples["input_ids"])):
# Grab the sequence corresponding to that example (to know what is the context and what is the question).
sequence_ids = tokenized_examples.sequence_ids(i)
context_index = 1 if pad_on_right else 0
# One example can give several spans, this is the index of the example containing this span of text.
sample_index = sample_mapping[i]
tokenized_examples["example_id"].append(examples["id"][sample_index])
# Set to None the offset_mapping that are not part of the context so it's easy to determine if a token
# position is part of the context or not.
tokenized_examples["offset_mapping"][i] = [
(list(o) if sequence_ids[k] == context_index else None)
for k, o in enumerate(tokenized_examples["offset_mapping"][i])
]
return tokenized_examples
# from https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=941LPhDWeYv-
def postprocess_qa_predictions(examples, features, starting_logits, ending_logits, n_best_size = 20, max_answer_length = 30):
all_start_logits, all_end_logits = starting_logits, ending_logits
# Build a map example to its corresponding features.
example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
features_per_example = collections.defaultdict(list)
for i, feature in enumerate(features):
features_per_example[example_id_to_index[feature["example_id"]]].append(i)
# The dictionaries we have to fill.
predictions = collections.OrderedDict()
# Logging.
print(f"Post-processing {len(examples)} example predictions split into {len(features)} features.")
# Let's loop over all the examples!
for example_index, example in enumerate(tqdm(examples)):
# Those are the indices of the features associated to the current example.
feature_indices = features_per_example[example_index]
min_null_score = None # Only used if squad_v2 is True.
valid_answers = []
context = example["context"]
# Looping through all the features associated to the current example.
for feature_index in feature_indices:
# We grab the predictions of the model for this feature.
start_logits = all_start_logits[feature_index]
end_logits = all_end_logits[feature_index]
# This is what will allow us to map some the positions in our logits to span of texts in the original
# context.
offset_mapping = features[feature_index]["offset_mapping"]
# Update minimum null prediction.
cls_index = features[feature_index]["input_ids"].index(tokenizer.cls_token_id)
feature_null_score = start_logits[cls_index] + end_logits[cls_index]
if min_null_score is None or min_null_score < feature_null_score:
min_null_score = feature_null_score
# Go through all possibilities for the `n_best_size` greater start and end logits.
start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
for start_index in start_indexes:
for end_index in end_indexes:
# Don't consider out-of-scope answers, either because the indices are out of bounds or correspond
# to part of the input_ids that are not in the context.
if (
start_index >= len(offset_mapping)
or end_index >= len(offset_mapping)
or offset_mapping[start_index] is None
or offset_mapping[end_index] is None
):
continue
# Don't consider answers with a length that is either < 0 or > max_answer_length.
if end_index < start_index or end_index - start_index + 1 > max_answer_length:
continue
start_char = offset_mapping[start_index][0]
end_char = offset_mapping[end_index][1]
valid_answers.append(
{
"score": start_logits[start_index] + end_logits[end_index],
"text": context[start_char: end_char]
}
)
if len(valid_answers) > 0:
best_answer = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[0]
else:
# In the very rare edge case we have not a single non-null prediction, we create a fake prediction to avoid
# failure.
best_answer = {"text": "", "score": 0.0}
# Let's pick our final answer: the best one or the null answer (only for squad_v2)
if not squad_v2:
predictions[example["id"]] = best_answer["text"]
else:
answer = best_answer["text"] if best_answer["score"] > min_null_score else ""
predictions[example["id"]] = answer
return predictions
# build base examples, features from training data
examples = load_dataset("squad_v2").shuffle(seed=5)['train']
features = load_dataset("squad_v2").shuffle(seed=5)['train'].map(
prepare_validation_features,
batched=True,
remove_columns=['answers', 'context', 'id', 'question', 'title'])
# sim some shuffled training indices that we want to use to re-order the data to compare how we did
shuffle_idx = np.arange(0, 131754)
np.random.shuffle(shuffle_idx)
# create a new dataset with rows selected following the training shuffle
features = features.select(indices=shuffle_idx)
# get unique example ids to match with the "example" data
id_list = list(dict.fromkeys(features['example_id']))
# now search for their index positions; load elastic search
es = Elasticsearch([{'host': 'localhost'}])
assert es.ping()  # fail fast if the local Elasticsearch instance is unreachable
# add an index to the id column for the examples
examples.add_elasticsearch_index(column='id')
# search the examples for their index position
example_idx = [examples.search(index_name='id', query=i, k=1).indices for i in id_list]
# drop the elastic search
examples.drop_index(index_name='id')
# put examples in the right order
examples = examples.select(indices=example_idx)
# generate some fake data
logits = {'starting_logits': torch.randn(131754, 384), 'ending_logits': torch.randn(131754, 384)}
def score_squad(logits, n_best_size, max_answer):
# proceed with QA calculation
final_predictions = postprocess_qa_predictions(examples=examples,
features=features,
starting_logits=logits['starting_logits'],
ending_logits=logits['ending_logits'],
n_best_size=n_best_size,
max_answer_length=max_answer)
metric = load_metric("squad_v2")
formatted_predictions = [{"id": k, "prediction_text": v, "no_answer_probability": 0.0} for k, v in final_predictions.items()]
references = [{"id": ex["id"], "answers": ex["answers"]} for ex in examples]
metrics = metric.compute(predictions=formatted_predictions, references=references)
return metrics
metrics = score_squad(logits, n_best_size=20, max_answer=30)
```
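A workaround that might be worth trying (my assumption, not a confirmed fix): after a heavily out-of-order `select`, materialize the new row order with `flatten_indices()` so that later reads are contiguous again:
```python
# Rewrite the underlying tables in the new order once, so that subsequent
# row accesses are sequential instead of random.
features = features.select(indices=shuffle_idx).flatten_indices()
examples = examples.select(indices=example_idx).flatten_indices()
```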
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1755/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1754/comments | https://api.github.com/repos/huggingface/datasets/issues/1754/events | https://github.com/huggingface/datasets/pull/1754 | 789,881,730 | MDExOlB1bGxSZXF1ZXN0NTU4MTU5NjEw | 1,754 | Use a config id in the cache directory names for custom configs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,141,060,000 | 1,611,565,927,000 | 1,611,565,926,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1754",
"html_url": "https://github.com/huggingface/datasets/pull/1754",
"diff_url": "https://github.com/huggingface/datasets/pull/1754.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1754.patch",
"merged_at": 1611565926000
} | As noticed by @JetRunner there were some issues when trying to generate a dataset using a custom config that is based on an existing config.
For example in the following code the `mnli_custom` would reuse the cache used to create `mnli` instead of generating a new dataset with the new label classes:
```python
from datasets import load_dataset
mnli = load_dataset("glue", "mnli")
mnli_custom = load_dataset("glue", "mnli", label_classes=["contradiction", "entailment", "neutral"])
```
I fixed that by extending the cache directory definition of a dataset that is being generated.
Instead of using the config name in the cache directory name, I switched to using a `config_id`.
By default it is equal to the config name.
However the name of a config is not sufficient to have a unique identifier for the dataset being generated since it doesn't take into account:
- the config kwargs that can be used to overwrite attributes
- the custom features used to write the dataset
- the data_files for json/text/csv/pandas datasets
Therefore the config id is just the config name with an optional suffix based on these.
In particular taking into account the config kwargs fixes the issue with the `label_classes` above.
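For intuition, here is a simplified sketch of the idea (not the actual implementation; the real suffix also covers the custom features and data files listed above):
```python
import hashlib
import json

def make_config_id(config_name: str, config_kwargs: dict) -> str:
    """Toy illustration: suffix the config name when custom kwargs are used."""
    if not config_kwargs:
        return config_name
    digest = hashlib.sha256(
        json.dumps(config_kwargs, sort_keys=True).encode("utf-8")
    ).hexdigest()[:16]
    return f"{config_name}-{digest}"

print(make_config_id("mnli", {}))
# -> "mnli"
print(make_config_id("mnli", {"label_classes": ["contradiction", "entailment", "neutral"]}))
# -> "mnli-<hash>", so it no longer collides with the plain "mnli" cache
```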
I completed the current test cases by adding the case that was missing: overwriting an already existing config. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1754/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1754/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1753/comments | https://api.github.com/repos/huggingface/datasets/issues/1753/events | https://github.com/huggingface/datasets/pull/1753 | 789,867,685 | MDExOlB1bGxSZXF1ZXN0NTU4MTQ3Njkx | 1,753 | fix comet citations | {
"login": "ricardorei",
"id": 17256847,
"node_id": "MDQ6VXNlcjE3MjU2ODQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ricardorei",
"html_url": "https://github.com/ricardorei",
"followers_url": "https://api.github.com/users/ricardorei/followers",
"following_url": "https://api.github.com/users/ricardorei/following{/other_user}",
"gists_url": "https://api.github.com/users/ricardorei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ricardorei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ricardorei/subscriptions",
"organizations_url": "https://api.github.com/users/ricardorei/orgs",
"repos_url": "https://api.github.com/users/ricardorei/repos",
"events_url": "https://api.github.com/users/ricardorei/events{/privacy}",
"received_events_url": "https://api.github.com/users/ricardorei/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,139,958,000 | 1,611,153,570,000 | 1,611,153,570,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1753",
"html_url": "https://github.com/huggingface/datasets/pull/1753",
"diff_url": "https://github.com/huggingface/datasets/pull/1753.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1753.patch",
"merged_at": 1611153570000
} | I realized COMET citations were not showing on the Hugging Face metrics page:
<img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105164848-8b9da900-5b0d-11eb-9e20-a38f559d2037.png">
This pull request is intended to fix that.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1753/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1752/comments | https://api.github.com/repos/huggingface/datasets/issues/1752/events | https://github.com/huggingface/datasets/pull/1752 | 789,822,459 | MDExOlB1bGxSZXF1ZXN0NTU4MTA5NTA5 | 1,752 | COMET metric citation | {
"login": "ricardorei",
"id": 17256847,
"node_id": "MDQ6VXNlcjE3MjU2ODQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ricardorei",
"html_url": "https://github.com/ricardorei",
"followers_url": "https://api.github.com/users/ricardorei/followers",
"following_url": "https://api.github.com/users/ricardorei/following{/other_user}",
"gists_url": "https://api.github.com/users/ricardorei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ricardorei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ricardorei/subscriptions",
"organizations_url": "https://api.github.com/users/ricardorei/orgs",
"repos_url": "https://api.github.com/users/ricardorei/repos",
"events_url": "https://api.github.com/users/ricardorei/events{/privacy}",
"received_events_url": "https://api.github.com/users/ricardorei/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,136,483,000 | 1,611,138,427,000 | 1,611,138,302,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1752",
"html_url": "https://github.com/huggingface/datasets/pull/1752",
"diff_url": "https://github.com/huggingface/datasets/pull/1752.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1752.patch",
"merged_at": null
} | In my last pull request to add the COMET metric, the citations were not following the usual "format". Because of that they were not correctly displayed on the website:
<img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105158000-686efb80-5b05-11eb-8bb0-9c85fdac2938.png">
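For reference, the convention being matched here is, as far as I can tell from other metric scripts in the repo, a plain BibTeX entry in the module-level `_CITATION` string:
```python
# Presumed convention (based on other metric scripts): a plain BibTeX block.
_CITATION = """\
@inproceedings{rei-etal-2020-comet,
    title = "{COMET}: A Neural Framework for {MT} Evaluation",
    author = "Rei, Ricardo and Stewart, Craig and Farinha, Ana C and Lavie, Alon",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    year = "2020",
}
"""
```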
This pull request is only intended to fix that. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1752/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1751/comments | https://api.github.com/repos/huggingface/datasets/issues/1751/events | https://github.com/huggingface/datasets/pull/1751 | 789,232,980 | MDExOlB1bGxSZXF1ZXN0NTU3NjA1ODE2 | 1,751 | Updated README for the Social Bias Frames dataset | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,078,780,000 | 1,611,154,612,000 | 1,611,154,612,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1751",
"html_url": "https://github.com/huggingface/datasets/pull/1751",
"diff_url": "https://github.com/huggingface/datasets/pull/1751.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1751.patch",
"merged_at": 1611154612000
} | See the updated card at https://github.com/mcmillanmajora/datasets/tree/add-SBIC-card/datasets/social_bias_frames. I incorporated information from the [SBIC data statement](https://homes.cs.washington.edu/~msap/social-bias-frames/DATASTATEMENT.html), paper, and the corpus README file included with the dataset download. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1751/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1750/comments | https://api.github.com/repos/huggingface/datasets/issues/1750/events | https://github.com/huggingface/datasets/pull/1750 | 788,668,085 | MDExOlB1bGxSZXF1ZXN0NTU3MTM1MzM1 | 1,750 | Fix typo in README.md of cnn_dailymail | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,611,025,565,000 | 1,611,054,449,000 | 1,611,049,723,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1750",
"html_url": "https://github.com/huggingface/datasets/pull/1750",
"diff_url": "https://github.com/huggingface/datasets/pull/1750.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1750.patch",
"merged_at": 1611049723000
} | When I read the README.md of the `CNN/DailyMail Dataset`, there seems to be a typo: `CCN` instead of `CNN`.
I am afraid this is a trivial matter, but I would like to make a suggestion for revision. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1750/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1749 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1749/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1749/comments | https://api.github.com/repos/huggingface/datasets/issues/1749/events | https://github.com/huggingface/datasets/pull/1749 | 788,476,639 | MDExOlB1bGxSZXF1ZXN0NTU2OTgxMDc5 | 1,749 | Added metadata and correct splits for swda. | {
"login": "gmihaila",
"id": 22454783,
"node_id": "MDQ6VXNlcjIyNDU0Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gmihaila",
"html_url": "https://github.com/gmihaila",
"followers_url": "https://api.github.com/users/gmihaila/followers",
"following_url": "https://api.github.com/users/gmihaila/following{/other_user}",
"gists_url": "https://api.github.com/users/gmihaila/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gmihaila/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gmihaila/subscriptions",
"organizations_url": "https://api.github.com/users/gmihaila/orgs",
"repos_url": "https://api.github.com/users/gmihaila/repos",
"events_url": "https://api.github.com/users/gmihaila/events{/privacy}",
"received_events_url": "https://api.github.com/users/gmihaila/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,994,992,000 | 1,611,948,952,000 | 1,611,945,488,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1749",
"html_url": "https://github.com/huggingface/datasets/pull/1749",
"diff_url": "https://github.com/huggingface/datasets/pull/1749.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1749.patch",
"merged_at": 1611945488000
} | Switchboard Dialog Act Corpus
I made some changes following @bhavitvyamalik's recommendation in #1678:
* Contains all metadata.
* Used official implementation from the [/swda](https://github.com/cgpotts/swda) repo.
* Added the official train and test splits used in [Stolcke et al. (2000)](https://web.stanford.edu/~jurafsky/ws97) and the validation split used in [Probabilistic-RNN-DA-Classifier](https://github.com/NathanDuran/Probabilistic-RNN-DA-Classifier). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1749/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1749/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1748 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1748/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1748/comments | https://api.github.com/repos/huggingface/datasets/issues/1748/events | https://github.com/huggingface/datasets/pull/1748 | 788,431,642 | MDExOlB1bGxSZXF1ZXN0NTU2OTQ0NDEx | 1,748 | add Stuctured Argument Extraction for Korean dataset | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,990,059,000 | 1,631,897,598,000 | 1,611,055,618,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1748",
"html_url": "https://github.com/huggingface/datasets/pull/1748",
"diff_url": "https://github.com/huggingface/datasets/pull/1748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1748.patch",
"merged_at": 1611055618000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1748/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1747 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1747/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1747/comments | https://api.github.com/repos/huggingface/datasets/issues/1747/events | https://github.com/huggingface/datasets/issues/1747 | 788,299,775 | MDU6SXNzdWU3ODgyOTk3NzU= | 1,747 | datasets slicing with seed | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,978,935,000 | 1,610,981,134,000 | null | NONE | null | null | null | Hi
I need to slice a dataset with a random seed, and I looked into the documentation here: https://huggingface.co/docs/datasets/splits.html
I could not find a seed option. Could you please assist me with how I can get a slice for different seeds?
Thank you.
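For context, here is a minimal sketch of the kind of thing I am trying to achieve, assuming `shuffle(seed=...)` followed by `select` is the right tool for it:
```python
from datasets import load_dataset

train = load_dataset("imdb", split="train")

# A deterministic "random slice": shuffle with a fixed seed, then take the
# first 1000 rows. Changing the seed changes which rows land in the slice.
for seed in (0, 1, 2):
    subset = train.shuffle(seed=seed).select(range(1000))
    print(seed, len(subset))
```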
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1747/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1746/comments | https://api.github.com/repos/huggingface/datasets/issues/1746/events | https://github.com/huggingface/datasets/pull/1746 | 788,188,184 | MDExOlB1bGxSZXF1ZXN0NTU2NzQxMjIw | 1,746 | Fix release conda worflow | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,969,350,000 | 1,610,969,484,000 | 1,610,969,483,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1746",
"html_url": "https://github.com/huggingface/datasets/pull/1746",
"diff_url": "https://github.com/huggingface/datasets/pull/1746.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1746.patch",
"merged_at": 1610969483000
} | The current workflow YAML file is not valid according to https://github.com/huggingface/datasets/actions/runs/487638110 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1746/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1745 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1745/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1745/comments | https://api.github.com/repos/huggingface/datasets/issues/1745/events | https://github.com/huggingface/datasets/issues/1745 | 787,838,256 | MDU6SXNzdWU3ODc4MzgyNTY= | 1,745 | difference between wsc and wsc.fixed for superglue | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,931,019,000 | 1,610,967,763,000 | 1,610,931,574,000 | NONE | null | null | null | Hi
I see two versions of WSC in SuperGLUE (`wsc` and `wsc.fixed`), and I am not sure what the differences are or which one is the original. Could you help clarify the differences? Thanks @lhoestq
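For context, here is how I am comparing the two configs side by side (my working assumption, still to be confirmed, is that `wsc.fixed` is the variant with corrected span annotations, while `wsc` matches the original release):
```python
from datasets import load_dataset

wsc = load_dataset("super_glue", "wsc", split="train")
wsc_fixed = load_dataset("super_glue", "wsc.fixed", split="train")

# Compare the first example of each config to spot annotation differences.
print(wsc[0])
print(wsc_fixed[0])
```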
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1745/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1744/comments | https://api.github.com/repos/huggingface/datasets/issues/1744/events | https://github.com/huggingface/datasets/pull/1744 | 787,649,811 | MDExOlB1bGxSZXF1ZXN0NTU2MzA0MjU4 | 1,744 | Add missing "brief" entries to reuters | {
"login": "jbragg",
"id": 2238344,
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbragg",
"html_url": "https://github.com/jbragg",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"repos_url": "https://api.github.com/users/jbragg/repos",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,870,329,000 | 1,610,969,169,000 | 1,610,969,169,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1744",
"html_url": "https://github.com/huggingface/datasets/pull/1744",
"diff_url": "https://github.com/huggingface/datasets/pull/1744.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1744.patch",
"merged_at": 1610969169000
} | This brings the number of examples for ModApte to match the stated `Training set (9,603 docs)...Test Set (3,299 docs)` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1744/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1743 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1743/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1743/comments | https://api.github.com/repos/huggingface/datasets/issues/1743/events | https://github.com/huggingface/datasets/issues/1743 | 787,631,412 | MDU6SXNzdWU3ODc2MzE0MTI= | 1,743 | Issue while Creating Custom Metric | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,866,874,000 | 1,611,333,900,000 | null | CONTRIBUTOR | null | null | null | Hi Team,
I am trying to create a custom metric for my training as follows, where f1 is my own metric:
```python
def _info(self):
    # TODO: Specifies the datasets.MetricInfo object
    return datasets.MetricInfo(
        # This is the description that will appear on the metrics page.
        description=_DESCRIPTION,
        citation=_CITATION,
        inputs_description=_KWARGS_DESCRIPTION,
        # This defines the format of each prediction and reference
        features=datasets.Features({
            "predictions": datasets.Sequence(datasets.Value("int32")),
            "references": datasets.Sequence(datasets.Value("int32")),
            "offset_mapping": datasets.Sequence(datasets.Value("int32")),
            "text": datasets.Sequence(datasets.Value("string")),
            "ground": datasets.Sequence(datasets.Value("int32")),
        }),
        # Homepage of the metric for documentation
        homepage="http://metric.homepage",
        # Additional links to the codebase or references
        codebase_urls=["http://github.com/path/to/codebase/of/new_metric"],
        reference_urls=["http://path.to.reference.url/new_metric"],
    )

def _compute(self, predictions, references, text, offset_mapping, ground):
    pred_spans = []
    for i, preds in enumerate(predictions):
        current_preds = []
        for j, token_pred in enumerate(preds):
            # Collect the character offsets of every token predicted positive.
            if token_pred > 0.5:
                current_preds += list(range(offset_mapping[i][j][0], offset_mapping[i][j][1]))
        pred_spans.append(current_preds)
    return {
        "Token Wise F1": f1_score(references, predictions, labels=[0, 1]),
        "Offset Wise F1": np.mean([f1(preds, gold) for preds, gold in zip(pred_spans, ground)]),
    }
```
I believe this is not correct, but that's not the issue I am facing right now. I get this error:
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-144-ed7349b50821> in <module>()
----> 1 new_metric.compute(predictions=inputs["labels"],references=inputs["labels"], text=inputs["text"], offset_mapping=inputs["offset_mapping"],ground=inputs["ground"] )
2 frames
/usr/local/lib/python3.6/dist-packages/datasets/features.py in encode_batch(self, batch)
802 encoded_batch = {}
803 if set(batch) != set(self):
--> 804 print(batch)
805 print(self)
806 raise ValueError("Column mismatch between batch {} and features {}".format(set(batch), set(self)))
ValueError: Column mismatch between batch {'references', 'predictions'} and features {'ground', 'predictions', 'offset_mapping', 'text', 'references'}
```
On checking the features.py file, I see the call is made from add_batch() in metrics.py which only takes in predictions and references.
How do I make my custom metric work? Will it work with a trainer even if I am able to make this metric work?
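A possible workaround, hedged and inferred from the traceback above: `compute` only routes `predictions`/`references` through `add_batch` (and hence through `Features.encode_batch`), and forwards any other keyword arguments to `_compute` directly. So `features` could be limited to the two buffered columns, with the remaining inputs passed as plain keyword arguments — an untested sketch:
```python
# Sketch only — keep `features` limited to the columns add_batch() buffers:
features = datasets.Features({
    "predictions": datasets.Sequence(datasets.Value("int32")),
    "references": datasets.Sequence(datasets.Value("int32")),
})

# The extra inputs then reach _compute() as **kwargs:
new_metric.compute(
    predictions=inputs["labels"],
    references=inputs["labels"],
    text=inputs["text"],
    offset_mapping=inputs["offset_mapping"],
    ground=inputs["ground"],
)
```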
Thanks,
Gunjan | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1743/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1742/comments | https://api.github.com/repos/huggingface/datasets/issues/1742/events | https://github.com/huggingface/datasets/pull/1742 | 787,623,640 | MDExOlB1bGxSZXF1ZXN0NTU2MjgyMDYw | 1,742 | Add GLUE Compat (compatible with transformers<3.5.0) | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,862,865,000 | 1,617,021,810,000 | 1,617,021,810,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1742",
"html_url": "https://github.com/huggingface/datasets/pull/1742",
"diff_url": "https://github.com/huggingface/datasets/pull/1742.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1742.patch",
"merged_at": null
} | Link to our discussion on Slack (HF internal)
https://huggingface.slack.com/archives/C014N4749J9/p1609668119337400
The next step is to add a compatible option in the new `run_glue.py`
I duplicated `glue` and made the following changes:
1. Change the name to `glue_compat`.
2. Change the label assignments for MNLI and AX (a hedged illustration follows below).
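For context, a hedged illustration of the label mismatch this compat config addresses (orders as commonly cited; verify against the actual diff):
```python
# Illustrative only — not copied from the PR diff.
DATASETS_GLUE_MNLI = ["entailment", "neutral", "contradiction"]     # datasets "glue" ClassLabel order
OLD_TRANSFORMERS_MNLI = ["contradiction", "entailment", "neutral"]  # transformers < 3.5.0 processors

# A compat config re-maps ids so checkpoints fine-tuned with the old order still line up:
remap = {i: OLD_TRANSFORMERS_MNLI.index(name) for i, name in enumerate(DATASETS_GLUE_MNLI)}
```
 | {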
"url": "https://api.github.com/repos/huggingface/datasets/issues/1742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1742/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1741/comments | https://api.github.com/repos/huggingface/datasets/issues/1741/events | https://github.com/huggingface/datasets/issues/1741 | 787,327,060 | MDU6SXNzdWU3ODczMjcwNjA= | 1,741 | error when run fine_tuning on text_classification | {
"login": "XiaoYang66",
"id": 43234824,
"node_id": "MDQ6VXNlcjQzMjM0ODI0",
"avatar_url": "https://avatars.githubusercontent.com/u/43234824?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XiaoYang66",
"html_url": "https://github.com/XiaoYang66",
"followers_url": "https://api.github.com/users/XiaoYang66/followers",
"following_url": "https://api.github.com/users/XiaoYang66/following{/other_user}",
"gists_url": "https://api.github.com/users/XiaoYang66/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XiaoYang66/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XiaoYang66/subscriptions",
"organizations_url": "https://api.github.com/users/XiaoYang66/orgs",
"repos_url": "https://api.github.com/users/XiaoYang66/repos",
"events_url": "https://api.github.com/users/XiaoYang66/events{/privacy}",
"received_events_url": "https://api.github.com/users/XiaoYang66/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,763,799,000 | 1,610,764,768,000 | 1,610,764,758,000 | NONE | null | null | null | dataset:sem_eval_2014_task_1
pretrained_model:bert-base-uncased
error description:
When I use these resources to fine-tune a text classification model on sem_eval_2014_task_1, there is always a problem (the error occurs with other datasets too). I followed the colab code (url: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb#scrollTo=TlqNaB8jIrJW).
The error is like this:
`File "train.py", line 69, in <module>
trainer.train()
File "/home/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/transformers/trainer.py", line 784, in train
for step, inputs in enumerate(epoch_iterator):
File "/home/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/home/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
KeyError: 2`
This is my code:
```python
dataset_name = 'sem_eval_2014_task_1'
num_labels_size = 3
batch_size = 4
model_checkpoint = 'bert-base-uncased'
number_train_epoch = 5

def tokenize(batch):
    return tokenizer(batch['premise'], batch['hypothesis'], truncation=True)

def compute_metrics(pred):
    labels = pred.label_ids
    preds = pred.predictions.argmax(-1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='micro')
    acc = accuracy_score(labels, preds)
    return {
        'accuracy': acc,
        'f1': f1,
        'precision': precision,
        'recall': recall
    }

model = BertForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels_size)
tokenizer = BertTokenizerFast.from_pretrained(model_checkpoint, use_fast=True)
train_dataset = load_dataset(dataset_name, split='train')
test_dataset = load_dataset(dataset_name, split='test')
train_encoded_dataset = train_dataset.map(tokenize, batched=True)
test_encoded_dataset = test_dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir='./results',
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=number_train_epoch,
    weight_decay=0.01,
    do_predict=True,
)
trainer = Trainer(
    model=model,
    args=args,
    compute_metrics=compute_metrics,
    train_dataset=train_encoded_dataset,
    eval_dataset=test_encoded_dataset,
    tokenizer=tokenizer
)
trainer.train()
trainer.evaluate()
```
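A hedged guess at the `KeyError: 2` (not confirmed in this thread): `sem_eval_2014_task_1` keeps its label in a column named `entailment_judgment`, while `Trainer` expects a `label`/`labels` column and drops columns it doesn't recognize, which can leave the DataLoader unable to index the examples. A minimal thing to try before building the `Trainer` (`rename_column` per recent `datasets` versions; older ones only have the in-place `rename_column_`):
```python
# Column name per the sem_eval_2014_task_1 schema; adjust if it differs.
train_encoded_dataset = train_encoded_dataset.rename_column("entailment_judgment", "label")
test_encoded_dataset = test_encoded_dataset.rename_column("entailment_judgment", "label")
```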
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1741/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1740 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1740/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1740/comments | https://api.github.com/repos/huggingface/datasets/issues/1740/events | https://github.com/huggingface/datasets/pull/1740 | 787,264,605 | MDExOlB1bGxSZXF1ZXN0NTU2MDA5NjM1 | 1,740 | add id_liputan6 dataset | {
"login": "cahya-wirawan",
"id": 7669893,
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cahya-wirawan",
"html_url": "https://github.com/cahya-wirawan",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,751,514,000 | 1,611,150,086,000 | 1,611,150,086,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1740",
"html_url": "https://github.com/huggingface/datasets/pull/1740",
"diff_url": "https://github.com/huggingface/datasets/pull/1740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1740.patch",
"merged_at": 1611150086000
} | id_liputan6 is a large-scale Indonesian summarization dataset. The articles were harvested from an online news portal, yielding 215,827 document-summary pairs: https://arxiv.org/abs/2011.00679 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1740/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1740/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1739/comments | https://api.github.com/repos/huggingface/datasets/issues/1739/events | https://github.com/huggingface/datasets/pull/1739 | 787,219,138 | MDExOlB1bGxSZXF1ZXN0NTU1OTY5Njgx | 1,739 | fixes and improvements for the WebNLG loader | {
"login": "Shimorina",
"id": 9607332,
"node_id": "MDQ6VXNlcjk2MDczMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9607332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shimorina",
"html_url": "https://github.com/Shimorina",
"followers_url": "https://api.github.com/users/Shimorina/followers",
"following_url": "https://api.github.com/users/Shimorina/following{/other_user}",
"gists_url": "https://api.github.com/users/Shimorina/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shimorina/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shimorina/subscriptions",
"organizations_url": "https://api.github.com/users/Shimorina/orgs",
"repos_url": "https://api.github.com/users/Shimorina/repos",
"events_url": "https://api.github.com/users/Shimorina/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shimorina/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,747,123,000 | 1,611,930,846,000 | 1,611,917,583,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1739",
"html_url": "https://github.com/huggingface/datasets/pull/1739",
"diff_url": "https://github.com/huggingface/datasets/pull/1739.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1739.patch",
"merged_at": 1611917583000
} | - fixes test sets loading in v3.0
- adds additional fields for v3.0_ru
- adds info to the WebNLG data card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1739/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1738/comments | https://api.github.com/repos/huggingface/datasets/issues/1738/events | https://github.com/huggingface/datasets/pull/1738 | 786,068,440 | MDExOlB1bGxSZXF1ZXN0NTU0OTk2NDU4 | 1,738 | Conda support | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,637,085,000 | 1,610,705,300,000 | 1,610,705,299,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1738",
"html_url": "https://github.com/huggingface/datasets/pull/1738",
"diff_url": "https://github.com/huggingface/datasets/pull/1738.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1738.patch",
"merged_at": 1610705298000
} | Will push a new version on anaconda cloud every time a tag starting with `v` is pushed (like `v1.2.2`).
Will appear here: https://anaconda.org/huggingface/datasets
Depends on `conda-forge` for now, so the following is required for installation:
```
conda install -c huggingface -c conda-forge datasets
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1738/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1738/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1737 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1737/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1737/comments | https://api.github.com/repos/huggingface/datasets/issues/1737/events | https://github.com/huggingface/datasets/pull/1737 | 785,606,286 | MDExOlB1bGxSZXF1ZXN0NTU0NjA2ODg5 | 1,737 | update link in TLC to be github links | {
"login": "chameleonTK",
"id": 6429850,
"node_id": "MDQ6VXNlcjY0Mjk4NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6429850?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chameleonTK",
"html_url": "https://github.com/chameleonTK",
"followers_url": "https://api.github.com/users/chameleonTK/followers",
"following_url": "https://api.github.com/users/chameleonTK/following{/other_user}",
"gists_url": "https://api.github.com/users/chameleonTK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chameleonTK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chameleonTK/subscriptions",
"organizations_url": "https://api.github.com/users/chameleonTK/orgs",
"repos_url": "https://api.github.com/users/chameleonTK/repos",
"events_url": "https://api.github.com/users/chameleonTK/events{/privacy}",
"received_events_url": "https://api.github.com/users/chameleonTK/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,592,561,000 | 1,610,619,924,000 | 1,610,619,924,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1737",
"html_url": "https://github.com/huggingface/datasets/pull/1737",
"diff_url": "https://github.com/huggingface/datasets/pull/1737.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1737.patch",
"merged_at": 1610619924000
} | Based on this issue https://github.com/huggingface/datasets/issues/1064, I can now use the official links.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1737/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1736 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1736/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1736/comments | https://api.github.com/repos/huggingface/datasets/issues/1736/events | https://github.com/huggingface/datasets/pull/1736 | 785,433,854 | MDExOlB1bGxSZXF1ZXN0NTU0NDYyNjYw | 1,736 | Adjust BrWaC dataset features name | {
"login": "jonatasgrosman",
"id": 5097052,
"node_id": "MDQ6VXNlcjUwOTcwNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonatasgrosman",
"html_url": "https://github.com/jonatasgrosman",
"followers_url": "https://api.github.com/users/jonatasgrosman/followers",
"following_url": "https://api.github.com/users/jonatasgrosman/following{/other_user}",
"gists_url": "https://api.github.com/users/jonatasgrosman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonatasgrosman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonatasgrosman/subscriptions",
"organizations_url": "https://api.github.com/users/jonatasgrosman/orgs",
"repos_url": "https://api.github.com/users/jonatasgrosman/repos",
"events_url": "https://api.github.com/users/jonatasgrosman/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonatasgrosman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,570,344,000 | 1,610,620,178,000 | 1,610,620,178,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1736",
"html_url": "https://github.com/huggingface/datasets/pull/1736",
"diff_url": "https://github.com/huggingface/datasets/pull/1736.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1736.patch",
"merged_at": 1610620178000
} | I added this dataset some days ago, and today I used it to train some models and realized that the names of the features aren't so good.
Looking at the current features hierarchy, we have "paragraphs" with a list of "sentences" with a list of "sentences?!". But the actual hierarchy is a "text" with a list of "paragraphs" with a list of "sentences".
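A hedged sketch of that corrected nesting with `datasets` feature types (purely illustrative — the dataset's real schema includes more fields, and the exact types should be checked against the diff):
```python
import datasets

features = datasets.Features({
    "text": datasets.Sequence(            # a text ...
        datasets.Sequence(                # ... is a list of paragraphs ...
            datasets.Value("string")      # ... each of which is a list of sentences
        )
    )
})
```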
I confused myself trying to use the dataset with these names. So I think it's better to change it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1736/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1735/comments | https://api.github.com/repos/huggingface/datasets/issues/1735/events | https://github.com/huggingface/datasets/pull/1735 | 785,184,740 | MDExOlB1bGxSZXF1ZXN0NTU0MjUzMDcw | 1,735 | Update add new dataset template | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,550,489,000 | 1,610,637,361,000 | 1,610,637,360,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1735",
"html_url": "https://github.com/huggingface/datasets/pull/1735",
"diff_url": "https://github.com/huggingface/datasets/pull/1735.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1735.patch",
"merged_at": 1610637360000
} | This PR fixes a few typos in the "Add new dataset template" and clarifies a bit what to do for the dummy data creation when the `auto_generate` flag can't work. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1735/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1735/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1734/comments | https://api.github.com/repos/huggingface/datasets/issues/1734/events | https://github.com/huggingface/datasets/pull/1734 | 784,956,707 | MDExOlB1bGxSZXF1ZXN0NTU0MDYxMzMz | 1,734 | Fix empty token bug for `thainer` and `lst20` | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,531,709,000 | 1,610,620,938,000 | 1,610,620,938,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1734",
"html_url": "https://github.com/huggingface/datasets/pull/1734",
"diff_url": "https://github.com/huggingface/datasets/pull/1734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1734.patch",
"merged_at": 1610620938000
add a condition to check if tokens exist before yielding in `thainer` and `lst20` (see the sketch below)
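A hedged sketch of what such a guard typically looks like inside a loading script's `_generate_examples` (variable names are illustrative, not the exact code of this PR):
```python
# Only yield an example if the accumulated token list is non-empty.
if tokens:
    yield guid, {"id": str(guid), "tokens": tokens, "ner_tags": ner_tags}
    guid += 1
tokens, ner_tags = [], []  # reset buffers for the next sentence
```
 | {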
"url": "https://api.github.com/repos/huggingface/datasets/issues/1734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1734/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1733/comments | https://api.github.com/repos/huggingface/datasets/issues/1733/events | https://github.com/huggingface/datasets/issues/1733 | 784,903,002 | MDU6SXNzdWU3ODQ5MDMwMDI= | 1,733 | connection issue with glue, what is the data url for glue? | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,527,060,000 | 1,628,100,835,000 | 1,628,100,835,000 | NONE | null | null | null | Hi
My code sometimes fails due to a connection issue with GLUE. Could you tell me which URL the datasets library reads GLUE from, so I can test from the machines I am working on whether the issue is on my side or not?
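One hedged way to get those URLs from the library itself (requires a `datasets` version that ships `load_dataset_builder`; the `data_url` attribute matches the GLUE builder configs, but verify against your installed `glue.py`):
```python
import datasets

# Print the download URL for one GLUE config, e.g. SST-2:
builder = datasets.load_dataset_builder("glue", "sst2")
print(builder.config.data_url)
```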
thanks | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1733/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1732 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1732/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1732/comments | https://api.github.com/repos/huggingface/datasets/issues/1732/events | https://github.com/huggingface/datasets/pull/1732 | 784,874,490 | MDExOlB1bGxSZXF1ZXN0NTUzOTkzNTAx | 1,732 | [GEM Dataset] Added TurkCorpus, an evaluation dataset for sentence simplification. | {
"login": "mounicam",
"id": 11708999,
"node_id": "MDQ6VXNlcjExNzA4OTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mounicam",
"html_url": "https://github.com/mounicam",
"followers_url": "https://api.github.com/users/mounicam/followers",
"following_url": "https://api.github.com/users/mounicam/following{/other_user}",
"gists_url": "https://api.github.com/users/mounicam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mounicam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mounicam/subscriptions",
"organizations_url": "https://api.github.com/users/mounicam/orgs",
"repos_url": "https://api.github.com/users/mounicam/repos",
"events_url": "https://api.github.com/users/mounicam/events{/privacy}",
"received_events_url": "https://api.github.com/users/mounicam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,524,219,000 | 1,610,619,581,000 | 1,610,619,581,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1732",
"html_url": "https://github.com/huggingface/datasets/pull/1732",
"diff_url": "https://github.com/huggingface/datasets/pull/1732.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1732.patch",
"merged_at": 1610619580000
} | We want to use TurkCorpus for validation and testing of the sentence simplification task. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1732/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1731/comments | https://api.github.com/repos/huggingface/datasets/issues/1731/events | https://github.com/huggingface/datasets/issues/1731 | 784,744,674 | MDU6SXNzdWU3ODQ3NDQ2NzQ= | 1,731 | Couldn't reach swda.py | {
"login": "yangp725",
"id": 13365326,
"node_id": "MDQ6VXNlcjEzMzY1MzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/13365326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangp725",
"html_url": "https://github.com/yangp725",
"followers_url": "https://api.github.com/users/yangp725/followers",
"following_url": "https://api.github.com/users/yangp725/following{/other_user}",
"gists_url": "https://api.github.com/users/yangp725/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yangp725/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangp725/subscriptions",
"organizations_url": "https://api.github.com/users/yangp725/orgs",
"repos_url": "https://api.github.com/users/yangp725/repos",
"events_url": "https://api.github.com/users/yangp725/events{/privacy}",
"received_events_url": "https://api.github.com/users/yangp725/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,506,660,000 | 1,610,536,660,000 | 1,610,536,660,000 | NONE | null | null | null | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/swda/swda.py
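A hedged workaround while raw.githubusercontent.com is unreachable: download `swda.py` manually (from a browser or another machine) and point `load_dataset` at the local copy, since it also accepts a path to a loading script:
```python
from datasets import load_dataset

# "./swda.py" is wherever the manually downloaded script was saved.
dataset = load_dataset("./swda.py")
```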
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1731/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1730 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1730/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1730/comments | https://api.github.com/repos/huggingface/datasets/issues/1730/events | https://github.com/huggingface/datasets/pull/1730 | 784,617,525 | MDExOlB1bGxSZXF1ZXN0NTUzNzgxMDY0 | 1,730 | Add MNIST dataset | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,488,082,000 | 1,610,533,187,000 | 1,610,533,186,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1730",
"html_url": "https://github.com/huggingface/datasets/pull/1730",
"diff_url": "https://github.com/huggingface/datasets/pull/1730.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1730.patch",
"merged_at": 1610533186000
} | This PR adds the MNIST dataset to the library. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1730/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1730/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1729/comments | https://api.github.com/repos/huggingface/datasets/issues/1729/events | https://github.com/huggingface/datasets/issues/1729 | 784,565,898 | MDU6SXNzdWU3ODQ1NjU4OTg= | 1,729 | Is there support for Deep learning datasets? | {
"login": "pablodz",
"id": 28235457,
"node_id": "MDQ6VXNlcjI4MjM1NDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/28235457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pablodz",
"html_url": "https://github.com/pablodz",
"followers_url": "https://api.github.com/users/pablodz/followers",
"following_url": "https://api.github.com/users/pablodz/following{/other_user}",
"gists_url": "https://api.github.com/users/pablodz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pablodz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pablodz/subscriptions",
"organizations_url": "https://api.github.com/users/pablodz/orgs",
"repos_url": "https://api.github.com/users/pablodz/repos",
"events_url": "https://api.github.com/users/pablodz/events{/privacy}",
"received_events_url": "https://api.github.com/users/pablodz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,482,961,000 | 1,617,164,647,000 | 1,617,164,647,000 | NONE | null | null | null | I looked around this repository and its datasets, and I think there's no support for image datasets. Or am I missing something? For example, could one add a repo like this: https://github.com/DZPeru/fish-datasets | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1729/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1728/comments | https://api.github.com/repos/huggingface/datasets/issues/1728/events | https://github.com/huggingface/datasets/issues/1728 | 784,458,342 | MDU6SXNzdWU3ODQ0NTgzNDI= | 1,728 | Add an entry to an arrow dataset | {
"login": "ameet-1997",
"id": 18645407,
"node_id": "MDQ6VXNlcjE4NjQ1NDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/18645407?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ameet-1997",
"html_url": "https://github.com/ameet-1997",
"followers_url": "https://api.github.com/users/ameet-1997/followers",
"following_url": "https://api.github.com/users/ameet-1997/following{/other_user}",
"gists_url": "https://api.github.com/users/ameet-1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ameet-1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ameet-1997/subscriptions",
"organizations_url": "https://api.github.com/users/ameet-1997/orgs",
"repos_url": "https://api.github.com/users/ameet-1997/repos",
"events_url": "https://api.github.com/users/ameet-1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/ameet-1997/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,474,507,000 | 1,610,997,332,000 | 1,610,997,332,000 | NONE | null | null | null | Is it possible to add an entry to a dataset object?
**Motivation: I want to transform the sentences in the dataset and add them to the original dataset**
For example, say we have the following code:
``` python
from datasets import load_dataset
# Load a dataset and print the first examples in the training set
squad_dataset = load_dataset('squad')
print(squad_dataset['train'][0])
```
Is it possible to add an entry to `squad_dataset`? Something like the following?
``` python
squad_dataset.append({'text': "This is a new sentence"})
```
The motivation for doing this is that I want to transform the sentences in the squad dataset and add them to the original dataset.
If the above doesn't work, is there any other way of achieving this? Perhaps by creating a new arrow dataset from the older one and the transformed sentences?
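A hedged sketch of that second route (untested; the values are placeholders shaped like SQuAD's schema): `Dataset` has no in-place `append`, but one can build a small dataset with matching features and concatenate:
```python
from datasets import Dataset, concatenate_datasets, load_dataset

squad = load_dataset("squad")

new_rows = Dataset.from_dict({
    "id": ["new-0"],
    "title": ["Example"],
    "context": ["This is a new sentence"],
    "question": ["What is this?"],
    "answers": [{"text": ["a new sentence"], "answer_start": [10]}],
})

# concatenate_datasets requires identical features; casting the new rows to the
# original schema guards against mismatched inferred dtypes.
new_rows = new_rows.cast(squad["train"].features)
combined = concatenate_datasets([squad["train"], new_rows])
```
For transforming existing sentences, `squad["train"].map(...)` is usually the simpler tool.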
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1728/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1727/comments | https://api.github.com/repos/huggingface/datasets/issues/1727/events | https://github.com/huggingface/datasets/issues/1727 | 784,435,131 | MDU6SXNzdWU3ODQ0MzUxMzE= | 1,727 | BLEURT score calculation raises UnrecognizedFlagError | {
"login": "nadavo",
"id": 6603920,
"node_id": "MDQ6VXNlcjY2MDM5MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6603920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nadavo",
"html_url": "https://github.com/nadavo",
"followers_url": "https://api.github.com/users/nadavo/followers",
"following_url": "https://api.github.com/users/nadavo/following{/other_user}",
"gists_url": "https://api.github.com/users/nadavo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nadavo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nadavo/subscriptions",
"organizations_url": "https://api.github.com/users/nadavo/orgs",
"repos_url": "https://api.github.com/users/nadavo/repos",
"events_url": "https://api.github.com/users/nadavo/events{/privacy}",
"received_events_url": "https://api.github.com/users/nadavo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,472,422,000 | 1,618,266,101,000 | null | NONE | null | null | null | Calling the `compute` method for **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
```
from datasets import load_metric
bleurt = load_metric('bleurt')
gen_text = "I am walking on the promenade today"
ref_text = "I am walking along the promenade on this sunny day"
bleurt.compute(predictions=[gen_text], references=[ref_text])
```
Error Output:
```
Using default BLEURT-Base checkpoint for sequence maximum length 128. You can use a bigger model for better results with e.g.: datasets.load_metric('bleurt', 'bleurt-large-512').
INFO:tensorflow:Reading checkpoint /home/ubuntu/.cache/huggingface/metrics/bleurt/default/downloads/extracted/9aee35580225730ac5422599f35c4986e4c49cafd08082123342b1019720dac4/bleurt-base-128.
INFO:tensorflow:Config file found, reading.
INFO:tensorflow:Will load checkpoint bert_custom
INFO:tensorflow:Performs basic checks...
INFO:tensorflow:... name:bert_custom
INFO:tensorflow:... vocab_file:vocab.txt
INFO:tensorflow:... bert_config_file:bert_config.json
INFO:tensorflow:... do_lower_case:True
INFO:tensorflow:... max_seq_length:128
INFO:tensorflow:Creating BLEURT scorer.
INFO:tensorflow:Loading model...
INFO:tensorflow:BLEURT initialized.
---------------------------------------------------------------------------
UnrecognizedFlagError Traceback (most recent call last)
<ipython-input-12-8b3f4322318a> in <module>
2 gen_text = "I am walking on the promenade today"
3 ref_text = "I am walking along the promenade on this sunny day"
----> 4 bleurt.compute(predictions=[gen_text], references=[ref_text])
~/anaconda3/envs/noved/lib/python3.8/site-packages/datasets/metric.py in compute(self, *args, **kwargs)
396 references = self.data["references"]
397 with temp_seed(self.seed):
--> 398 output = self._compute(predictions=predictions, references=references, **kwargs)
399
400 if self.buf_writer is not None:
~/.cache/huggingface/modules/datasets_modules/metrics/bleurt/b1de33e1cbbcb1dbe276c887efa1fad68c6aff913885108078fa1ad408908778/bleurt.py in _compute(self, predictions, references)
103
104 def _compute(self, predictions, references):
--> 105 scores = self.scorer.score(references=references, candidates=predictions)
106 return {"scores": scores}
~/anaconda3/envs/noved/lib/python3.8/site-packages/bleurt/score.py in score(self, references, candidates, batch_size)
164 """
165 if not batch_size:
--> 166 batch_size = FLAGS.bleurt_batch_size
167
168 candidates, references = list(candidates), list(references)
~/anaconda3/envs/noved/lib/python3.8/site-packages/tensorflow/python/platform/flags.py in __getattr__(self, name)
83 # a flag.
84 if not wrapped.is_parsed():
---> 85 wrapped(_sys.argv)
86 return wrapped.__getattr__(name)
87
~/anaconda3/envs/noved/lib/python3.8/site-packages/absl/flags/_flagvalues.py in __call__(self, argv, known_only)
643 for name, value in unknown_flags:
644 suggestions = _helpers.get_flag_suggestions(name, list(self))
--> 645 raise _exceptions.UnrecognizedFlagError(
646 name, value, suggestions=suggestions)
647
UnrecognizedFlagError: Unknown command line flag 'f'
```
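A hedged aside: this particular `Unknown command line flag 'f'` usually shows up in Jupyter/Colab, where the kernel adds `-f /path/to/kernel.json` to `sys.argv` and BLEURT's absl flag parsing trips over it. A possible notebook-side workaround:
```python
import sys

# Drop Jupyter's extra kernel arguments before the metric parses absl flags.
sys.argv = sys.argv[:1]
```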
Possible Fix:
Modify `_compute` method https://github.com/huggingface/datasets/blob/7e64851a12263dc74d41c668167918484c8000ab/metrics/bleurt/bleurt.py#L104
to receive a `batch_size` argument, for example:
```
def _compute(self, predictions, references, batch_size=1):
    scores = self.scorer.score(references=references, candidates=predictions, batch_size=batch_size)
    return {"scores": scores}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1727/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1726 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1726/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1726/comments | https://api.github.com/repos/huggingface/datasets/issues/1726/events | https://github.com/huggingface/datasets/pull/1726 | 784,336,370 | MDExOlB1bGxSZXF1ZXN0NTUzNTQ0ODg4 | 1,726 | Offline loading | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,464,917,000 | 1,644,921,130,000 | 1,611,074,552,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1726",
"html_url": "https://github.com/huggingface/datasets/pull/1726",
"diff_url": "https://github.com/huggingface/datasets/pull/1726.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1726.patch",
"merged_at": 1611074552000
} | As discussed in #824 it would be cool to make the library work in offline mode.
Currently, if there's no internet connection, then modules (datasets or metrics) that have already been loaded in the past can't be loaded again, and a ConnectionError is raised.
This is because `prepare_module` fetches the latest version of the module online.
To make it work in offline mode one suggestion was to reload the latest local version of the module.
I implemented that and I also raise a warning saying that the module that is loaded is the latest local version.
```python
logger.warning(
f"Using the latest cached version of the module from {cached_module_path} since it "
f"couldn't be found locally at {input_path} or remotely ({error_type_that_prevented_reaching_out_remote_stuff})."
)
```
I added tests to make sure it works as expected and I needed to do a few changes in the code to be able to test things properly. In particular I added a parameter `hf_modules_cache` to `init_dynamic_modules` for testing purposes. It makes it possible to have temporary modules caches for testing.
I also added an `offline` context utility that allows testing parts of the code by making all the requests fail as if there were no internet.
Close #824, close #761. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1726/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1726/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1725 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1725/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1725/comments | https://api.github.com/repos/huggingface/datasets/issues/1725/events | https://github.com/huggingface/datasets/issues/1725 | 784,182,273 | MDU6SXNzdWU3ODQxODIyNzM= | 1,725 | load the local dataset | {
"login": "xinjicong",
"id": 41193842,
"node_id": "MDQ6VXNlcjQxMTkzODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/41193842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinjicong",
"html_url": "https://github.com/xinjicong",
"followers_url": "https://api.github.com/users/xinjicong/followers",
"following_url": "https://api.github.com/users/xinjicong/following{/other_user}",
"gists_url": "https://api.github.com/users/xinjicong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinjicong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinjicong/subscriptions",
"organizations_url": "https://api.github.com/users/xinjicong/orgs",
"repos_url": "https://api.github.com/users/xinjicong/repos",
"events_url": "https://api.github.com/users/xinjicong/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinjicong/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,453,575,000 | 1,614,768,943,000 | null | NONE | null | null | null | your guidebook's example is like
>>> from datasets import load_dataset
>>> dataset = load_dataset('json', data_files='my_file.json')
but the first arg is path...
so what should I do if I want to load a local dataset for model training?
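A hedged answer based on the documented generic loaders (file names below are placeholders): the first argument is the name of a loading script (`'json'`, `'csv'`, `'text'`, ...) rather than a filesystem path — the path goes into `data_files`:
```python
from datasets import load_dataset

# A single local file:
dataset = load_dataset("json", data_files="my_file.json")

# Or explicit splits:
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
```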
i will be grateful if you can help me handle this problem!
thanks a lot! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1725/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1723 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1723/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1723/comments | https://api.github.com/repos/huggingface/datasets/issues/1723/events | https://github.com/huggingface/datasets/pull/1723 | 783,982,100 | MDExOlB1bGxSZXF1ZXN0NTUzMjQ4MzU1 | 1,723 | ADD S3 support for downloading and uploading processed datasets | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,435,854,000 | 1,611,680,528,000 | 1,611,680,528,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1723",
"html_url": "https://github.com/huggingface/datasets/pull/1723",
"diff_url": "https://github.com/huggingface/datasets/pull/1723.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1723.patch",
"merged_at": 1611680527000
} | # What does this PR do?
This PR adds the functionality to load and save `datasets` from and to s3.
You can save `datasets` with either `Dataset.save_to_disk()` or `DatasetDict.save_to_disk`.
You can load `datasets` with either `load_from_disk` or `Dataset.load_from_disk()`, `DatasetDict.load_from_disk()`.
Loading `csv` or `json` datasets from s3 is not implemented.
To save/load datasets to s3, you either need to provide an `aws_profile` that is set up on your machine (by default the `default` profile is used), or you have to pass an `aws_access_key_id` and `aws_secret_access_key`.
The implementation was done with `fsspec` and `boto3`.
### Example `aws_profile` :
<details>
```python
dataset.save_to_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm")
load_from_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm")
```
</details>
### Example `aws_access_key_id` and `aws_secret_access_key` :
<details>
```python
dataset.save_to_disk("s3://moto-mock-s3-bucket/datasets/sdk",
aws_access_key_id="fake_access_key",
aws_secret_access_key="fake_secret_key"
)
load_from_disk("s3://moto-mock-s3-bucket/datasets/sdk",
aws_access_key_id="fake_access_key",
aws_secret_access_key="fake_secret_key"
)
```
</details>
If you want to load a dataset from a public s3 bucket you can pass `anon=True`
### Example `anon=True` :
<details>
```python
dataset.save_to_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm")
load_from_disk("s3://moto-mock-s3-bucketdatasets/sdk",anon=True)
```
</details>
### Full Example
```python
import datasets
dataset = datasets.load_dataset("imdb")
print(f"DatasetDict contains {len(dataset)} datasets")
print(f"train Dataset has the size of: {len(dataset['train'])}")
dataset.save_to_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm")
remote_dataset = datasets.load_from_disk("s3://moto-mock-s3-bucket/datasets/sdk", aws_profile="hf-sm")
print(f"DatasetDict contains {len(remote_dataset)} datasets")
print(f"train Dataset has the size of: {len(remote_dataset['train'])}")
```
Related to #878
I will also adjust the documentation once the code has been reviewed; until then I am leaving the PR in "draft" status. Something we could consider is renaming the functions, e.g. changing `_disk` to `_filesystem`.
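
For context, below is a minimal sketch of the `fsspec`/`boto3` filesystem layer this builds on. The bucket and profile names reuse the examples above; treat the exact `s3fs` parameter names as assumptions, not part of this PR:

```python
import s3fs

# Credentials can come from a named AWS profile, explicit keys,
# or be skipped entirely for public buckets.
fs = s3fs.S3FileSystem(profile="hf-sm")            # named profile
# fs = s3fs.S3FileSystem(key="...", secret="...")  # explicit credentials
# fs = s3fs.S3FileSystem(anon=True)                # public bucket

# List the files of a dataset saved on S3.
print(fs.ls("moto-mock-s3-bucket/datasets/sdk"))
```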
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1723/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1723/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1724/comments | https://api.github.com/repos/huggingface/datasets/issues/1724/events | https://github.com/huggingface/datasets/issues/1724 | 784,023,338 | MDU6SXNzdWU3ODQwMjMzMzg= | 1,724 | could not run models on a offline server successfully | {
"login": "lkcao",
"id": 49967236,
"node_id": "MDQ6VXNlcjQ5OTY3MjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/49967236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lkcao",
"html_url": "https://github.com/lkcao",
"followers_url": "https://api.github.com/users/lkcao/followers",
"following_url": "https://api.github.com/users/lkcao/following{/other_user}",
"gists_url": "https://api.github.com/users/lkcao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lkcao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkcao/subscriptions",
"organizations_url": "https://api.github.com/users/lkcao/orgs",
"repos_url": "https://api.github.com/users/lkcao/repos",
"events_url": "https://api.github.com/users/lkcao/events{/privacy}",
"received_events_url": "https://api.github.com/users/lkcao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,431,686,000 | 1,614,785,549,000 | null | NONE | null | null | null | Hi, I really need your help about this.
I am trying to fine-tune a RoBERTa model on a remote server that strictly bans internet access. I installed all the packages by hand and tried to run run_mlm.py on the server. It works well on Colab, but when I run it on this offline server, it shows:

Is there anything I can do? Is it possible to download everything into the cache on another machine and upload it to the server? Please help me out...
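
One workaround, sketched below, is to prepare everything on a machine with internet access and copy the saved directories to the server. The dataset and model names here are placeholders, since the screenshot does not show which ones are used:

```python
from datasets import load_dataset, load_from_disk
from transformers import AutoTokenizer, AutoModelForMaskedLM

# On a machine with internet access:
dataset = load_dataset("wikitext", "wikitext-103-raw-v1")  # placeholder dataset
dataset.save_to_disk("./wikitext-local")
AutoTokenizer.from_pretrained("roberta-base").save_pretrained("./roberta-local")
AutoModelForMaskedLM.from_pretrained("roberta-base").save_pretrained("./roberta-local")

# After copying both directories to the offline server:
dataset = load_from_disk("./wikitext-local")
tokenizer = AutoTokenizer.from_pretrained("./roberta-local")
```
 | {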
"url": "https://api.github.com/repos/huggingface/datasets/issues/1724/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/1724/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1722/comments | https://api.github.com/repos/huggingface/datasets/issues/1722/events | https://github.com/huggingface/datasets/pull/1722 | 783,921,679 | MDExOlB1bGxSZXF1ZXN0NTUzMTk3MTg4 | 1,722 | Added unfiltered versions of the Wiki-Auto training data for the GEM simplification task. | {
"login": "mounicam",
"id": 11708999,
"node_id": "MDQ6VXNlcjExNzA4OTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mounicam",
"html_url": "https://github.com/mounicam",
"followers_url": "https://api.github.com/users/mounicam/followers",
"following_url": "https://api.github.com/users/mounicam/following{/other_user}",
"gists_url": "https://api.github.com/users/mounicam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mounicam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mounicam/subscriptions",
"organizations_url": "https://api.github.com/users/mounicam/orgs",
"repos_url": "https://api.github.com/users/mounicam/repos",
"events_url": "https://api.github.com/users/mounicam/events{/privacy}",
"received_events_url": "https://api.github.com/users/mounicam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,429,164,000 | 1,610,475,293,000 | 1,610,472,957,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1722",
"html_url": "https://github.com/huggingface/datasets/pull/1722",
"diff_url": "https://github.com/huggingface/datasets/pull/1722.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1722.patch",
"merged_at": 1610472957000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1722/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1721/comments | https://api.github.com/repos/huggingface/datasets/issues/1721/events | https://github.com/huggingface/datasets/pull/1721 | 783,828,428 | MDExOlB1bGxSZXF1ZXN0NTUzMTIyODQ5 | 1,721 | [Scientific papers] Mirror datasets zip | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,414,140,000 | 1,610,452,155,000 | 1,610,451,707,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1721",
"html_url": "https://github.com/huggingface/datasets/pull/1721",
"diff_url": "https://github.com/huggingface/datasets/pull/1721.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1721.patch",
"merged_at": 1610451707000
} | Datasets were uploaded to https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/arxiv-dataset.zip and https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/pubmed-dataset.zip respectively to avoid the Google Drive quota and enable faster downloads. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1721/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1720/comments | https://api.github.com/repos/huggingface/datasets/issues/1720/events | https://github.com/huggingface/datasets/pull/1720 | 783,721,833 | MDExOlB1bGxSZXF1ZXN0NTUzMDM0MzYx | 1,720 | Adding the NorNE dataset for NER | {
"login": "versae",
"id": 173537,
"node_id": "MDQ6VXNlcjE3MzUzNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/versae",
"html_url": "https://github.com/versae",
"followers_url": "https://api.github.com/users/versae/followers",
"following_url": "https://api.github.com/users/versae/following{/other_user}",
"gists_url": "https://api.github.com/users/versae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/versae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/versae/subscriptions",
"organizations_url": "https://api.github.com/users/versae/orgs",
"repos_url": "https://api.github.com/users/versae/repos",
"events_url": "https://api.github.com/users/versae/events{/privacy}",
"received_events_url": "https://api.github.com/users/versae/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,400,853,000 | 1,617,200,629,000 | 1,617,199,997,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1720",
"html_url": "https://github.com/huggingface/datasets/pull/1720",
"diff_url": "https://github.com/huggingface/datasets/pull/1720.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1720.patch",
"merged_at": null
} | NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1720/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1719 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1719/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1719/comments | https://api.github.com/repos/huggingface/datasets/issues/1719/events | https://github.com/huggingface/datasets/pull/1719 | 783,557,542 | MDExOlB1bGxSZXF1ZXN0NTUyODk3MzY4 | 1,719 | Fix column list comparison in transmit format | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,385,836,000 | 1,610,390,703,000 | 1,610,390,702,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1719",
"html_url": "https://github.com/huggingface/datasets/pull/1719",
"diff_url": "https://github.com/huggingface/datasets/pull/1719.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1719.patch",
"merged_at": 1610390702000
} | As noticed in #1718, the cached files might not be reloaded when new columns are added.
This is caused by an issue in `transmit_format`: the column list comparison was not order-deterministic, which made `transmit_format` apply an unnecessary `set_format` transform with shuffled column names.
I fixed that by sorting the columns for the comparison and added a test.
To properly test this, I added a third column `col_3` to the dummy dataset used in the tests.
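
For illustration, the order-insensitive comparison boils down to something like the following (a simplified sketch with a hypothetical helper name, not the exact patch):

```python
def format_columns_changed(previous_columns, new_columns):
    # Sorting both lists first makes the comparison independent of the
    # (non-deterministic) order in which the column names were collected.
    return sorted(previous_columns) != sorted(new_columns)
```
 | {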
"url": "https://api.github.com/repos/huggingface/datasets/issues/1719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1719/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1718 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1718/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1718/comments | https://api.github.com/repos/huggingface/datasets/issues/1718/events | https://github.com/huggingface/datasets/issues/1718 | 783,474,753 | MDU6SXNzdWU3ODM0NzQ3NTM= | 1,718 | Possible cache miss in datasets | {
"login": "ofirzaf",
"id": 18296312,
"node_id": "MDQ6VXNlcjE4Mjk2MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/18296312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ofirzaf",
"html_url": "https://github.com/ofirzaf",
"followers_url": "https://api.github.com/users/ofirzaf/followers",
"following_url": "https://api.github.com/users/ofirzaf/following{/other_user}",
"gists_url": "https://api.github.com/users/ofirzaf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ofirzaf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ofirzaf/subscriptions",
"organizations_url": "https://api.github.com/users/ofirzaf/orgs",
"repos_url": "https://api.github.com/users/ofirzaf/repos",
"events_url": "https://api.github.com/users/ofirzaf/events{/privacy}",
"received_events_url": "https://api.github.com/users/ofirzaf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,379,451,000 | 1,619,591,723,000 | 1,611,629,279,000 | NONE | null | null | null | Hi,
I am using the datasets package, and even though I run the same data processing functions, `datasets` always recomputes the function instead of using the cache.
I have attached an example script that reproduces the problem for me.
In the attached example, the second `map` call always recomputes instead of loading from the cache.
Is this a bug or am I doing something wrong?
Is there a way to fix this and avoid all the recomputation?
Thanks
Edit:
transformers==3.5.1
datasets==1.2.0
```
from datasets import load_dataset
from transformers import AutoTokenizer
datasets = load_dataset('wikitext', 'wikitext-103-raw-v1')
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True)
column_names = datasets["train"].column_names
text_column_name = "text" if "text" in column_names else column_names[0]
def tokenize_function(examples):
    return tokenizer(examples[text_column_name], return_special_tokens_mask=True)

tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=60,
    remove_columns=[text_column_name],
    load_from_cache_file=True,
)

max_seq_length = tokenizer.model_max_length

def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
    # customize this part to your needs.
    total_length = (total_length // max_seq_length) * max_seq_length
    # Split by chunks of max_len.
    result = {
        k: [t[i: i + max_seq_length] for i in range(0, total_length, max_seq_length)]
        for k, t in concatenated_examples.items()
    }
    return result

tokenized_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    num_proc=60,
    load_from_cache_file=True,
)
print(tokenized_datasets)
print('finished')
```
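
For context, `map` decides whether it can reuse a cache file by deriving a new fingerprint from the previous one plus a hash of the transform and its arguments, roughly like this (a simplified sketch of the mechanism, not the exact `datasets` internals):

```python
from datasets.fingerprint import Hasher

def update_fingerprint(previous_fingerprint, transform, transform_args):
    # Any non-determinism in what gets hashed here (e.g. the order of
    # column names in a format transform) changes the fingerprint and
    # forces a recompute instead of a cache hit.
    hasher = Hasher()
    hasher.update(previous_fingerprint)
    hasher.update(transform)
    for key in sorted(transform_args):
        hasher.update(key)
        hasher.update(transform_args[key])
    return hasher.hexdigest()
```
 | {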
"url": "https://api.github.com/repos/huggingface/datasets/issues/1718/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1718/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1717/comments | https://api.github.com/repos/huggingface/datasets/issues/1717/events | https://github.com/huggingface/datasets/issues/1717 | 783,074,255 | MDU6SXNzdWU3ODMwNzQyNTU= | 1,717 | SciFact dataset - minor changes | {
"login": "dwadden",
"id": 3091916,
"node_id": "MDQ6VXNlcjMwOTE5MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3091916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwadden",
"html_url": "https://github.com/dwadden",
"followers_url": "https://api.github.com/users/dwadden/followers",
"following_url": "https://api.github.com/users/dwadden/following{/other_user}",
"gists_url": "https://api.github.com/users/dwadden/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwadden/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwadden/subscriptions",
"organizations_url": "https://api.github.com/users/dwadden/orgs",
"repos_url": "https://api.github.com/users/dwadden/repos",
"events_url": "https://api.github.com/users/dwadden/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwadden/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,342,800,000 | 1,611,629,537,000 | 1,611,629,537,000 | CONTRIBUTOR | null | null | null | Hi,
SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated!
I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?
It also looks like the dataset is being downloaded directly from Huggingface's Google cloud account rather than via the `_URL` in [scifact.py](https://github.com/huggingface/datasets/blob/master/datasets/scifact/scifact.py). Can you help me update the version on gcloud?
Thanks,
Dave | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1717/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1716/comments | https://api.github.com/repos/huggingface/datasets/issues/1716/events | https://github.com/huggingface/datasets/pull/1716 | 782,819,006 | MDExOlB1bGxSZXF1ZXN0NTUyMjgzNzE5 | 1,716 | Add Hatexplain Dataset | {
"login": "kushal2000",
"id": 48222101,
"node_id": "MDQ6VXNlcjQ4MjIyMTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48222101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kushal2000",
"html_url": "https://github.com/kushal2000",
"followers_url": "https://api.github.com/users/kushal2000/followers",
"following_url": "https://api.github.com/users/kushal2000/following{/other_user}",
"gists_url": "https://api.github.com/users/kushal2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kushal2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kushal2000/subscriptions",
"organizations_url": "https://api.github.com/users/kushal2000/orgs",
"repos_url": "https://api.github.com/users/kushal2000/repos",
"events_url": "https://api.github.com/users/kushal2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/kushal2000/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,285,401,000 | 1,610,979,702,000 | 1,610,979,702,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1716",
"html_url": "https://github.com/huggingface/datasets/pull/1716",
"diff_url": "https://github.com/huggingface/datasets/pull/1716.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1716.patch",
"merged_at": 1610979702000
} | Adding Hatexplain - the first benchmark hate speech dataset covering multiple aspects of the issue | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1716/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1715 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1715/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1715/comments | https://api.github.com/repos/huggingface/datasets/issues/1715/events | https://github.com/huggingface/datasets/pull/1715 | 782,754,441 | MDExOlB1bGxSZXF1ZXN0NTUyMjM2NDA5 | 1,715 | add Korean intonation-aided intention identification dataset | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,260,144,000 | 1,631,897,653,000 | 1,610,471,673,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1715",
"html_url": "https://github.com/huggingface/datasets/pull/1715",
"diff_url": "https://github.com/huggingface/datasets/pull/1715.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1715.patch",
"merged_at": 1610471672000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1715/timeline | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/1714 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1714/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1714/comments | https://api.github.com/repos/huggingface/datasets/issues/1714/events | https://github.com/huggingface/datasets/pull/1714 | 782,416,276 | MDExOlB1bGxSZXF1ZXN0NTUxOTc3MDA0 | 1,714 | Adding adversarialQA dataset | {
"login": "maxbartolo",
"id": 15869827,
"node_id": "MDQ6VXNlcjE1ODY5ODI3",
"avatar_url": "https://avatars.githubusercontent.com/u/15869827?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxbartolo",
"html_url": "https://github.com/maxbartolo",
"followers_url": "https://api.github.com/users/maxbartolo/followers",
"following_url": "https://api.github.com/users/maxbartolo/following{/other_user}",
"gists_url": "https://api.github.com/users/maxbartolo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxbartolo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxbartolo/subscriptions",
"organizations_url": "https://api.github.com/users/maxbartolo/orgs",
"repos_url": "https://api.github.com/users/maxbartolo/repos",
"events_url": "https://api.github.com/users/maxbartolo/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxbartolo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,142,369,000 | 1,610,553,924,000 | 1,610,553,924,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1714",
"html_url": "https://github.com/huggingface/datasets/pull/1714",
"diff_url": "https://github.com/huggingface/datasets/pull/1714.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1714.patch",
"merged_at": 1610553924000
} | Adding the adversarialQA dataset (https://adversarialqa.github.io/) from Beat the AI (https://arxiv.org/abs/2002.00293) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1714/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1713 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1713/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1713/comments | https://api.github.com/repos/huggingface/datasets/issues/1713/events | https://github.com/huggingface/datasets/issues/1713 | 782,337,723 | MDU6SXNzdWU3ODIzMzc3MjM= | 1,713 | Installation using conda | {
"login": "pranav-s",
"id": 9393002,
"node_id": "MDQ6VXNlcjkzOTMwMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9393002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pranav-s",
"html_url": "https://github.com/pranav-s",
"followers_url": "https://api.github.com/users/pranav-s/followers",
"following_url": "https://api.github.com/users/pranav-s/following{/other_user}",
"gists_url": "https://api.github.com/users/pranav-s/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pranav-s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pranav-s/subscriptions",
"organizations_url": "https://api.github.com/users/pranav-s/orgs",
"repos_url": "https://api.github.com/users/pranav-s/repos",
"events_url": "https://api.github.com/users/pranav-s/events{/privacy}",
"received_events_url": "https://api.github.com/users/pranav-s/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,133,135,000 | 1,631,882,860,000 | 1,631,882,860,000 | NONE | null | null | null | Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to run some of the scripts in the transformers/examples folder, but I am unable to do so at the moment because datasets can only be installed with pip, and using pip in a conda environment is generally a bad idea in my experience. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1713/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1712 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1712/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1712/comments | https://api.github.com/repos/huggingface/datasets/issues/1712/events | https://github.com/huggingface/datasets/pull/1712 | 782,313,097 | MDExOlB1bGxSZXF1ZXN0NTUxODkxMDk4 | 1,712 | Silicone | {
"login": "eusip",
"id": 1551356,
"node_id": "MDQ6VXNlcjE1NTEzNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eusip",
"html_url": "https://github.com/eusip",
"followers_url": "https://api.github.com/users/eusip/followers",
"following_url": "https://api.github.com/users/eusip/following{/other_user}",
"gists_url": "https://api.github.com/users/eusip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eusip/subscriptions",
"organizations_url": "https://api.github.com/users/eusip/orgs",
"repos_url": "https://api.github.com/users/eusip/repos",
"events_url": "https://api.github.com/users/eusip/events{/privacy}",
"received_events_url": "https://api.github.com/users/eusip/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,130,258,000 | 1,611,238,357,000 | 1,611,225,071,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1712",
"html_url": "https://github.com/huggingface/datasets/pull/1712",
"diff_url": "https://github.com/huggingface/datasets/pull/1712.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1712.patch",
"merged_at": null
} | My collaborators and I within the Affective Computing team at Telecom Paris would like to push our spoken dialogue dataset for publication. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1712/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/1712/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1711 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1711/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1711/comments | https://api.github.com/repos/huggingface/datasets/issues/1711/events | https://github.com/huggingface/datasets/pull/1711 | 782,129,083 | MDExOlB1bGxSZXF1ZXN0NTUxNzQxODA2 | 1,711 | Fix windows path scheme in cached path | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,113,556,000 | 1,610,357,000,000 | 1,610,356,999,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1711",
"html_url": "https://github.com/huggingface/datasets/pull/1711",
"diff_url": "https://github.com/huggingface/datasets/pull/1711.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1711.patch",
"merged_at": 1610356999000
} | As noticed in #807, there is currently an issue with `cached_path` not raising `FileNotFoundError` on Windows for absolute paths. This is due to the way we check whether a path is local or not: the scheme check using `urlparse` was incomplete.
I fixed this and added tests.
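
To see why the old check was incomplete: on Windows, `urlparse` happily parses a drive letter as a URL scheme, so a naive "has a scheme ⇒ remote" test misclassifies absolute local paths. An illustrative snippet (not the exact patch):

```python
from urllib.parse import urlparse

print(urlparse("C:\\Users\\me\\data.txt").scheme)       # 'c'   -> looks like a scheme!
print(urlparse("https://example.com/data.txt").scheme)  # 'https'

def is_remote_url(path: str) -> bool:
    # Restrict to known remote schemes instead of "any non-empty scheme".
    return urlparse(path).scheme in ("http", "https", "s3")
```
 | {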
"url": "https://api.github.com/repos/huggingface/datasets/issues/1711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1711/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1710/comments | https://api.github.com/repos/huggingface/datasets/issues/1710/events | https://github.com/huggingface/datasets/issues/1710 | 781,914,951 | MDU6SXNzdWU3ODE5MTQ5NTE= | 1,710 | IsADirectoryError when trying to download C4 | {
"login": "fredriko",
"id": 5771366,
"node_id": "MDQ6VXNlcjU3NzEzNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5771366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fredriko",
"html_url": "https://github.com/fredriko",
"followers_url": "https://api.github.com/users/fredriko/followers",
"following_url": "https://api.github.com/users/fredriko/following{/other_user}",
"gists_url": "https://api.github.com/users/fredriko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fredriko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fredriko/subscriptions",
"organizations_url": "https://api.github.com/users/fredriko/orgs",
"repos_url": "https://api.github.com/users/fredriko/repos",
"events_url": "https://api.github.com/users/fredriko/events{/privacy}",
"received_events_url": "https://api.github.com/users/fredriko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,091,090,000 | 1,610,531,053,000 | null | NONE | null | null | null | **TLDR**:
I fail to download C4 and see a stacktrace originating in `IsADirectoryError` as an explanation for failure.
How can the problem be fixed?
**VERBOSE**:
I use Python version 3.7 and have the following dependencies listed in my project:
```
datasets==1.2.0
apache-beam==2.26.0
```
When running the following code, where `/data/huggingface/unpacked/` contains a single unzipped `wet.paths` file manually downloaded as per the instructions for C4:
```
from datasets import load_dataset
load_dataset("c4", "en", data_dir="/data/huggingface/unpacked", beam_runner='DirectRunner')
```
I get the following stacktrace:
```
/Users/fredriko/venv/misc/bin/python /Users/fredriko/source/misc/main.py
Downloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/fredriko/.cache/huggingface/datasets/c4/en/2.3.0/8304cf264cc42bdebcb13fca4b9cb36368a96f557d36f9dc969bebbe2568b283...
Traceback (most recent call last):
File "/Users/fredriko/source/misc/main.py", line 3, in <module>
load_dataset("c4", "en", data_dir="/data/huggingface/unpacked", beam_runner='DirectRunner')
File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/load.py", line 612, in load_dataset
ignore_verifications=ignore_verifications,
File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/builder.py", line 527, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/builder.py", line 1066, in _download_and_prepare
pipeline=pipeline,
File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/builder.py", line 582, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/fredriko/.cache/huggingface/modules/datasets_modules/datasets/c4/8304cf264cc42bdebcb13fca4b9cb36368a96f557d36f9dc969bebbe2568b283/c4.py", line 190, in _split_generators
file_paths = dl_manager.download_and_extract(files_to_download)
File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 258, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 189, in download
self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)
File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 117, in _record_sizes_checksums
self._recorded_sizes_checksums[str(url)] = get_size_checksum_dict(path)
File "/Users/fredriko/venv/misc/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 80, in get_size_checksum_dict
with open(path, "rb") as f:
IsADirectoryError: [Errno 21] Is a directory: '/'
Process finished with exit code 1
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1710/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1709 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1709/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1709/comments | https://api.github.com/repos/huggingface/datasets/issues/1709/events | https://github.com/huggingface/datasets/issues/1709 | 781,875,640 | MDU6SXNzdWU3ODE4NzU2NDA= | 1,709 | Databases | {
"login": "JimmyJim1",
"id": 68724553,
"node_id": "MDQ6VXNlcjY4NzI0NTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/68724553?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JimmyJim1",
"html_url": "https://github.com/JimmyJim1",
"followers_url": "https://api.github.com/users/JimmyJim1/followers",
"following_url": "https://api.github.com/users/JimmyJim1/following{/other_user}",
"gists_url": "https://api.github.com/users/JimmyJim1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JimmyJim1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JimmyJim1/subscriptions",
"organizations_url": "https://api.github.com/users/JimmyJim1/orgs",
"repos_url": "https://api.github.com/users/JimmyJim1/repos",
"events_url": "https://api.github.com/users/JimmyJim1/events{/privacy}",
"received_events_url": "https://api.github.com/users/JimmyJim1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,086,443,000 | 1,610,096,408,000 | 1,610,096,408,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1709/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1708 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1708/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1708/comments | https://api.github.com/repos/huggingface/datasets/issues/1708/events | https://github.com/huggingface/datasets/issues/1708 | 781,631,455 | MDU6SXNzdWU3ODE2MzE0NTU= | 1,708 | <html dir="ltr" lang="en" class="focus-outline-visible"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> | {
"login": "Louiejay54",
"id": 77126849,
"node_id": "MDQ6VXNlcjc3MTI2ODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/77126849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Louiejay54",
"html_url": "https://github.com/Louiejay54",
"followers_url": "https://api.github.com/users/Louiejay54/followers",
"following_url": "https://api.github.com/users/Louiejay54/following{/other_user}",
"gists_url": "https://api.github.com/users/Louiejay54/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Louiejay54/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Louiejay54/subscriptions",
"organizations_url": "https://api.github.com/users/Louiejay54/orgs",
"repos_url": "https://api.github.com/users/Louiejay54/repos",
"events_url": "https://api.github.com/users/Louiejay54/events{/privacy}",
"received_events_url": "https://api.github.com/users/Louiejay54/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,055,924,000 | 1,610,096,401,000 | 1,610,096,401,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1708/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1707 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1707/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1707/comments | https://api.github.com/repos/huggingface/datasets/issues/1707/events | https://github.com/huggingface/datasets/pull/1707 | 781,507,545 | MDExOlB1bGxSZXF1ZXN0NTUxMjE5MDk2 | 1,707 | Added generated READMEs for datasets that were missing one. | {
"login": "madlag",
"id": 272253,
"node_id": "MDQ6VXNlcjI3MjI1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/madlag",
"html_url": "https://github.com/madlag",
"followers_url": "https://api.github.com/users/madlag/followers",
"following_url": "https://api.github.com/users/madlag/following{/other_user}",
"gists_url": "https://api.github.com/users/madlag/gists{/gist_id}",
"starred_url": "https://api.github.com/users/madlag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/madlag/subscriptions",
"organizations_url": "https://api.github.com/users/madlag/orgs",
"repos_url": "https://api.github.com/users/madlag/repos",
"events_url": "https://api.github.com/users/madlag/events{/privacy}",
"received_events_url": "https://api.github.com/users/madlag/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,043,006,000 | 1,610,980,353,000 | 1,610,980,353,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1707",
"html_url": "https://github.com/huggingface/datasets/pull/1707",
"diff_url": "https://github.com/huggingface/datasets/pull/1707.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1707.patch",
"merged_at": 1610980353000
} | This is it: we worked on a generator with Yacine @yjernite , and we generated dataset cards for all missing ones (161), with all the information we could gather from datasets repository, and using dummy_data to generate examples when possible.
Code is available here for the moment: https://github.com/madlag/datasets_readme_generator .
We will move it to a Hugging Face repository and to https://huggingface.co/datasets/card-creator/ later.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1707/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1707/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1706 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1706/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1706/comments | https://api.github.com/repos/huggingface/datasets/issues/1706/events | https://github.com/huggingface/datasets/issues/1706 | 781,494,476 | MDU6SXNzdWU3ODE0OTQ0NzY= | 1,706 | Error when downloading a large dataset on slow connection. | {
"login": "lucadiliello",
"id": 23355969,
"node_id": "MDQ6VXNlcjIzMzU1OTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucadiliello",
"html_url": "https://github.com/lucadiliello",
"followers_url": "https://api.github.com/users/lucadiliello/followers",
"following_url": "https://api.github.com/users/lucadiliello/following{/other_user}",
"gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions",
"organizations_url": "https://api.github.com/users/lucadiliello/orgs",
"repos_url": "https://api.github.com/users/lucadiliello/repos",
"events_url": "https://api.github.com/users/lucadiliello/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucadiliello/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,041,695,000 | 1,610,534,102,000 | null | CONTRIBUTOR | null | null | null | I receive the following error after about an hour trying to download the `openwebtext` dataset.
The code used is:
```python
import datasets
datasets.load_dataset("openwebtext")
```
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/load.py", line 610, in load_dataset
> ignore_verifications=ignore_verifications,
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/builder.py", line 515, in download_and_prepare
> dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/builder.py", line 570, in _download_and_prepare
> split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
> File "/home/lucadiliello/.cache/huggingface/modules/datasets_modules/datasets/openwebtext/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02/openwebtext.py", line 62, in _split_generators
> dl_dir = dl_manager.download_and_extract(_URL)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
> return self.extract(self.download(url_or_urls))
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 235, in extract
> num_proc=num_proc,
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
> return function(data_struct)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 343, in cached_path
> tar_file.extractall(output_path_extracted)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/tarfile.py", line 2000, in extractall
> numeric_owner=numeric_owner)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/tarfile.py", line 2042, in extract
> numeric_owner=numeric_owner)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/tarfile.py", line 2112, in _extract_member
> self.makefile(tarinfo, targetpath)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/tarfile.py", line 2161, in makefile
> copyfileobj(source, target, tarinfo.size, ReadError, bufsize)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/tarfile.py", line 253, in copyfileobj
> buf = src.read(remainder)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/lzma.py", line 200, in read
> return self._buffer.read(size)
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/_compression.py", line 68, in readinto
> data = self.read(len(byte_view))
> File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/_compression.py", line 99, in read
> raise EOFError("Compressed file ended before the "
> EOFError: Compressed file ended before the end-of-stream marker was reached
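
This `EOFError` from `lzma` means the downloaded archive is truncated: on a slow or unstable connection the download can end early, and the partial file then fails to decompress. One possible way to retry from scratch (a sketch — treat the string form of `download_mode` as an assumption for this `datasets` version):

```python
import datasets

# Force a fresh download instead of reusing the truncated cached archive.
datasets.load_dataset("openwebtext", download_mode="force_redownload")
```

Deleting the partial archive under `~/.cache/huggingface/datasets/downloads/` before re-running should have the same effect.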
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1706/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1705/comments | https://api.github.com/repos/huggingface/datasets/issues/1705/events | https://github.com/huggingface/datasets/pull/1705 | 781,474,949 | MDExOlB1bGxSZXF1ZXN0NTUxMTkyMTc4 | 1,705 | Add information about caching and verifications in "Load a Dataset" docs | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,039,924,000 | 1,610,460,481,000 | 1,610,460,481,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1705",
"html_url": "https://github.com/huggingface/datasets/pull/1705",
"diff_url": "https://github.com/huggingface/datasets/pull/1705.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1705.patch",
"merged_at": 1610460481000
} | Related to #215.
Missing improvements from @lhoestq's #1703. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1705/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1704 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1704/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1704/comments | https://api.github.com/repos/huggingface/datasets/issues/1704/events | https://github.com/huggingface/datasets/pull/1704 | 781,402,757 | MDExOlB1bGxSZXF1ZXN0NTUxMTMyNDI1 | 1,704 | Update XSUM Factuality DatasetCard | {
"login": "vineeths96",
"id": 50873201,
"node_id": "MDQ6VXNlcjUwODczMjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vineeths96",
"html_url": "https://github.com/vineeths96",
"followers_url": "https://api.github.com/users/vineeths96/followers",
"following_url": "https://api.github.com/users/vineeths96/following{/other_user}",
"gists_url": "https://api.github.com/users/vineeths96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vineeths96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vineeths96/subscriptions",
"organizations_url": "https://api.github.com/users/vineeths96/orgs",
"repos_url": "https://api.github.com/users/vineeths96/repos",
"events_url": "https://api.github.com/users/vineeths96/events{/privacy}",
"received_events_url": "https://api.github.com/users/vineeths96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,033,834,000 | 1,610,458,204,000 | 1,610,458,204,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1704",
"html_url": "https://github.com/huggingface/datasets/pull/1704",
"diff_url": "https://github.com/huggingface/datasets/pull/1704.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1704.patch",
"merged_at": 1610458204000
} | Update XSUM Factuality DatasetCard | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1704/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1703/comments | https://api.github.com/repos/huggingface/datasets/issues/1703/events | https://github.com/huggingface/datasets/pull/1703 | 781,395,146 | MDExOlB1bGxSZXF1ZXN0NTUxMTI2MjA5 | 1,703 | Improvements regarding caching and fingerprinting | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,033,189,000 | 1,611,077,531,000 | 1,611,077,530,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1703",
"html_url": "https://github.com/huggingface/datasets/pull/1703",
"diff_url": "https://github.com/huggingface/datasets/pull/1703.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1703.patch",
"merged_at": 1611077530000
} | This PR adds these features:
- Enable/disable caching
If disabled, the library will no longer reload cached dataset files when applying transforms to the datasets.
It is equivalent to setting `load_from_cache_file` to `False` in dataset transforms (a combined usage sketch follows this list).
```python
from datasets import set_caching_enabled
set_caching_enabled(False)
```
- Allow unpicklable functions in `map`
If an unpicklable function is used, it's not possible to hash it to update the dataset fingerprint that is used to name cache files. To work around that, a random fingerprint is generated instead and a warning is raised.
```python
logger.warning(
f"Transform {transform} couldn't be hashed properly, a random hash was used instead. "
"Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. "
"If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything."
)
```
and also (open to discussion, EDIT: actually NOT included):
- Enable/disable fingerprinting
Fingerprinting makes it possible to have one deterministic fingerprint per dataset state.
A dataset fingerprint is updated after each transform.
Re-running the same transforms on a dataset in a different session results in the same fingerprint.
Disabling the fingerprinting mechanism makes all the fingerprints random.
Since the caching mechanism uses fingerprints to name the cache files, the cache file names will be different.
Therefore disabling fingerprinting will prevent the caching mechanism from reloading dataset files that have already been computed.
Disabling fingerprinting may speed up the lib for users who don't care about this feature and don't want to use caching.
```python
from datasets import set_fingerprinting_enabled
set_fingerprinting_enabled(False)
```
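A combined sketch of the caching switch above on a toy dataset (the dataset and transform are illustrative; the fingerprinting switch is left out since it was not included in the end):
```python
from datasets import Dataset, set_caching_enabled

ds = Dataset.from_dict({"text": ["a", "b"]})

set_caching_enabled(False)  # the map below won't be reloaded from cache files on re-runs
upper = ds.map(lambda example: {"text": example["text"].upper()})

set_caching_enabled(True)  # restore the default caching behavior
```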
Other details:
- I renamed the `fingerprint` decorator to `fingerprint_transform`, since the previous name was not explicit enough. This decorator is used on dataset transform functions to allow them to update fingerprints.
- I added some `ignore_kwargs` when decorating transforms with `fingerprint_transform`, to make the fingerprint update not sensitive to kwargs like `load_from_cache_file` or `cache_file_name`.
Todo: tests for set_fingerprinting_enabled + documentation for all the above features | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1703/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1702 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1702/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1702/comments | https://api.github.com/repos/huggingface/datasets/issues/1702/events | https://github.com/huggingface/datasets/pull/1702 | 781,383,277 | MDExOlB1bGxSZXF1ZXN0NTUxMTE2NDc0 | 1,702 | Fix importlib metadata import in py38 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,032,230,000 | 1,610,102,835,000 | 1,610,102,835,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1702",
"html_url": "https://github.com/huggingface/datasets/pull/1702",
"diff_url": "https://github.com/huggingface/datasets/pull/1702.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1702.patch",
"merged_at": 1610102834000
} | In Python 3.8 there's no need to install `importlib_metadata` since it already exists as `importlib.metadata` in the standard lib. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1702/timeline | null | true |
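A hedged sketch of the version-gated import pattern that #1702 above relies on; the `importlib_metadata` alias is the conventional one and is an assumption here:

```python
import sys

if sys.version_info >= (3, 8):
    import importlib.metadata as importlib_metadata  # standard library since Python 3.8
else:
    import importlib_metadata  # third-party backport on older interpreters

print(importlib_metadata.version("datasets"))
```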
https://api.github.com/repos/huggingface/datasets/issues/1701 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1701/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1701/comments | https://api.github.com/repos/huggingface/datasets/issues/1701/events | https://github.com/huggingface/datasets/issues/1701 | 781,345,717 | MDU6SXNzdWU3ODEzNDU3MTc= | 1,701 | Some datasets miss dataset_infos.json or dummy_data.zip | {
"login": "madlag",
"id": 272253,
"node_id": "MDQ6VXNlcjI3MjI1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/madlag",
"html_url": "https://github.com/madlag",
"followers_url": "https://api.github.com/users/madlag/followers",
"following_url": "https://api.github.com/users/madlag/following{/other_user}",
"gists_url": "https://api.github.com/users/madlag/gists{/gist_id}",
"starred_url": "https://api.github.com/users/madlag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/madlag/subscriptions",
"organizations_url": "https://api.github.com/users/madlag/orgs",
"repos_url": "https://api.github.com/users/madlag/repos",
"events_url": "https://api.github.com/users/madlag/events{/privacy}",
"received_events_url": "https://api.github.com/users/madlag/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,029,033,000 | 1,610,458,846,000 | null | CONTRIBUTOR | null | null | null | While working on the dataset README generation script at https://github.com/madlag/datasets_readme_generator, I noticed that some datasets are missing a dataset_infos.json:
```
c4
lm1b
reclor
wikihow
```
And some do not have a dummy_data.zip:
```
kor_nli
math_dataset
mlqa
ms_marco
newsgroup
qa4mre
qangaroo
reddit_tifu
super_glue
trivia_qa
web_of_science
wmt14
wmt15
wmt16
wmt17
wmt18
wmt19
xtreme
```
But it seems that some of the latter do have a "dummy" directory. A minimal sketch of the kind of check that produces these lists follows.
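Assuming a local clone of the repository (the `datasets/` path is an assumption):
```python
from pathlib import Path

root = Path("datasets")  # dataset scripts inside a local huggingface/datasets clone
for folder in sorted(path for path in root.iterdir() if path.is_dir()):
    # dummy_data.zip files live under each dataset's "dummy" directory
    missing = [name for name in ("dataset_infos.json", "dummy") if not (folder / name).exists()]
    if missing:
        print(f"{folder.name}: missing {', '.join(missing)}")
```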
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1701/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1701/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1700/comments | https://api.github.com/repos/huggingface/datasets/issues/1700/events | https://github.com/huggingface/datasets/pull/1700 | 781,333,589 | MDExOlB1bGxSZXF1ZXN0NTUxMDc1NTg2 | 1,700 | Update Curiosity dialogs DatasetCard | {
"login": "vineeths96",
"id": 50873201,
"node_id": "MDQ6VXNlcjUwODczMjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vineeths96",
"html_url": "https://github.com/vineeths96",
"followers_url": "https://api.github.com/users/vineeths96/followers",
"following_url": "https://api.github.com/users/vineeths96/following{/other_user}",
"gists_url": "https://api.github.com/users/vineeths96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vineeths96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vineeths96/subscriptions",
"organizations_url": "https://api.github.com/users/vineeths96/orgs",
"repos_url": "https://api.github.com/users/vineeths96/repos",
"events_url": "https://api.github.com/users/vineeths96/events{/privacy}",
"received_events_url": "https://api.github.com/users/vineeths96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,027,967,000 | 1,610,477,492,000 | 1,610,477,492,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1700",
"html_url": "https://github.com/huggingface/datasets/pull/1700",
"diff_url": "https://github.com/huggingface/datasets/pull/1700.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1700.patch",
"merged_at": 1610477492000
} | Update Curiosity dialogs DatasetCard
Some entries in the data fields section are yet to be filled, since there is little information available regarding those fields. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1700/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1699/comments | https://api.github.com/repos/huggingface/datasets/issues/1699/events | https://github.com/huggingface/datasets/pull/1699 | 781,271,558 | MDExOlB1bGxSZXF1ZXN0NTUxMDIzODE5 | 1,699 | Update DBRD dataset card and download URL | {
"login": "benjaminvdb",
"id": 8875786,
"node_id": "MDQ6VXNlcjg4NzU3ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8875786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjaminvdb",
"html_url": "https://github.com/benjaminvdb",
"followers_url": "https://api.github.com/users/benjaminvdb/followers",
"following_url": "https://api.github.com/users/benjaminvdb/following{/other_user}",
"gists_url": "https://api.github.com/users/benjaminvdb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benjaminvdb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjaminvdb/subscriptions",
"organizations_url": "https://api.github.com/users/benjaminvdb/orgs",
"repos_url": "https://api.github.com/users/benjaminvdb/repos",
"events_url": "https://api.github.com/users/benjaminvdb/events{/privacy}",
"received_events_url": "https://api.github.com/users/benjaminvdb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,021,803,000 | 1,610,026,899,000 | 1,610,026,859,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1699",
"html_url": "https://github.com/huggingface/datasets/pull/1699",
"diff_url": "https://github.com/huggingface/datasets/pull/1699.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1699.patch",
"merged_at": 1610026859000
} | I've added the Dutch Book Reviews Dataset (DBRD) during the recent sprint. This pull request makes two minor changes:
1. I'm changing the download URL from Google Drive to the dataset's GitHub release package. This is now possible because of PR #1316.
2. I've updated the dataset card.
Cheers! 😄 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1699/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1698/comments | https://api.github.com/repos/huggingface/datasets/issues/1698/events | https://github.com/huggingface/datasets/pull/1698 | 781,152,561 | MDExOlB1bGxSZXF1ZXN0NTUwOTI0ODQ3 | 1,698 | Update Coached Conv Pref DatasetCard | {
"login": "vineeths96",
"id": 50873201,
"node_id": "MDQ6VXNlcjUwODczMjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vineeths96",
"html_url": "https://github.com/vineeths96",
"followers_url": "https://api.github.com/users/vineeths96/followers",
"following_url": "https://api.github.com/users/vineeths96/following{/other_user}",
"gists_url": "https://api.github.com/users/vineeths96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vineeths96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vineeths96/subscriptions",
"organizations_url": "https://api.github.com/users/vineeths96/orgs",
"repos_url": "https://api.github.com/users/vineeths96/repos",
"events_url": "https://api.github.com/users/vineeths96/events{/privacy}",
"received_events_url": "https://api.github.com/users/vineeths96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,010,436,000 | 1,610,125,473,000 | 1,610,125,472,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1698",
"html_url": "https://github.com/huggingface/datasets/pull/1698",
"diff_url": "https://github.com/huggingface/datasets/pull/1698.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1698.patch",
"merged_at": 1610125472000
} | Update Coached Conversational Preference DatasetCard | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1698/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1697/comments | https://api.github.com/repos/huggingface/datasets/issues/1697/events | https://github.com/huggingface/datasets/pull/1697 | 781,126,579 | MDExOlB1bGxSZXF1ZXN0NTUwOTAzNzI5 | 1,697 | Update DialogRE DatasetCard | {
"login": "vineeths96",
"id": 50873201,
"node_id": "MDQ6VXNlcjUwODczMjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vineeths96",
"html_url": "https://github.com/vineeths96",
"followers_url": "https://api.github.com/users/vineeths96/followers",
"following_url": "https://api.github.com/users/vineeths96/following{/other_user}",
"gists_url": "https://api.github.com/users/vineeths96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vineeths96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vineeths96/subscriptions",
"organizations_url": "https://api.github.com/users/vineeths96/orgs",
"repos_url": "https://api.github.com/users/vineeths96/repos",
"events_url": "https://api.github.com/users/vineeths96/events{/privacy}",
"received_events_url": "https://api.github.com/users/vineeths96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,007,753,000 | 1,610,026,468,000 | 1,610,026,468,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1697",
"html_url": "https://github.com/huggingface/datasets/pull/1697",
"diff_url": "https://github.com/huggingface/datasets/pull/1697.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1697.patch",
"merged_at": 1610026468000
} | Update the information in the dataset card for the Dialog RE dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1697/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1696/comments | https://api.github.com/repos/huggingface/datasets/issues/1696/events | https://github.com/huggingface/datasets/issues/1696 | 781,096,918 | MDU6SXNzdWU3ODEwOTY5MTg= | 1,696 | Unable to install datasets | {
"login": "glee2429",
"id": 12635475,
"node_id": "MDQ6VXNlcjEyNjM1NDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/12635475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/glee2429",
"html_url": "https://github.com/glee2429",
"followers_url": "https://api.github.com/users/glee2429/followers",
"following_url": "https://api.github.com/users/glee2429/following{/other_user}",
"gists_url": "https://api.github.com/users/glee2429/gists{/gist_id}",
"starred_url": "https://api.github.com/users/glee2429/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/glee2429/subscriptions",
"organizations_url": "https://api.github.com/users/glee2429/orgs",
"repos_url": "https://api.github.com/users/glee2429/repos",
"events_url": "https://api.github.com/users/glee2429/events{/privacy}",
"received_events_url": "https://api.github.com/users/glee2429/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,610,004,277,000 | 1,610,065,985,000 | 1,610,057,165,000 | NONE | null | null | null | **Edit**
I believe there's a bug in the package when installing it with Python 3.9. I recommend sticking with previous Python versions. Thanks, @thomwolf, for the insight!
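A small guard sketch of that workaround; the version bound reflects this report rather than an official requirement:
```python
import sys

# datasets 1.2.0 failed to build its numpy/pyarrow dependencies here on Python 3.9.
if sys.version_info >= (3, 9):
    raise SystemExit("Create the virtual environment with Python 3.8 or earlier.")
```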
**Short description**
I followed the instructions for installing datasets (https://huggingface.co/docs/datasets/installation.html). However, when I tried to install datasets using `pip install datasets`, I got a massive error message after pip got stuck at "Installing build dependencies..."
I wondered whether this problem could be fixed by creating a virtual environment, but that didn't help. Can anyone offer some advice on how to fix this issue?
Here's an error message:
`(env) Gas-MacBook-Pro:Downloads destiny$ pip install datasets
Collecting datasets
Using cached datasets-1.2.0-py3-none-any.whl (159 kB)
Collecting numpy>=1.17
Using cached numpy-1.19.5-cp39-cp39-macosx_10_9_x86_64.whl (15.6 MB)
Collecting pyarrow>=0.17.1
Using cached pyarrow-2.0.0.tar.gz (58.9 MB)
....
_configtest.c:9:5: warning: incompatible redeclaration of library function 'ceilf' [-Wincompatible-library-redeclaration]
int ceilf (void);
^
_configtest.c:9:5: note: 'ceilf' is a builtin with type 'float (float)'
_configtest.c:10:5: warning: incompatible redeclaration of library function 'rintf' [-Wincompatible-library-redeclaration]
int rintf (void);
^
_configtest.c:10:5: note: 'rintf' is a builtin with type 'float (float)'
_configtest.c:11:5: warning: incompatible redeclaration of library function 'truncf' [-Wincompatible-library-redeclaration]
int truncf (void);
^
_configtest.c:11:5: note: 'truncf' is a builtin with type 'float (float)'
_configtest.c:12:5: warning: incompatible redeclaration of library function 'sqrtf' [-Wincompatible-library-redeclaration]
int sqrtf (void);
^
_configtest.c:12:5: note: 'sqrtf' is a builtin with type 'float (float)'
_configtest.c:13:5: warning: incompatible redeclaration of library function 'log10f' [-Wincompatible-library-redeclaration]
int log10f (void);
^
_configtest.c:13:5: note: 'log10f' is a builtin with type 'float (float)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'logf' [-Wincompatible-library-redeclaration]
int logf (void);
^
_configtest.c:14:5: note: 'logf' is a builtin with type 'float (float)'
_configtest.c:15:5: warning: incompatible redeclaration of library function 'log1pf' [-Wincompatible-library-redeclaration]
int log1pf (void);
^
_configtest.c:15:5: note: 'log1pf' is a builtin with type 'float (float)'
_configtest.c:16:5: warning: incompatible redeclaration of library function 'expf' [-Wincompatible-library-redeclaration]
int expf (void);
^
_configtest.c:16:5: note: 'expf' is a builtin with type 'float (float)'
_configtest.c:17:5: warning: incompatible redeclaration of library function 'expm1f' [-Wincompatible-library-redeclaration]
int expm1f (void);
^
_configtest.c:17:5: note: 'expm1f' is a builtin with type 'float (float)'
_configtest.c:18:5: warning: incompatible redeclaration of library function 'asinf' [-Wincompatible-library-redeclaration]
int asinf (void);
^
_configtest.c:18:5: note: 'asinf' is a builtin with type 'float (float)'
_configtest.c:19:5: warning: incompatible redeclaration of library function 'acosf' [-Wincompatible-library-redeclaration]
int acosf (void);
^
_configtest.c:19:5: note: 'acosf' is a builtin with type 'float (float)'
_configtest.c:20:5: warning: incompatible redeclaration of library function 'atanf' [-Wincompatible-library-redeclaration]
int atanf (void);
^
_configtest.c:20:5: note: 'atanf' is a builtin with type 'float (float)'
_configtest.c:21:5: warning: incompatible redeclaration of library function 'asinhf' [-Wincompatible-library-redeclaration]
int asinhf (void);
^
_configtest.c:21:5: note: 'asinhf' is a builtin with type 'float (float)'
_configtest.c:22:5: warning: incompatible redeclaration of library function 'acoshf' [-Wincompatible-library-redeclaration]
int acoshf (void);
^
_configtest.c:22:5: note: 'acoshf' is a builtin with type 'float (float)'
_configtest.c:23:5: warning: incompatible redeclaration of library function 'atanhf' [-Wincompatible-library-redeclaration]
int atanhf (void);
^
_configtest.c:23:5: note: 'atanhf' is a builtin with type 'float (float)'
_configtest.c:24:5: warning: incompatible redeclaration of library function 'hypotf' [-Wincompatible-library-redeclaration]
int hypotf (void);
^
_configtest.c:24:5: note: 'hypotf' is a builtin with type 'float (float, float)'
_configtest.c:25:5: warning: incompatible redeclaration of library function 'atan2f' [-Wincompatible-library-redeclaration]
int atan2f (void);
^
_configtest.c:25:5: note: 'atan2f' is a builtin with type 'float (float, float)'
_configtest.c:26:5: warning: incompatible redeclaration of library function 'powf' [-Wincompatible-library-redeclaration]
int powf (void);
^
_configtest.c:26:5: note: 'powf' is a builtin with type 'float (float, float)'
_configtest.c:27:5: warning: incompatible redeclaration of library function 'fmodf' [-Wincompatible-library-redeclaration]
int fmodf (void);
^
_configtest.c:27:5: note: 'fmodf' is a builtin with type 'float (float, float)'
_configtest.c:28:5: warning: incompatible redeclaration of library function 'modff' [-Wincompatible-library-redeclaration]
int modff (void);
^
_configtest.c:28:5: note: 'modff' is a builtin with type 'float (float, float *)'
_configtest.c:29:5: warning: incompatible redeclaration of library function 'frexpf' [-Wincompatible-library-redeclaration]
int frexpf (void);
^
_configtest.c:29:5: note: 'frexpf' is a builtin with type 'float (float, int *)'
_configtest.c:30:5: warning: incompatible redeclaration of library function 'ldexpf' [-Wincompatible-library-redeclaration]
int ldexpf (void);
^
_configtest.c:30:5: note: 'ldexpf' is a builtin with type 'float (float, int)'
_configtest.c:31:5: warning: incompatible redeclaration of library function 'exp2f' [-Wincompatible-library-redeclaration]
int exp2f (void);
^
_configtest.c:31:5: note: 'exp2f' is a builtin with type 'float (float)'
_configtest.c:32:5: warning: incompatible redeclaration of library function 'log2f' [-Wincompatible-library-redeclaration]
int log2f (void);
^
_configtest.c:32:5: note: 'log2f' is a builtin with type 'float (float)'
_configtest.c:33:5: warning: incompatible redeclaration of library function 'copysignf' [-Wincompatible-library-redeclaration]
int copysignf (void);
^
_configtest.c:33:5: note: 'copysignf' is a builtin with type 'float (float, float)'
_configtest.c:34:5: warning: incompatible redeclaration of library function 'nextafterf' [-Wincompatible-library-redeclaration]
int nextafterf (void);
^
_configtest.c:34:5: note: 'nextafterf' is a builtin with type 'float (float, float)'
_configtest.c:35:5: warning: incompatible redeclaration of library function 'cbrtf' [-Wincompatible-library-redeclaration]
int cbrtf (void);
^
_configtest.c:35:5: note: 'cbrtf' is a builtin with type 'float (float)'
35 warnings generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:1:5: warning: incompatible redeclaration of library function 'sinl' [-Wincompatible-library-redeclaration]
int sinl (void);
^
_configtest.c:1:5: note: 'sinl' is a builtin with type 'long double (long double)'
_configtest.c:2:5: warning: incompatible redeclaration of library function 'cosl' [-Wincompatible-library-redeclaration]
int cosl (void);
^
_configtest.c:2:5: note: 'cosl' is a builtin with type 'long double (long double)'
_configtest.c:3:5: warning: incompatible redeclaration of library function 'tanl' [-Wincompatible-library-redeclaration]
int tanl (void);
^
_configtest.c:3:5: note: 'tanl' is a builtin with type 'long double (long double)'
_configtest.c:4:5: warning: incompatible redeclaration of library function 'sinhl' [-Wincompatible-library-redeclaration]
int sinhl (void);
^
_configtest.c:4:5: note: 'sinhl' is a builtin with type 'long double (long double)'
_configtest.c:5:5: warning: incompatible redeclaration of library function 'coshl' [-Wincompatible-library-redeclaration]
int coshl (void);
^
_configtest.c:5:5: note: 'coshl' is a builtin with type 'long double (long double)'
_configtest.c:6:5: warning: incompatible redeclaration of library function 'tanhl' [-Wincompatible-library-redeclaration]
int tanhl (void);
^
_configtest.c:6:5: note: 'tanhl' is a builtin with type 'long double (long double)'
_configtest.c:7:5: warning: incompatible redeclaration of library function 'fabsl' [-Wincompatible-library-redeclaration]
int fabsl (void);
^
_configtest.c:7:5: note: 'fabsl' is a builtin with type 'long double (long double)'
_configtest.c:8:5: warning: incompatible redeclaration of library function 'floorl' [-Wincompatible-library-redeclaration]
int floorl (void);
^
_configtest.c:8:5: note: 'floorl' is a builtin with type 'long double (long double)'
_configtest.c:9:5: warning: incompatible redeclaration of library function 'ceill' [-Wincompatible-library-redeclaration]
int ceill (void);
^
_configtest.c:9:5: note: 'ceill' is a builtin with type 'long double (long double)'
_configtest.c:10:5: warning: incompatible redeclaration of library function 'rintl' [-Wincompatible-library-redeclaration]
int rintl (void);
^
_configtest.c:10:5: note: 'rintl' is a builtin with type 'long double (long double)'
_configtest.c:11:5: warning: incompatible redeclaration of library function 'truncl' [-Wincompatible-library-redeclaration]
int truncl (void);
^
_configtest.c:11:5: note: 'truncl' is a builtin with type 'long double (long double)'
_configtest.c:12:5: warning: incompatible redeclaration of library function 'sqrtl' [-Wincompatible-library-redeclaration]
int sqrtl (void);
^
_configtest.c:12:5: note: 'sqrtl' is a builtin with type 'long double (long double)'
_configtest.c:13:5: warning: incompatible redeclaration of library function 'log10l' [-Wincompatible-library-redeclaration]
int log10l (void);
^
_configtest.c:13:5: note: 'log10l' is a builtin with type 'long double (long double)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'logl' [-Wincompatible-library-redeclaration]
int logl (void);
^
_configtest.c:14:5: note: 'logl' is a builtin with type 'long double (long double)'
_configtest.c:15:5: warning: incompatible redeclaration of library function 'log1pl' [-Wincompatible-library-redeclaration]
int log1pl (void);
^
_configtest.c:15:5: note: 'log1pl' is a builtin with type 'long double (long double)'
_configtest.c:16:5: warning: incompatible redeclaration of library function 'expl' [-Wincompatible-library-redeclaration]
int expl (void);
^
_configtest.c:16:5: note: 'expl' is a builtin with type 'long double (long double)'
_configtest.c:17:5: warning: incompatible redeclaration of library function 'expm1l' [-Wincompatible-library-redeclaration]
int expm1l (void);
^
_configtest.c:17:5: note: 'expm1l' is a builtin with type 'long double (long double)'
_configtest.c:18:5: warning: incompatible redeclaration of library function 'asinl' [-Wincompatible-library-redeclaration]
int asinl (void);
^
_configtest.c:18:5: note: 'asinl' is a builtin with type 'long double (long double)'
_configtest.c:19:5: warning: incompatible redeclaration of library function 'acosl' [-Wincompatible-library-redeclaration]
int acosl (void);
^
_configtest.c:19:5: note: 'acosl' is a builtin with type 'long double (long double)'
_configtest.c:20:5: warning: incompatible redeclaration of library function 'atanl' [-Wincompatible-library-redeclaration]
int atanl (void);
^
_configtest.c:20:5: note: 'atanl' is a builtin with type 'long double (long double)'
_configtest.c:21:5: warning: incompatible redeclaration of library function 'asinhl' [-Wincompatible-library-redeclaration]
int asinhl (void);
^
_configtest.c:21:5: note: 'asinhl' is a builtin with type 'long double (long double)'
_configtest.c:22:5: warning: incompatible redeclaration of library function 'acoshl' [-Wincompatible-library-redeclaration]
int acoshl (void);
^
_configtest.c:22:5: note: 'acoshl' is a builtin with type 'long double (long double)'
_configtest.c:23:5: warning: incompatible redeclaration of library function 'atanhl' [-Wincompatible-library-redeclaration]
int atanhl (void);
^
_configtest.c:23:5: note: 'atanhl' is a builtin with type 'long double (long double)'
_configtest.c:24:5: warning: incompatible redeclaration of library function 'hypotl' [-Wincompatible-library-redeclaration]
int hypotl (void);
^
_configtest.c:24:5: note: 'hypotl' is a builtin with type 'long double (long double, long double)'
_configtest.c:25:5: warning: incompatible redeclaration of library function 'atan2l' [-Wincompatible-library-redeclaration]
int atan2l (void);
^
_configtest.c:25:5: note: 'atan2l' is a builtin with type 'long double (long double, long double)'
_configtest.c:26:5: warning: incompatible redeclaration of library function 'powl' [-Wincompatible-library-redeclaration]
int powl (void);
^
_configtest.c:26:5: note: 'powl' is a builtin with type 'long double (long double, long double)'
_configtest.c:27:5: warning: incompatible redeclaration of library function 'fmodl' [-Wincompatible-library-redeclaration]
int fmodl (void);
^
_configtest.c:27:5: note: 'fmodl' is a builtin with type 'long double (long double, long double)'
_configtest.c:28:5: warning: incompatible redeclaration of library function 'modfl' [-Wincompatible-library-redeclaration]
int modfl (void);
^
_configtest.c:28:5: note: 'modfl' is a builtin with type 'long double (long double, long double *)'
_configtest.c:29:5: warning: incompatible redeclaration of library function 'frexpl' [-Wincompatible-library-redeclaration]
int frexpl (void);
^
_configtest.c:29:5: note: 'frexpl' is a builtin with type 'long double (long double, int *)'
_configtest.c:30:5: warning: incompatible redeclaration of library function 'ldexpl' [-Wincompatible-library-redeclaration]
int ldexpl (void);
^
_configtest.c:30:5: note: 'ldexpl' is a builtin with type 'long double (long double, int)'
_configtest.c:31:5: warning: incompatible redeclaration of library function 'exp2l' [-Wincompatible-library-redeclaration]
int exp2l (void);
^
_configtest.c:31:5: note: 'exp2l' is a builtin with type 'long double (long double)'
_configtest.c:32:5: warning: incompatible redeclaration of library function 'log2l' [-Wincompatible-library-redeclaration]
int log2l (void);
^
_configtest.c:32:5: note: 'log2l' is a builtin with type 'long double (long double)'
_configtest.c:33:5: warning: incompatible redeclaration of library function 'copysignl' [-Wincompatible-library-redeclaration]
int copysignl (void);
^
_configtest.c:33:5: note: 'copysignl' is a builtin with type 'long double (long double, long double)'
_configtest.c:34:5: warning: incompatible redeclaration of library function 'nextafterl' [-Wincompatible-library-redeclaration]
int nextafterl (void);
^
_configtest.c:34:5: note: 'nextafterl' is a builtin with type 'long double (long double, long double)'
_configtest.c:35:5: warning: incompatible redeclaration of library function 'cbrtl' [-Wincompatible-library-redeclaration]
int cbrtl (void);
^
_configtest.c:35:5: note: 'cbrtl' is a builtin with type 'long double (long double)'
35 warnings generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:8:12: error: use of undeclared identifier 'HAVE_DECL_SIGNBIT'
(void) HAVE_DECL_SIGNBIT;
^
1 error generated.
failure.
removing: _configtest.c _configtest.o
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:1:5: warning: incompatible redeclaration of library function 'cabs' [-Wincompatible-library-redeclaration]
int cabs (void);
^
_configtest.c:1:5: note: 'cabs' is a builtin with type 'double (_Complex double)'
_configtest.c:2:5: warning: incompatible redeclaration of library function 'cacos' [-Wincompatible-library-redeclaration]
int cacos (void);
^
_configtest.c:2:5: note: 'cacos' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:3:5: warning: incompatible redeclaration of library function 'cacosh' [-Wincompatible-library-redeclaration]
int cacosh (void);
^
_configtest.c:3:5: note: 'cacosh' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:4:5: warning: incompatible redeclaration of library function 'carg' [-Wincompatible-library-redeclaration]
int carg (void);
^
_configtest.c:4:5: note: 'carg' is a builtin with type 'double (_Complex double)'
_configtest.c:5:5: warning: incompatible redeclaration of library function 'casin' [-Wincompatible-library-redeclaration]
int casin (void);
^
_configtest.c:5:5: note: 'casin' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:6:5: warning: incompatible redeclaration of library function 'casinh' [-Wincompatible-library-redeclaration]
int casinh (void);
^
_configtest.c:6:5: note: 'casinh' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:7:5: warning: incompatible redeclaration of library function 'catan' [-Wincompatible-library-redeclaration]
int catan (void);
^
_configtest.c:7:5: note: 'catan' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:8:5: warning: incompatible redeclaration of library function 'catanh' [-Wincompatible-library-redeclaration]
int catanh (void);
^
_configtest.c:8:5: note: 'catanh' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:9:5: warning: incompatible redeclaration of library function 'ccos' [-Wincompatible-library-redeclaration]
int ccos (void);
^
_configtest.c:9:5: note: 'ccos' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:10:5: warning: incompatible redeclaration of library function 'ccosh' [-Wincompatible-library-redeclaration]
int ccosh (void);
^
_configtest.c:10:5: note: 'ccosh' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:11:5: warning: incompatible redeclaration of library function 'cexp' [-Wincompatible-library-redeclaration]
int cexp (void);
^
_configtest.c:11:5: note: 'cexp' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:12:5: warning: incompatible redeclaration of library function 'cimag' [-Wincompatible-library-redeclaration]
int cimag (void);
^
_configtest.c:12:5: note: 'cimag' is a builtin with type 'double (_Complex double)'
_configtest.c:13:5: warning: incompatible redeclaration of library function 'clog' [-Wincompatible-library-redeclaration]
int clog (void);
^
_configtest.c:13:5: note: 'clog' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'conj' [-Wincompatible-library-redeclaration]
int conj (void);
^
_configtest.c:14:5: note: 'conj' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:15:5: warning: incompatible redeclaration of library function 'cpow' [-Wincompatible-library-redeclaration]
int cpow (void);
^
_configtest.c:15:5: note: 'cpow' is a builtin with type '_Complex double (_Complex double, _Complex double)'
_configtest.c:16:5: warning: incompatible redeclaration of library function 'cproj' [-Wincompatible-library-redeclaration]
int cproj (void);
^
_configtest.c:16:5: note: 'cproj' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:17:5: warning: incompatible redeclaration of library function 'creal' [-Wincompatible-library-redeclaration]
int creal (void);
^
_configtest.c:17:5: note: 'creal' is a builtin with type 'double (_Complex double)'
_configtest.c:18:5: warning: incompatible redeclaration of library function 'csin' [-Wincompatible-library-redeclaration]
int csin (void);
^
_configtest.c:18:5: note: 'csin' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:19:5: warning: incompatible redeclaration of library function 'csinh' [-Wincompatible-library-redeclaration]
int csinh (void);
^
_configtest.c:19:5: note: 'csinh' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:20:5: warning: incompatible redeclaration of library function 'csqrt' [-Wincompatible-library-redeclaration]
int csqrt (void);
^
_configtest.c:20:5: note: 'csqrt' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:21:5: warning: incompatible redeclaration of library function 'ctan' [-Wincompatible-library-redeclaration]
int ctan (void);
^
_configtest.c:21:5: note: 'ctan' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:22:5: warning: incompatible redeclaration of library function 'ctanh' [-Wincompatible-library-redeclaration]
int ctanh (void);
^
_configtest.c:22:5: note: 'ctanh' is a builtin with type '_Complex double (_Complex double)'
22 warnings generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:1:5: warning: incompatible redeclaration of library function 'cabsf' [-Wincompatible-library-redeclaration]
int cabsf (void);
^
_configtest.c:1:5: note: 'cabsf' is a builtin with type 'float (_Complex float)'
_configtest.c:2:5: warning: incompatible redeclaration of library function 'cacosf' [-Wincompatible-library-redeclaration]
int cacosf (void);
^
_configtest.c:2:5: note: 'cacosf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:3:5: warning: incompatible redeclaration of library function 'cacoshf' [-Wincompatible-library-redeclaration]
int cacoshf (void);
^
_configtest.c:3:5: note: 'cacoshf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:4:5: warning: incompatible redeclaration of library function 'cargf' [-Wincompatible-library-redeclaration]
int cargf (void);
^
_configtest.c:4:5: note: 'cargf' is a builtin with type 'float (_Complex float)'
_configtest.c:5:5: warning: incompatible redeclaration of library function 'casinf' [-Wincompatible-library-redeclaration]
int casinf (void);
^
_configtest.c:5:5: note: 'casinf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:6:5: warning: incompatible redeclaration of library function 'casinhf' [-Wincompatible-library-redeclaration]
int casinhf (void);
^
_configtest.c:6:5: note: 'casinhf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:7:5: warning: incompatible redeclaration of library function 'catanf' [-Wincompatible-library-redeclaration]
int catanf (void);
^
_configtest.c:7:5: note: 'catanf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:8:5: warning: incompatible redeclaration of library function 'catanhf' [-Wincompatible-library-redeclaration]
int catanhf (void);
^
_configtest.c:8:5: note: 'catanhf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:9:5: warning: incompatible redeclaration of library function 'ccosf' [-Wincompatible-library-redeclaration]
int ccosf (void);
^
_configtest.c:9:5: note: 'ccosf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:10:5: warning: incompatible redeclaration of library function 'ccoshf' [-Wincompatible-library-redeclaration]
int ccoshf (void);
^
_configtest.c:10:5: note: 'ccoshf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:11:5: warning: incompatible redeclaration of library function 'cexpf' [-Wincompatible-library-redeclaration]
int cexpf (void);
^
_configtest.c:11:5: note: 'cexpf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:12:5: warning: incompatible redeclaration of library function 'cimagf' [-Wincompatible-library-redeclaration]
int cimagf (void);
^
_configtest.c:12:5: note: 'cimagf' is a builtin with type 'float (_Complex float)'
_configtest.c:13:5: warning: incompatible redeclaration of library function 'clogf' [-Wincompatible-library-redeclaration]
int clogf (void);
^
_configtest.c:13:5: note: 'clogf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'conjf' [-Wincompatible-library-redeclaration]
int conjf (void);
^
_configtest.c:14:5: note: 'conjf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:15:5: warning: incompatible redeclaration of library function 'cpowf' [-Wincompatible-library-redeclaration]
int cpowf (void);
^
_configtest.c:15:5: note: 'cpowf' is a builtin with type '_Complex float (_Complex float, _Complex float)'
_configtest.c:16:5: warning: incompatible redeclaration of library function 'cprojf' [-Wincompatible-library-redeclaration]
int cprojf (void);
^
_configtest.c:16:5: note: 'cprojf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:17:5: warning: incompatible redeclaration of library function 'crealf' [-Wincompatible-library-redeclaration]
int crealf (void);
^
_configtest.c:17:5: note: 'crealf' is a builtin with type 'float (_Complex float)'
_configtest.c:18:5: warning: incompatible redeclaration of library function 'csinf' [-Wincompatible-library-redeclaration]
int csinf (void);
^
_configtest.c:18:5: note: 'csinf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:19:5: warning: incompatible redeclaration of library function 'csinhf' [-Wincompatible-library-redeclaration]
int csinhf (void);
^
_configtest.c:19:5: note: 'csinhf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:20:5: warning: incompatible redeclaration of library function 'csqrtf' [-Wincompatible-library-redeclaration]
int csqrtf (void);
^
_configtest.c:20:5: note: 'csqrtf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:21:5: warning: incompatible redeclaration of library function 'ctanf' [-Wincompatible-library-redeclaration]
int ctanf (void);
^
_configtest.c:21:5: note: 'ctanf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:22:5: warning: incompatible redeclaration of library function 'ctanhf' [-Wincompatible-library-redeclaration]
int ctanhf (void);
^
_configtest.c:22:5: note: 'ctanhf' is a builtin with type '_Complex float (_Complex float)'
22 warnings generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:1:5: warning: incompatible redeclaration of library function 'cabsl' [-Wincompatible-library-redeclaration]
int cabsl (void);
^
_configtest.c:1:5: note: 'cabsl' is a builtin with type 'long double (_Complex long double)'
_configtest.c:2:5: warning: incompatible redeclaration of library function 'cacosl' [-Wincompatible-library-redeclaration]
int cacosl (void);
^
_configtest.c:2:5: note: 'cacosl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:3:5: warning: incompatible redeclaration of library function 'cacoshl' [-Wincompatible-library-redeclaration]
int cacoshl (void);
^
_configtest.c:3:5: note: 'cacoshl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:4:5: warning: incompatible redeclaration of library function 'cargl' [-Wincompatible-library-redeclaration]
int cargl (void);
^
_configtest.c:4:5: note: 'cargl' is a builtin with type 'long double (_Complex long double)'
_configtest.c:5:5: warning: incompatible redeclaration of library function 'casinl' [-Wincompatible-library-redeclaration]
int casinl (void);
^
_configtest.c:5:5: note: 'casinl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:6:5: warning: incompatible redeclaration of library function 'casinhl' [-Wincompatible-library-redeclaration]
int casinhl (void);
^
_configtest.c:6:5: note: 'casinhl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:7:5: warning: incompatible redeclaration of library function 'catanl' [-Wincompatible-library-redeclaration]
int catanl (void);
^
_configtest.c:7:5: note: 'catanl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:8:5: warning: incompatible redeclaration of library function 'catanhl' [-Wincompatible-library-redeclaration]
int catanhl (void);
^
_configtest.c:8:5: note: 'catanhl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:9:5: warning: incompatible redeclaration of library function 'ccosl' [-Wincompatible-library-redeclaration]
int ccosl (void);
^
_configtest.c:9:5: note: 'ccosl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:10:5: warning: incompatible redeclaration of library function 'ccoshl' [-Wincompatible-library-redeclaration]
int ccoshl (void);
^
_configtest.c:10:5: note: 'ccoshl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:11:5: warning: incompatible redeclaration of library function 'cexpl' [-Wincompatible-library-redeclaration]
int cexpl (void);
^
_configtest.c:11:5: note: 'cexpl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:12:5: warning: incompatible redeclaration of library function 'cimagl' [-Wincompatible-library-redeclaration]
int cimagl (void);
^
_configtest.c:12:5: note: 'cimagl' is a builtin with type 'long double (_Complex long double)'
_configtest.c:13:5: warning: incompatible redeclaration of library function 'clogl' [-Wincompatible-library-redeclaration]
int clogl (void);
^
_configtest.c:13:5: note: 'clogl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'conjl' [-Wincompatible-library-redeclaration]
int conjl (void);
^
_configtest.c:14:5: note: 'conjl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:15:5: warning: incompatible redeclaration of library function 'cpowl' [-Wincompatible-library-redeclaration]
int cpowl (void);
^
_configtest.c:15:5: note: 'cpowl' is a builtin with type '_Complex long double (_Complex long double, _Complex long double)'
_configtest.c:16:5: warning: incompatible redeclaration of library function 'cprojl' [-Wincompatible-library-redeclaration]
int cprojl (void);
^
_configtest.c:16:5: note: 'cprojl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:17:5: warning: incompatible redeclaration of library function 'creall' [-Wincompatible-library-redeclaration]
int creall (void);
^
_configtest.c:17:5: note: 'creall' is a builtin with type 'long double (_Complex long double)'
_configtest.c:18:5: warning: incompatible redeclaration of library function 'csinl' [-Wincompatible-library-redeclaration]
int csinl (void);
^
_configtest.c:18:5: note: 'csinl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:19:5: warning: incompatible redeclaration of library function 'csinhl' [-Wincompatible-library-redeclaration]
int csinhl (void);
^
_configtest.c:19:5: note: 'csinhl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:20:5: warning: incompatible redeclaration of library function 'csqrtl' [-Wincompatible-library-redeclaration]
int csqrtl (void);
^
_configtest.c:20:5: note: 'csqrtl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:21:5: warning: incompatible redeclaration of library function 'ctanl' [-Wincompatible-library-redeclaration]
int ctanl (void);
^
_configtest.c:21:5: note: 'ctanl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:22:5: warning: incompatible redeclaration of library function 'ctanhl' [-Wincompatible-library-redeclaration]
int ctanhl (void);
^
_configtest.c:22:5: note: 'ctanhl' is a builtin with type '_Complex long double (_Complex long double)'
22 warnings generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:2:12: warning: unused function 'static_func' [-Wunused-function]
static int static_func (char * restrict a)
^
1 warning generated.
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:3:19: warning: unused function 'static_func' [-Wunused-function]
static inline int static_func (void)
^
1 warning generated.
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
removing: _configtest.c _configtest.o _configtest.o.d
File: build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h
#define SIZEOF_PY_INTPTR_T 8
#define SIZEOF_OFF_T 8
#define SIZEOF_PY_LONG_LONG 8
#define MATHLIB
#define HAVE_SIN 1
#define HAVE_COS 1
#define HAVE_TAN 1
#define HAVE_SINH 1
#define HAVE_COSH 1
#define HAVE_TANH 1
#define HAVE_FABS 1
#define HAVE_FLOOR 1
#define HAVE_CEIL 1
#define HAVE_SQRT 1
#define HAVE_LOG10 1
#define HAVE_LOG 1
#define HAVE_EXP 1
#define HAVE_ASIN 1
#define HAVE_ACOS 1
#define HAVE_ATAN 1
#define HAVE_FMOD 1
#define HAVE_MODF 1
#define HAVE_FREXP 1
#define HAVE_LDEXP 1
#define HAVE_RINT 1
#define HAVE_TRUNC 1
#define HAVE_EXP2 1
#define HAVE_LOG2 1
#define HAVE_ATAN2 1
#define HAVE_POW 1
#define HAVE_NEXTAFTER 1
#define HAVE_STRTOLL 1
#define HAVE_STRTOULL 1
#define HAVE_CBRT 1
#define HAVE_STRTOLD_L 1
#define HAVE_BACKTRACE 1
#define HAVE_MADVISE 1
#define HAVE_XMMINTRIN_H 1
#define HAVE_EMMINTRIN_H 1
#define HAVE_XLOCALE_H 1
#define HAVE_DLFCN_H 1
#define HAVE_SYS_MMAN_H 1
#define HAVE___BUILTIN_ISNAN 1
#define HAVE___BUILTIN_ISINF 1
#define HAVE___BUILTIN_ISFINITE 1
#define HAVE___BUILTIN_BSWAP32 1
#define HAVE___BUILTIN_BSWAP64 1
#define HAVE___BUILTIN_EXPECT 1
#define HAVE___BUILTIN_MUL_OVERFLOW 1
#define HAVE___BUILTIN_CPU_SUPPORTS 1
#define HAVE__M_FROM_INT64 1
#define HAVE__MM_LOAD_PS 1
#define HAVE__MM_PREFETCH 1
#define HAVE__MM_LOAD_PD 1
#define HAVE___BUILTIN_PREFETCH 1
#define HAVE_LINK_AVX 1
#define HAVE_LINK_AVX2 1
#define HAVE_XGETBV 1
#define HAVE_ATTRIBUTE_NONNULL 1
#define HAVE_ATTRIBUTE_TARGET_AVX 1
#define HAVE_ATTRIBUTE_TARGET_AVX2 1
#define HAVE___THREAD 1
#define HAVE_SINF 1
#define HAVE_COSF 1
#define HAVE_TANF 1
#define HAVE_SINHF 1
#define HAVE_COSHF 1
#define HAVE_TANHF 1
#define HAVE_FABSF 1
#define HAVE_FLOORF 1
#define HAVE_CEILF 1
#define HAVE_RINTF 1
#define HAVE_TRUNCF 1
#define HAVE_SQRTF 1
#define HAVE_LOG10F 1
#define HAVE_LOGF 1
#define HAVE_LOG1PF 1
#define HAVE_EXPF 1
#define HAVE_EXPM1F 1
#define HAVE_ASINF 1
#define HAVE_ACOSF 1
#define HAVE_ATANF 1
#define HAVE_ASINHF 1
#define HAVE_ACOSHF 1
#define HAVE_ATANHF 1
#define HAVE_HYPOTF 1
#define HAVE_ATAN2F 1
#define HAVE_POWF 1
#define HAVE_FMODF 1
#define HAVE_MODFF 1
#define HAVE_FREXPF 1
#define HAVE_LDEXPF 1
#define HAVE_EXP2F 1
#define HAVE_LOG2F 1
#define HAVE_COPYSIGNF 1
#define HAVE_NEXTAFTERF 1
#define HAVE_CBRTF 1
#define HAVE_SINL 1
#define HAVE_COSL 1
#define HAVE_TANL 1
#define HAVE_SINHL 1
#define HAVE_COSHL 1
#define HAVE_TANHL 1
#define HAVE_FABSL 1
#define HAVE_FLOORL 1
#define HAVE_CEILL 1
#define HAVE_RINTL 1
#define HAVE_TRUNCL 1
#define HAVE_SQRTL 1
#define HAVE_LOG10L 1
#define HAVE_LOGL 1
#define HAVE_LOG1PL 1
#define HAVE_EXPL 1
#define HAVE_EXPM1L 1
#define HAVE_ASINL 1
#define HAVE_ACOSL 1
#define HAVE_ATANL 1
#define HAVE_ASINHL 1
#define HAVE_ACOSHL 1
#define HAVE_ATANHL 1
#define HAVE_HYPOTL 1
#define HAVE_ATAN2L 1
#define HAVE_POWL 1
#define HAVE_FMODL 1
#define HAVE_MODFL 1
#define HAVE_FREXPL 1
#define HAVE_LDEXPL 1
#define HAVE_EXP2L 1
#define HAVE_LOG2L 1
#define HAVE_COPYSIGNL 1
#define HAVE_NEXTAFTERL 1
#define HAVE_CBRTL 1
#define HAVE_DECL_SIGNBIT
#define HAVE_COMPLEX_H 1
#define HAVE_CABS 1
#define HAVE_CACOS 1
#define HAVE_CACOSH 1
#define HAVE_CARG 1
#define HAVE_CASIN 1
#define HAVE_CASINH 1
#define HAVE_CATAN 1
#define HAVE_CATANH 1
#define HAVE_CCOS 1
#define HAVE_CCOSH 1
#define HAVE_CEXP 1
#define HAVE_CIMAG 1
#define HAVE_CLOG 1
#define HAVE_CONJ 1
#define HAVE_CPOW 1
#define HAVE_CPROJ 1
#define HAVE_CREAL 1
#define HAVE_CSIN 1
#define HAVE_CSINH 1
#define HAVE_CSQRT 1
#define HAVE_CTAN 1
#define HAVE_CTANH 1
#define HAVE_CABSF 1
#define HAVE_CACOSF 1
#define HAVE_CACOSHF 1
#define HAVE_CARGF 1
#define HAVE_CASINF 1
#define HAVE_CASINHF 1
#define HAVE_CATANF 1
#define HAVE_CATANHF 1
#define HAVE_CCOSF 1
#define HAVE_CCOSHF 1
#define HAVE_CEXPF 1
#define HAVE_CIMAGF 1
#define HAVE_CLOGF 1
#define HAVE_CONJF 1
#define HAVE_CPOWF 1
#define HAVE_CPROJF 1
#define HAVE_CREALF 1
#define HAVE_CSINF 1
#define HAVE_CSINHF 1
#define HAVE_CSQRTF 1
#define HAVE_CTANF 1
#define HAVE_CTANHF 1
#define HAVE_CABSL 1
#define HAVE_CACOSL 1
#define HAVE_CACOSHL 1
#define HAVE_CARGL 1
#define HAVE_CASINL 1
#define HAVE_CASINHL 1
#define HAVE_CATANL 1
#define HAVE_CATANHL 1
#define HAVE_CCOSL 1
#define HAVE_CCOSHL 1
#define HAVE_CEXPL 1
#define HAVE_CIMAGL 1
#define HAVE_CLOGL 1
#define HAVE_CONJL 1
#define HAVE_CPOWL 1
#define HAVE_CPROJL 1
#define HAVE_CREALL 1
#define HAVE_CSINL 1
#define HAVE_CSINHL 1
#define HAVE_CSQRTL 1
#define HAVE_CTANL 1
#define HAVE_CTANHL 1
#define NPY_RESTRICT restrict
#define NPY_RELAXED_STRIDES_CHECKING 1
#define HAVE_LDOUBLE_INTEL_EXTENDED_16_BYTES_LE 1
#define NPY_PY3K 1
#ifndef __cplusplus
/* #undef inline */
#endif
#ifndef _NPY_NPY_CONFIG_H_
#error config.h should never be included directly, include npy_config.h instead
#endif
EOF
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h' to sources.
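(config.h is the ledger those probes produce: one HAVE_* macro per symbol that compiled and linked. A self-contained sketch of how such macros gate fallbacks downstream -- the HAVE_CEXP define is inlined here instead of including the generated header, and npy_cexp names numpy's own fallback implementation in npymath:)

#include <stdio.h>

#define HAVE_CEXP 1   /* inlined stand-in for the generated config.h above */

int main (void)
{
#ifdef HAVE_CEXP
    puts("libm provides cexp: numpy can call it directly");
#else
    puts("cexp missing: numpy substitutes its npy_cexp fallback");
#endif
    return 0;
}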
Generating build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:1:5: warning: incompatible redeclaration of library function 'exp' [-Wincompatible-library-redeclaration]
int exp (void);
^
_configtest.c:1:5: note: 'exp' is a builtin with type 'double (double)'
1 warning generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
File: build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h
#define NPY_SIZEOF_SHORT SIZEOF_SHORT
#define NPY_SIZEOF_INT SIZEOF_INT
#define NPY_SIZEOF_LONG SIZEOF_LONG
#define NPY_SIZEOF_FLOAT 4
#define NPY_SIZEOF_COMPLEX_FLOAT 8
#define NPY_SIZEOF_DOUBLE 8
#define NPY_SIZEOF_COMPLEX_DOUBLE 16
#define NPY_SIZEOF_LONGDOUBLE 16
#define NPY_SIZEOF_COMPLEX_LONGDOUBLE 32
#define NPY_SIZEOF_PY_INTPTR_T 8
#define NPY_SIZEOF_OFF_T 8
#define NPY_SIZEOF_PY_LONG_LONG 8
#define NPY_SIZEOF_LONGLONG 8
#define NPY_NO_SMP 0
#define NPY_HAVE_DECL_ISNAN
#define NPY_HAVE_DECL_ISINF
#define NPY_HAVE_DECL_ISFINITE
#define NPY_HAVE_DECL_SIGNBIT
#define NPY_USE_C99_COMPLEX 1
#define NPY_HAVE_COMPLEX_DOUBLE 1
#define NPY_HAVE_COMPLEX_FLOAT 1
#define NPY_HAVE_COMPLEX_LONG_DOUBLE 1
#define NPY_RELAXED_STRIDES_CHECKING 1
#define NPY_USE_C99_FORMATS 1
#define NPY_VISIBILITY_HIDDEN __attribute__((visibility("hidden")))
#define NPY_ABI_VERSION 0x01000009
#define NPY_API_VERSION 0x0000000D
#ifndef __STDC_FORMAT_MACROS
#define __STDC_FORMAT_MACROS 1
#endif
EOF
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h' to sources.
executing numpy/core/code_generators/generate_numpy_api.py
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h' to sources.
numpy.core - nothing done with h_files = ['build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h']
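(generate_numpy_api.py writes the __multiarray_api.h listed above, which exposes numpy's C API to extension modules as a table of function pointers loaded at import time. A minimal, hypothetical consumer -- "demo" is an invented module name:)

#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <numpy/arrayobject.h>   /* pulls in the generated __multiarray_api.h */

static struct PyModuleDef demo_def = {
    PyModuleDef_HEAD_INIT, "demo", NULL, -1, NULL
};

PyMODINIT_FUNC
PyInit_demo(void)
{
    import_array();   /* fills the PyArray_API pointer table;
                         the macro returns NULL from here on failure */
    return PyModule_Create(&demo_def);
}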
building extension "numpy.core._multiarray_tests" sources
creating build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/_multiarray_tests.c
building extension "numpy.core._multiarray_umath" sources
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h' to sources.
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h' to sources.
executing numpy/core/code_generators/generate_numpy_api.py
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h' to sources.
executing numpy/core/code_generators/generate_ufunc_api.py
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__ufunc_api.h' to sources.
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arraytypes.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/einsum.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/lowlevel_strided_loops.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_templ.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalartypes.c
creating build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/funcs.inc
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath' to include_dirs.
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/simd.inc
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.h
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.h
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/scalarmath.c
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath' to include_dirs.
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/common/templ_common.h
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/common' to include_dirs.
numpy.core - nothing done with h_files = ['build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/funcs.inc', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/simd.inc', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_internal.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/common/templ_common.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__ufunc_api.h']
building extension "numpy.core._umath_tests" sources
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_umath_tests.c
building extension "numpy.core._rational_tests" sources
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_rational_tests.c
building extension "numpy.core._struct_ufunc_tests" sources
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_struct_ufunc_tests.c
building extension "numpy.core._operand_flag_tests" sources
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_operand_flag_tests.c
building extension "numpy.fft.fftpack_lite" sources
building extension "numpy.linalg.lapack_lite" sources
creating build/src.macosx-10.15-x86_64-3.9/numpy/linalg
adding 'numpy/linalg/lapack_lite/python_xerbla.c' to sources.
building extension "numpy.linalg._umath_linalg" sources
adding 'numpy/linalg/lapack_lite/python_xerbla.c' to sources.
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/linalg/umath_linalg.c
building extension "numpy.random.mtrand" sources
creating build/src.macosx-10.15-x86_64-3.9/numpy/random
building data_files sources
build_src: building npy-pkg config files
running build_py
creating build/lib.macosx-10.15-x86_64-3.9
creating build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/conftest.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/version.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/_globals.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/dual.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/_distributor_init.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/ctypeslib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/matlib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/_pytesttester.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying build/src.macosx-10.15-x86_64-3.9/numpy/__config__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
creating build/lib.macosx-10.15-x86_64-3.9/numpy/compat
copying numpy/compat/py3k.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat
copying numpy/compat/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat
copying numpy/compat/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat
copying numpy/compat/_inspect.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat
creating build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/umath.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/fromnumeric.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_dtype.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_add_newdocs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_methods.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_internal.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_string_helpers.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/multiarray.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/records.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/setup_common.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_aliased_types.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/memmap.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/overrides.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/getlimits.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_dtype_ctypes.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/defchararray.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/shape_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/machar.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/numeric.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/function_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/einsumfunc.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/umath_tests.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/numerictypes.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_type_aliases.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/cversions.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/arrayprint.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/code_generators/generate_numpy_api.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
creating build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/unixccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/numpy_distribution.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/conv_template.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/cpuinfo.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/ccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/msvc9compiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/npy_pkg_config.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/compat.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/misc_util.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/log.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/line_endings.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/lib2def.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/pathccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/system_info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/core.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/__version__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/exec_command.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/from_template.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/mingw32ccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/extension.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/msvccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/intelccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying build/src.macosx-10.15-x86_64-3.9/numpy/distutils/__config__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
creating build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/config_compiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build_ext.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/config.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/install_headers.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build_py.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build_src.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/sdist.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build_scripts.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/bdist_rpm.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/install_clib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build_clib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/autodist.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/egg_info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/install.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/develop.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/install_data.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
creating build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/gnu.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/compaq.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/intel.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/none.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/nag.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/pg.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/ibm.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/sun.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/lahey.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/g95.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/mips.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/hpux.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/environment.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/pathf95.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/absoft.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/vast.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
creating build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/misc.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/internals.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/creation.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/constants.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/ufuncs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/broadcasting.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/basics.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/subclassing.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/indexing.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/byteswapping.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/structured_arrays.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/glossary.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
creating build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/cfuncs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/common_rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/crackfortran.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/cb_rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/f2py2e.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/func2subr.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/__version__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/diagnose.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/capi_maps.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/f90mod_rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/f2py_testing.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/use_rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/auxfuncs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/__main__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
creating build/lib.macosx-10.15-x86_64-3.9/numpy/fft
copying numpy/fft/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft
copying numpy/fft/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft
copying numpy/fft/helper.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft
copying numpy/fft/fftpack.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft
copying numpy/fft/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft
creating build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/_iotools.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/mixins.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/nanfunctions.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/recfunctions.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/histograms.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/scimath.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/_version.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/user_array.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/format.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/twodim_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/financial.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/index_tricks.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/npyio.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/shape_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/stride_tricks.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/utils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/arrayterator.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/function_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/arraysetops.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/arraypad.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/type_check.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/polynomial.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/_datasource.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/ufunclike.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
creating build/lib.macosx-10.15-x86_64-3.9/numpy/linalg
copying numpy/linalg/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg
copying numpy/linalg/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg
copying numpy/linalg/linalg.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg
copying numpy/linalg/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg
creating build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/extras.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/version.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/testutils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/core.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/bench.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/timer_comparison.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/mrecords.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
creating build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib
copying numpy/matrixlib/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib
copying numpy/matrixlib/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib
copying numpy/matrixlib/defmatrix.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib
creating build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/laguerre.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/_polybase.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/polyutils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/hermite_e.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/chebyshev.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/polynomial.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/legendre.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/hermite.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
creating build/lib.macosx-10.15-x86_64-3.9/numpy/random
copying numpy/random/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/random
copying numpy/random/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/random
copying numpy/random/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/random
creating build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/nosetester.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/noseclasses.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/utils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/print_coercion_tables.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/decorators.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
creating build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/nosetester.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/noseclasses.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/utils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/parameterized.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/decorators.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
running build_clib
customize UnixCCompiler
customize UnixCCompiler using build_clib
building 'npymath' library
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9
creating build/temp.macosx-10.15-x86_64-3.9/numpy
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/npymath
creating build/temp.macosx-10.15-x86_64-3.9/build
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath
compile options: '-Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: numpy/core/src/npymath/npy_math.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_complex.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/ieee754.c
clang: numpy/core/src/npymath/halffloat.c
numpy/core/src/npymath/npy_math_complex.c.src:48:33: warning: unused variable 'tiny' [-Wunused-const-variable]
static const volatile npy_float tiny = 3.9443045e-31f;
^
numpy/core/src/npymath/npy_math_complex.c.src:67:25: warning: unused variable 'c_halff' [-Wunused-const-variable]
static const npy_cfloat c_halff = {0.5F, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:25: warning: unused variable 'c_if' [-Wunused-const-variable]
static const npy_cfloat c_if = {0.0, 1.0F};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:25: warning: unused variable 'c_ihalff' [-Wunused-const-variable]
static const npy_cfloat c_ihalff = {0.0, 0.5F};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'caddf' [-Wunused-function]
caddf(npy_cfloat a, npy_cfloat b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csubf' [-Wunused-function]
csubf(npy_cfloat a, npy_cfloat b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cnegf' [-Wunused-function]
cnegf(npy_cfloat a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmulif' [-Wunused-function]
cmulif(npy_cfloat a)
^
numpy/core/src/npymath/npy_math_complex.c.src:67:26: warning: unused variable 'c_half' [-Wunused-const-variable]
static const npy_cdouble c_half = {0.5, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:26: warning: unused variable 'c_i' [-Wunused-const-variable]
static const npy_cdouble c_i = {0.0, 1.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:26: warning: unused variable 'c_ihalf' [-Wunused-const-variable]
static const npy_cdouble c_ihalf = {0.0, 0.5};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'cadd' [-Wunused-function]
cadd(npy_cdouble a, npy_cdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csub' [-Wunused-function]
csub(npy_cdouble a, npy_cdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cneg' [-Wunused-function]
cneg(npy_cdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmuli' [-Wunused-function]
cmuli(npy_cdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:67:30: warning: unused variable 'c_halfl' [-Wunused-const-variable]
static const npy_clongdouble c_halfl = {0.5L, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:30: warning: unused variable 'c_il' [-Wunused-const-variable]
static const npy_clongdouble c_il = {0.0, 1.0L};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:30: warning: unused variable 'c_ihalfl' [-Wunused-const-variable]
static const npy_clongdouble c_ihalfl = {0.0, 0.5L};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'caddl' [-Wunused-function]
caddl(npy_clongdouble a, npy_clongdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csubl' [-Wunused-function]
csubl(npy_clongdouble a, npy_clongdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cnegl' [-Wunused-function]
cnegl(npy_clongdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmulil' [-Wunused-function]
cmulil(npy_clongdouble a)
^
22 warnings generated.
ar: adding 4 object files to build/temp.macosx-10.15-x86_64-3.9/libnpymath.a
ranlib:@ build/temp.macosx-10.15-x86_64-3.9/libnpymath.a
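(The .c.src files named in the warnings above are numpy templates: conv_template.py -- the conv_template:> lines earlier in build_src -- expands each /**begin repeat**/ block once per listed type, which is why every unused helper appears as a float/double/long-double triplet like caddf/cadd/caddl. A toy template follows, not numpy's actual source:)

/* Toy conv_template input (.c.src). Each #name = a, b, c# line defines a
 * substitution for @name@; the block is emitted once per column, so one
 * warning in the template becomes three in the generated C file. */
/**begin repeat
 * #type = npy_cfloat, npy_cdouble, npy_clongdouble#
 * #c = f, , l#
 */
static @type@
cadd@c@(@type@ a, @type@ b)
{
    return npy_cpack@c@(npy_creal@c@(a) + npy_creal@c@(b),
                        npy_cimag@c@(a) + npy_cimag@c@(b));
}
/**end repeat**/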
building 'npysort' library
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort
compile options: '-Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/openssl@1.1/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/quicksort.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/mergesort.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/heapsort.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/selection.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/binsearch.c
numpy/core/src/npysort/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]
npy_intp k;
^~~~~~~~~~~
numpy/core/src/npysort/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead
else if (0 && kth == num - 1) {
^
/* DISABLES CODE */ ( )
[... the same selection.c.src:328:9 warning repeated 21 more times ...]
22 warnings generated.
ar: adding 5 object files to build/temp.macosx-10.15-x86_64-3.9/libnpysort.a
ranlib:@ build/temp.macosx-10.15-x86_64-3.9/libnpysort.a
running build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
building 'numpy.core._dummy' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: numpy/core/src/dummymodule.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/dummymodule.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_dummy.cpython-39-darwin.so
building 'numpy.core._multiarray_tests' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/_multiarray_tests.c
clang: numpy/core/src/common/mem_overlap.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/_multiarray_tests.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/mem_overlap.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -lnpymath -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_multiarray_tests.cpython-39-darwin.so
building 'numpy.core._multiarray_umath' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath
creating build/temp.macosx-10.15-x86_64-3.9/private
creating build/temp.macosx-10.15-x86_64-3.9/private/var
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
extra options: '-msse3 -I/System/Library/Frameworks/vecLib.framework/Headers'
clang: numpy/core/src/multiarray/alloc.c
clang: numpy/core/src/multiarray/calculation.c
clang: numpy/core/src/multiarray/array_assign_scalar.c
clang: numpy/core/src/multiarray/convert.c
clang: numpy/core/src/multiarray/ctors.c
clang: numpy/core/src/multiarray/datetime_busday.c
clang: numpy/core/src/multiarray/dragon4.c
clang: numpy/core/src/multiarray/flagsobject.c
numpy/core/src/multiarray/ctors.c:2261:36: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
if (!(PyUString_Check(name) && PyUString_GET_SIZE(name) == 0)) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/ctors.c:2261:36: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
if (!(PyUString_Check(name) && PyUString_GET_SIZE(name) == 0)) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/ctors.c:2261:36: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
if (!(PyUString_Check(name) && PyUString_GET_SIZE(name) == 0)) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
clang: numpy/core/src/multiarray/arrayobject.c
clang: numpy/core/src/multiarray/array_assign_array.c
clang: numpy/core/src/multiarray/convert_datatype.c
clang: numpy/core/src/multiarray/getset.c
clang: numpy/core/src/multiarray/datetime_busdaycal.c
clang: numpy/core/src/multiarray/buffer.c
clang: numpy/core/src/multiarray/compiled_base.c
clang: numpy/core/src/multiarray/hashdescr.c
clang: numpy/core/src/multiarray/descriptor.c
numpy/core/src/multiarray/descriptor.c:453:13: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
if (PyUString_GET_SIZE(name) == 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/descriptor.c:453:13: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
if (PyUString_GET_SIZE(name) == 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/descriptor.c:453:13: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
if (PyUString_GET_SIZE(name) == 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/descriptor.c:460:48: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
else if (PyUString_Check(title) && PyUString_GET_SIZE(title) > 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/descriptor.c:460:48: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
else if (PyUString_Check(title) && PyUString_GET_SIZE(title) > 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/descriptor.c:460:48: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
else if (PyUString_Check(title) && PyUString_GET_SIZE(title) > 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
clang: numpy/core/src/multiarray/conversion_utils.c
clang: numpy/core/src/multiarray/item_selection.c
clang: numpy/core/src/multiarray/dtype_transfer.c
clang: numpy/core/src/multiarray/mapping.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arraytypes.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_templ.c
3 warnings generated.
clang: numpy/core/src/multiarray/datetime.c
numpy/core/src/multiarray/arraytypes.c.src:477:11: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
ptr = PyUnicode_AS_UNICODE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/arraytypes.c.src:482:15: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
datalen = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/arraytypes.c.src:482:15: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
datalen = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/arraytypes.c.src:482:15: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
datalen = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
clang: numpy/core/src/multiarray/common.c
numpy/core/src/multiarray/common.c:187:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:187:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:187:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:239:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:239:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:239:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:282:24: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
int itemsize = PyUnicode_GET_DATA_SIZE(obj);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:282:24: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
int itemsize = PyUnicode_GET_DATA_SIZE(obj);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:282:24: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
int itemsize = PyUnicode_GET_DATA_SIZE(obj);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
6 warnings generated.
clang: numpy/core/src/multiarray/nditer_pywrap.c
9 warnings generated.
clang: numpy/core/src/multiarray/sequence.c
clang: numpy/core/src/multiarray/shape.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/einsum.c
clang: numpy/core/src/multiarray/methods.c
clang: numpy/core/src/multiarray/iterators.c
clang: numpy/core/src/multiarray/datetime_strings.c
clang: numpy/core/src/multiarray/number.c
clang: numpy/core/src/multiarray/scalarapi.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalartypes.c
numpy/core/src/multiarray/scalarapi.c:74:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
return (void *)PyUnicode_AS_DATA(scalar);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:283:21: note: expanded from macro 'PyUnicode_AS_DATA'
((const char *)(PyUnicode_AS_UNICODE(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalarapi.c:135:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
return (void *)PyUnicode_AS_DATA(scalar);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:283:21: note: expanded from macro 'PyUnicode_AS_DATA'
((const char *)(PyUnicode_AS_UNICODE(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalarapi.c:568:29: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
descr->elsize = PyUnicode_GET_DATA_SIZE(sc);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalarapi.c:568:29: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
descr->elsize = PyUnicode_GET_DATA_SIZE(sc);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalarapi.c:568:29: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
descr->elsize = PyUnicode_GET_DATA_SIZE(sc);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:475:17: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
ip = dptr = PyUnicode_AS_UNICODE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
len = PyUnicode_GET_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
len = PyUnicode_GET_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
len = PyUnicode_GET_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:481:11: warning: 'PyUnicode_FromUnicode' is deprecated [-Wdeprecated-declarations]
new = PyUnicode_FromUnicode(ip, len);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:551:1: note: 'PyUnicode_FromUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(PyObject*) PyUnicode_FromUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
[... the same five scalartypes.c.src warnings (475:17, 476:11 x3, 481:11) repeated once more ...]
numpy/core/src/multiarray/scalartypes.c.src:1849:18: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
buffer = PyUnicode_AS_DATA(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:283:21: note: expanded from macro 'PyUnicode_AS_DATA'
((const char *)(PyUnicode_AS_UNICODE(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:1850:18: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
buflen = PyUnicode_GET_DATA_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:1850:18: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
buflen = PyUnicode_GET_DATA_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:1850:18: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
buflen = PyUnicode_GET_DATA_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
5 warnings generated.
clang: numpy/core/src/multiarray/typeinfo.c
clang: numpy/core/src/multiarray/refcount.c
clang: numpy/core/src/multiarray/usertypes.c
clang: numpy/core/src/multiarray/multiarraymodule.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/lowlevel_strided_loops.c
clang: numpy/core/src/multiarray/vdot.c
clang: numpy/core/src/umath/umathmodule.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.c
clang: numpy/core/src/umath/reduction.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.c
clang: numpy/core/src/multiarray/nditer_api.c
14 warnings generated.
clang: numpy/core/src/multiarray/strfuncs.c
numpy/core/src/umath/loops.c.src:655:18: warning: 'PyEval_CallObjectWithKeywords' is deprecated [-Wdeprecated-declarations]
result = PyEval_CallObject(tocall, arglist);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:24:5: note: expanded from macro 'PyEval_CallObject'
PyEval_CallObjectWithKeywords(callable, arg, (PyObject *)NULL)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:17:1: note: 'PyEval_CallObjectWithKeywords' has been explicitly marked deprecated here
Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyEval_CallObjectWithKeywords(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/strfuncs.c:178:13: warning: 'PyEval_CallObjectWithKeywords' is deprecated [-Wdeprecated-declarations]
s = PyEval_CallObject(PyArray_ReprFunction, arglist);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:24:5: note: expanded from macro 'PyEval_CallObject'
PyEval_CallObjectWithKeywords(callable, arg, (PyObject *)NULL)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:17:1: note: 'PyEval_CallObjectWithKeywords' has been explicitly marked deprecated here
Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyEval_CallObjectWithKeywords(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/strfuncs.c:195:13: warning: 'PyEval_CallObjectWithKeywords' is deprecated [-Wdeprecated-declarations]
s = PyEval_CallObject(PyArray_StrFunction, arglist);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:24:5: note: expanded from macro 'PyEval_CallObject'
PyEval_CallObjectWithKeywords(callable, arg, (PyObject *)NULL)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:17:1: note: 'PyEval_CallObjectWithKeywords' has been explicitly marked deprecated here
Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyEval_CallObjectWithKeywords(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
2 warnings generated.
clang: numpy/core/src/multiarray/temp_elide.c
clang: numpy/core/src/umath/cpuid.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/scalarmath.c
clang: numpy/core/src/umath/ufunc_object.c
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'byte_long' [-Wunused-function]
byte_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'ubyte_long' [-Wunused-function]
ubyte_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'short_long' [-Wunused-function]
short_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'ushort_long' [-Wunused-function]
ushort_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'int_long' [-Wunused-function]
int_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'uint_long' [-Wunused-function]
uint_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'long_long' [-Wunused-function]
long_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'ulong_long' [-Wunused-function]
ulong_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'longlong_long' [-Wunused-function]
longlong_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'ulonglong_long' [-Wunused-function]
ulonglong_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'half_long' [-Wunused-function]
half_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'float_long' [-Wunused-function]
float_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'double_long' [-Wunused-function]
double_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'longdouble_long' [-Wunused-function]
longdouble_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'cfloat_long' [-Wunused-function]
cfloat_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'cdouble_long' [-Wunused-function]
cdouble_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'clongdouble_long' [-Wunused-function]
clongdouble_long(PyObject *obj)
^
clang: numpy/core/src/multiarray/nditer_constr.c
numpy/core/src/umath/ufunc_object.c:657:19: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
for (i = 0; i < len; i++) {
~ ^ ~~~
clang: numpy/core/src/umath/override.c
clang: numpy/core/src/npymath/npy_math.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/ieee754.c
numpy/core/src/umath/loops.c.src:2527:22: warning: code will never be executed [-Wunreachable-code]
npy_intp n = dimensions[0];
^~~~~~~~~~
numpy/core/src/umath/loops.c.src:2526:29: note: silence by adding parentheses to mark code as explicitly dead
if (IS_BINARY_REDUCE && 0) {
^
/* DISABLES CODE */ ( )
numpy/core/src/umath/loops.c.src:2527:22: warning: code will never be executed [-Wunreachable-code]
npy_intp n = dimensions[0];
^~~~~~~~~~
numpy/core/src/umath/loops.c.src:2526:29: note: silence by adding parentheses to mark code as explicitly dead
if (IS_BINARY_REDUCE && 0) {
^
/* DISABLES CODE */ ( )
numpy/core/src/umath/loops.c.src:2527:22: warning: code will never be executed [-Wunreachable-code]
npy_intp n = dimensions[0];
^~~~~~~~~~
numpy/core/src/umath/loops.c.src:2526:29: note: silence by adding parentheses to mark code as explicitly dead
if (IS_BINARY_REDUCE && 0) {
^
/* DISABLES CODE */ ( )
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_complex.c
numpy/core/src/npymath/npy_math_complex.c.src:48:33: warning: unused variable 'tiny' [-Wunused-const-variable]
static const volatile npy_float tiny = 3.9443045e-31f;
^
numpy/core/src/npymath/npy_math_complex.c.src:67:25: warning: unused variable 'c_halff' [-Wunused-const-variable]
static const npy_cfloat c_halff = {0.5F, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:25: warning: unused variable 'c_if' [-Wunused-const-variable]
static const npy_cfloat c_if = {0.0, 1.0F};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:25: warning: unused variable 'c_ihalff' [-Wunused-const-variable]
static const npy_cfloat c_ihalff = {0.0, 0.5F};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'caddf' [-Wunused-function]
caddf(npy_cfloat a, npy_cfloat b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csubf' [-Wunused-function]
csubf(npy_cfloat a, npy_cfloat b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cnegf' [-Wunused-function]
cnegf(npy_cfloat a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmulif' [-Wunused-function]
cmulif(npy_cfloat a)
^
numpy/core/src/npymath/npy_math_complex.c.src:67:26: warning: unused variable 'c_half' [-Wunused-const-variable]
static const npy_cdouble c_half = {0.5, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:26: warning: unused variable 'c_i' [-Wunused-const-variable]
static const npy_cdouble c_i = {0.0, 1.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:26: warning: unused variable 'c_ihalf' [-Wunused-const-variable]
static const npy_cdouble c_ihalf = {0.0, 0.5};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'cadd' [-Wunused-function]
cadd(npy_cdouble a, npy_cdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csub' [-Wunused-function]
csub(npy_cdouble a, npy_cdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cneg' [-Wunused-function]
cneg(npy_cdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmuli' [-Wunused-function]
cmuli(npy_cdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:67:30: warning: unused variable 'c_halfl' [-Wunused-const-variable]
static const npy_clongdouble c_halfl = {0.5L, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:30: warning: unused variable 'c_il' [-Wunused-const-variable]
static const npy_clongdouble c_il = {0.0, 1.0L};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:30: warning: unused variable 'c_ihalfl' [-Wunused-const-variable]
static const npy_clongdouble c_ihalfl = {0.0, 0.5L};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'caddl' [-Wunused-function]
caddl(npy_clongdouble a, npy_clongdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csubl' [-Wunused-function]
csubl(npy_clongdouble a, npy_clongdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cnegl' [-Wunused-function]
cnegl(npy_clongdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmulil' [-Wunused-function]
cmulil(npy_clongdouble a)
^
22 warnings generated.
clang: numpy/core/src/common/mem_overlap.c
clang: numpy/core/src/npymath/halffloat.c
clang: numpy/core/src/common/array_assign.c
clang: numpy/core/src/common/ufunc_override.c
clang: numpy/core/src/common/npy_longdouble.c
clang: numpy/core/src/common/numpyos.c
clang: numpy/core/src/common/ucsnarrow.c
1 warning generated.
clang: numpy/core/src/umath/extobj.c
numpy/core/src/common/ucsnarrow.c:139:34: warning: 'PyUnicode_FromUnicode' is deprecated [-Wdeprecated-declarations]
ret = (PyUnicodeObject *)PyUnicode_FromUnicode((Py_UNICODE*)buf,
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:551:1: note: 'PyUnicode_FromUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(PyObject*) PyUnicode_FromUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
1 warning generated.
clang: numpy/core/src/common/python_xerbla.c
clang: numpy/core/src/common/cblasfuncs.c
clang: /private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src/apple_sgemv_fix.c
In file included from /private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src/apple_sgemv_fix.c:26:
In file included from numpy/core/include/numpy/arrayobject.h:4:
In file included from numpy/core/include/numpy/ndarrayobject.h:21:
build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h:1463:1: warning: unused function '_import_array' [-Wunused-function]
_import_array(void)
^
1 warning generated.
17 warnings generated.
clang: numpy/core/src/umath/ufunc_type_resolution.c
4 warnings generated.
4 warnings generated.
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/alloc.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arrayobject.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arraytypes.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/array_assign_scalar.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/array_assign_array.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/buffer.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/calculation.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/compiled_base.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/common.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/convert.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/convert_datatype.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/conversion_utils.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/ctors.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime_strings.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime_busday.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime_busdaycal.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/descriptor.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/dragon4.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/dtype_transfer.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/einsum.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/flagsobject.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/getset.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/hashdescr.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/item_selection.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/iterators.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/lowlevel_strided_loops.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/mapping.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/methods.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/multiarraymodule.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_templ.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_api.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_constr.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_pywrap.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/number.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/refcount.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/sequence.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/shape.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalarapi.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalartypes.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/strfuncs.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/temp_elide.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/typeinfo.o 
build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/usertypes.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/vdot.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/umathmodule.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/reduction.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/ufunc_object.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/extobj.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/cpuid.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/scalarmath.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/ufunc_type_resolution.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/override.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/ieee754.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_complex.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/halffloat.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/array_assign.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/mem_overlap.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/npy_longdouble.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/ucsnarrow.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/ufunc_override.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/numpyos.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/cblasfuncs.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/python_xerbla.o build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src/apple_sgemv_fix.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -lnpymath -lnpysort -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_multiarray_umath.cpython-39-darwin.so -Wl,-framework -Wl,Accelerate
building 'numpy.core._umath_tests' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_umath_tests.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_umath_tests.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_umath_tests.cpython-39-darwin.so
building 'numpy.core._rational_tests' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_rational_tests.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_rational_tests.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_rational_tests.cpython-39-darwin.so
building 'numpy.core._struct_ufunc_tests' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_struct_ufunc_tests.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_struct_ufunc_tests.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_struct_ufunc_tests.cpython-39-darwin.so
building 'numpy.core._operand_flag_tests' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_operand_flag_tests.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_operand_flag_tests.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_operand_flag_tests.cpython-39-darwin.so
building 'numpy.fft.fftpack_lite' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/numpy/fft
compile options: '-Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: numpy/fft/fftpack_litemodule.c
clang: numpy/fft/fftpack.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/fft/fftpack_litemodule.o build/temp.macosx-10.15-x86_64-3.9/numpy/fft/fftpack.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/fft/fftpack_lite.cpython-39-darwin.so
building 'numpy.linalg.lapack_lite' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/numpy/linalg
creating build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite
compile options: '-DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
extra options: '-msse3 -I/System/Library/Frameworks/vecLib.framework/Headers'
clang: numpy/linalg/lapack_litemodule.c
clang: numpy/linalg/lapack_lite/python_xerbla.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_litemodule.o build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite/python_xerbla.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite.cpython-39-darwin.so -Wl,-framework -Wl,Accelerate
building 'numpy.linalg._umath_linalg' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/linalg
compile options: '-DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
extra options: '-msse3 -I/System/Library/Frameworks/vecLib.framework/Headers'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/linalg/umath_linalg.c
numpy/linalg/umath_linalg.c.src:735:32: warning: unknown warning group '-Wmaybe-uninitialized', ignored [-Wunknown-warning-option]
#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
^
numpy/linalg/umath_linalg.c.src:541:1: warning: unused function 'dump_ufunc_object' [-Wunused-function]
dump_ufunc_object(PyUFuncObject* ufunc)
^
numpy/linalg/umath_linalg.c.src:566:1: warning: unused function 'dump_linearize_data' [-Wunused-function]
dump_linearize_data(const char* name, const LINEARIZE_DATA_t* params)
^
numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_FLOAT_matrix' [-Wunused-function]
dump_FLOAT_matrix(const char* name,
^
numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_DOUBLE_matrix' [-Wunused-function]
dump_DOUBLE_matrix(const char* name,
^
numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_CFLOAT_matrix' [-Wunused-function]
dump_CFLOAT_matrix(const char* name,
^
numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_CDOUBLE_matrix' [-Wunused-function]
dump_CDOUBLE_matrix(const char* name,
^
numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_FLOAT_matrix' [-Wunused-function]
zero_FLOAT_matrix(void *dst_in, const LINEARIZE_DATA_t* data)
^
numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_DOUBLE_matrix' [-Wunused-function]
zero_DOUBLE_matrix(void *dst_in, const LINEARIZE_DATA_t* data)
^
numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_CFLOAT_matrix' [-Wunused-function]
zero_CFLOAT_matrix(void *dst_in, const LINEARIZE_DATA_t* data)
^
numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_CDOUBLE_matrix' [-Wunused-function]
zero_CDOUBLE_matrix(void *dst_in, const LINEARIZE_DATA_t* data)
^
numpy/linalg/umath_linalg.c.src:1862:1: warning: unused function 'dump_geev_params' [-Wunused-function]
dump_geev_params(const char *name, GEEV_PARAMS_t* params)
^
numpy/linalg/umath_linalg.c.src:2132:1: warning: unused function 'init_cgeev' [-Wunused-function]
init_cgeev(GEEV_PARAMS_t* params,
^
numpy/linalg/umath_linalg.c.src:2213:1: warning: unused function 'process_cgeev_results' [-Wunused-function]
process_cgeev_results(GEEV_PARAMS_t *NPY_UNUSED(params))
^
numpy/linalg/umath_linalg.c.src:2376:1: warning: unused function 'dump_gesdd_params' [-Wunused-function]
dump_gesdd_params(const char *name,
^
numpy/linalg/umath_linalg.c.src:2864:1: warning: unused function 'dump_gelsd_params' [-Wunused-function]
dump_gelsd_params(const char *name,
^
16 warnings generated.
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/linalg/umath_linalg.o build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite/python_xerbla.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -lnpymath -o build/lib.macosx-10.15-x86_64-3.9/numpy/linalg/_umath_linalg.cpython-39-darwin.so -Wl,-framework -Wl,Accelerate
building 'numpy.random.mtrand' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/numpy/random
creating build/temp.macosx-10.15-x86_64-3.9/numpy/random/mtrand
compile options: '-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: numpy/random/mtrand/mtrand.c
clang: numpy/random/mtrand/initarray.c
clang: numpy/random/mtrand/randomkit.c
clang: numpy/random/mtrand/distributions.c
numpy/random/mtrand/mtrand.c:40400:34: error: no member named 'tp_print' in 'struct _typeobject'
__pyx_type_6mtrand_RandomState.tp_print = 0;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
numpy/random/mtrand/mtrand.c:42673:22: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42673:22: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42673:22: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42673:52: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42673:52: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42673:52: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:26: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:26: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:26: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:59: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:59: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:59: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
12 warnings and 1 error generated.
error: Command "clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c numpy/random/mtrand/mtrand.c -o build/temp.macosx-10.15-x86_64-3.9/numpy/random/mtrand/mtrand.o -MMD -MF build/temp.macosx-10.15-x86_64-3.9/numpy/random/mtrand/mtrand.o.d" failed with exit status 1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1696/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1696/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1695 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1695/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1695/comments | https://api.github.com/repos/huggingface/datasets/issues/1695/events | https://github.com/huggingface/datasets/pull/1695 | 780,971,987 | MDExOlB1bGxSZXF1ZXN0NTUwNzc1OTU4 | 1,695 | fix ner_tag bugs in thainer | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,985,553,000 | 1,610,030,625,000 | 1,610,030,608,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1695",
"html_url": "https://github.com/huggingface/datasets/pull/1695",
"diff_url": "https://github.com/huggingface/datasets/pull/1695.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1695.patch",
"merged_at": 1610030608000
Fix a bug that results in `ner_tag` always being equal to 'O'. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1695/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1694 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1694/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1694/comments | https://api.github.com/repos/huggingface/datasets/issues/1694/events | https://github.com/huggingface/datasets/pull/1694 | 780,429,080 | MDExOlB1bGxSZXF1ZXN0NTUwMzI0Mjcx | 1,694 | Add OSCAR | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,928,468,000 | 1,611,565,833,000 | 1,611,565,832,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1694",
"html_url": "https://github.com/huggingface/datasets/pull/1694",
"diff_url": "https://github.com/huggingface/datasets/pull/1694.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1694.patch",
"merged_at": 1611565832000
} | Continuation of #348
The files have been moved to S3 and only the unshuffled version is available.
Both original and deduplicated versions of each language are available.
Example of usage:
```python
from datasets import load_dataset
oscar_dedup_en = load_dataset("oscar", "unshuffled_deduplicated_en", split="train")
oscar_orig_fr = load_dataset("oscar", "unshuffled_original_fr", split="train")
```
cc @pjox @jonatasgrosman
-------------
To make the metadata generation work in parallel I did a few changes in the `datasets-cli test` command to add the `num_proc` and `proc_rank` arguments. This way you can run multiple processes for the metadata computation.
```
datasets-cli test ./datasets/oscar --save_infos --all_configs --num_proc 4 --proc_rank 0 --clear_cache --cache_dir tmp0
```
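For context (not part of the PR itself), a hedged sketch of how the four ranks might be launched together, reusing the flags from the command above; the per-rank `tmp$RANK` cache dir is an assumption:
```
for RANK in 0 1 2 3; do
  datasets-cli test ./datasets/oscar --save_infos --all_configs \
    --num_proc 4 --proc_rank $RANK --clear_cache --cache_dir tmp$RANK &
done
wait
```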
-------------
ToDo: add the dummy_data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1694/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1694/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1693/comments | https://api.github.com/repos/huggingface/datasets/issues/1693/events | https://github.com/huggingface/datasets/pull/1693 | 780,268,595 | MDExOlB1bGxSZXF1ZXN0NTUwMTc3MDEx | 1,693 | Fix reuters metadata parsing errors | {
"login": "jbragg",
"id": 2238344,
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbragg",
"html_url": "https://github.com/jbragg",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"repos_url": "https://api.github.com/users/jbragg/repos",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,921,563,000 | 1,610,063,627,000 | 1,610,028,082,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1693",
"html_url": "https://github.com/huggingface/datasets/pull/1693",
"diff_url": "https://github.com/huggingface/datasets/pull/1693.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1693.patch",
"merged_at": 1610028082000
} | The parser was missing the last entry in each metadata category. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1693/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1691/comments | https://api.github.com/repos/huggingface/datasets/issues/1691/events | https://github.com/huggingface/datasets/pull/1691 | 779,882,271 | MDExOlB1bGxSZXF1ZXN0NTQ5ODE3NTM0 | 1,691 | Updated HuggingFace Datasets README (fix typos) | {
"login": "8bitmp3",
"id": 19637339,
"node_id": "MDQ6VXNlcjE5NjM3MzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/19637339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/8bitmp3",
"html_url": "https://github.com/8bitmp3",
"followers_url": "https://api.github.com/users/8bitmp3/followers",
"following_url": "https://api.github.com/users/8bitmp3/following{/other_user}",
"gists_url": "https://api.github.com/users/8bitmp3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/8bitmp3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/8bitmp3/subscriptions",
"organizations_url": "https://api.github.com/users/8bitmp3/orgs",
"repos_url": "https://api.github.com/users/8bitmp3/repos",
"events_url": "https://api.github.com/users/8bitmp3/events{/privacy}",
"received_events_url": "https://api.github.com/users/8bitmp3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,899,278,000 | 1,610,839,847,000 | 1,610,013,992,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1691",
"html_url": "https://github.com/huggingface/datasets/pull/1691",
"diff_url": "https://github.com/huggingface/datasets/pull/1691.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1691.patch",
"merged_at": 1610013992000
} | Awesome work on 🤗 Datasets. I found a couple of small typos in the README. Hope this helps.

| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1691/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1691/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1690 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1690/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1690/comments | https://api.github.com/repos/huggingface/datasets/issues/1690/events | https://github.com/huggingface/datasets/pull/1690 | 779,441,631 | MDExOlB1bGxSZXF1ZXN0NTQ5NDEwOTgw | 1,690 | Fast start up | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,873,673,000 | 1,609,942,859,000 | 1,609,942,858,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1690",
"html_url": "https://github.com/huggingface/datasets/pull/1690",
"diff_url": "https://github.com/huggingface/datasets/pull/1690.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1690.patch",
"merged_at": 1609942858000
} | Currently if optional dependencies such as tensorflow, torch, apache_beam, faiss and elasticsearch are installed, then it takes a long time to do `import datasets` since it imports all of these heavy dependencies.
To make a fast start up for `datasets` I changed that so that they are not imported when `datasets` is being imported. On my side it changed the import time of `datasets` from 5sec to 0.5sec, which is enjoyable.
To be able to check if optional dependencies are available without importing them, I'm using `importlib_metadata`, which is part of the standard library in Python >= 3.8 and is available as a backport for older versions. The difference with `importlib` is that it also makes it possible to get the versions of the libraries without importing them.
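A minimal sketch of the idea (not the exact code from this PR; the helper name is hypothetical), showing how availability and version can be checked without importing the heavy library itself:
```python
import importlib.util

try:
    import importlib.metadata as importlib_metadata  # stdlib in Python >= 3.8
except ImportError:
    import importlib_metadata  # backport for older Python versions

def is_available(package: str) -> bool:
    # find_spec only locates the module on disk, it does not import it
    return importlib.util.find_spec(package) is not None

TORCH_AVAILABLE = is_available("torch")
if TORCH_AVAILABLE:
    # reads the installed package metadata, again without importing torch
    TORCH_VERSION = importlib_metadata.version("torch")
```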
I added this dependency in `setup.py`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1690/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1690/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1689/comments | https://api.github.com/repos/huggingface/datasets/issues/1689/events | https://github.com/huggingface/datasets/pull/1689 | 779,107,313 | MDExOlB1bGxSZXF1ZXN0NTQ5MTEwMDgw | 1,689 | Fix ade_corpus_v2 config names | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,857,208,000 | 1,609,858,509,000 | 1,609,858,508,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1689",
"html_url": "https://github.com/huggingface/datasets/pull/1689",
"diff_url": "https://github.com/huggingface/datasets/pull/1689.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1689.patch",
"merged_at": 1609858508000
} | There are currently some typos in the config names of the `ade_corpus_v2` dataset; I fixed them:
- Ade_corpos_v2_classificaion -> Ade_corpus_v2_classification
- Ade_corpos_v2_drug_ade_relation -> Ade_corpus_v2_drug_ade_relation
- Ade_corpos_v2_drug_dosage_relation -> Ade_corpus_v2_drug_dosage_relation | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1689/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1688/comments | https://api.github.com/repos/huggingface/datasets/issues/1688/events | https://github.com/huggingface/datasets/pull/1688 | 779,029,685 | MDExOlB1bGxSZXF1ZXN0NTQ5MDM5ODg0 | 1,688 | Fix DaNE last example | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,853,377,000 | 1,609,855,215,000 | 1,609,855,213,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1688",
"html_url": "https://github.com/huggingface/datasets/pull/1688",
"diff_url": "https://github.com/huggingface/datasets/pull/1688.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1688.patch",
"merged_at": 1609855213000
} | The last example from the DaNE dataset is empty.
Fix #1686 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1688/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1687 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1687/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1687/comments | https://api.github.com/repos/huggingface/datasets/issues/1687/events | https://github.com/huggingface/datasets/issues/1687 | 779,004,894 | MDU6SXNzdWU3NzkwMDQ4OTQ= | 1,687 | Question: Shouldn't .info be a part of DatasetDict? | {
"login": "KennethEnevoldsen",
"id": 23721977,
"node_id": "MDQ6VXNlcjIzNzIxOTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KennethEnevoldsen",
"html_url": "https://github.com/KennethEnevoldsen",
"followers_url": "https://api.github.com/users/KennethEnevoldsen/followers",
"following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_user}",
"gists_url": "https://api.github.com/users/KennethEnevoldsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KennethEnevoldsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KennethEnevoldsen/subscriptions",
"organizations_url": "https://api.github.com/users/KennethEnevoldsen/orgs",
"repos_url": "https://api.github.com/users/KennethEnevoldsen/repos",
"events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/KennethEnevoldsen/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,852,121,000 | 1,610,014,686,000 | null | CONTRIBUTOR | null | null | null | Currently, only `Dataset` exposes `.info` and `.features`, but since many datasets contain standard splits (train, test), the underlying information is the same (or at least should be) across the splits.
For instance:
```
>>> ds = datasets.load_dataset("conll2002", "es")
>>> ds.info
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'DatasetDict' object has no attribute 'info'
```
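In the meantime, a workaround sketch is to read the metadata from one of the splits, since each `Dataset` in the dict carries its own (normally identical) copy:
```python
import datasets

ds = datasets.load_dataset("conll2002", "es")
info = ds["train"].info          # per-split metadata
features = ds["train"].features  # normally identical across the splits
```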
I could imagine that this wouldn't work for dataset dicts which hold entirely different datasets (multimodal datasets), but it seems odd that splits of the same dataset are treated the same way as what are essentially different datasets.
Intuitively it would also make sense that if a dataset is supplied via `load_dataset`, it would have a common `.info` which covers the entire dataset.
It is entirely possible that I am missing another perspective. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1687/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1686/comments | https://api.github.com/repos/huggingface/datasets/issues/1686/events | https://github.com/huggingface/datasets/issues/1686 | 778,921,684 | MDU6SXNzdWU3Nzg5MjE2ODQ= | 1,686 | Dataset Error: DaNE contains empty samples at the end | {
"login": "KennethEnevoldsen",
"id": 23721977,
"node_id": "MDQ6VXNlcjIzNzIxOTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KennethEnevoldsen",
"html_url": "https://github.com/KennethEnevoldsen",
"followers_url": "https://api.github.com/users/KennethEnevoldsen/followers",
"following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_user}",
"gists_url": "https://api.github.com/users/KennethEnevoldsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KennethEnevoldsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KennethEnevoldsen/subscriptions",
"organizations_url": "https://api.github.com/users/KennethEnevoldsen/orgs",
"repos_url": "https://api.github.com/users/KennethEnevoldsen/repos",
"events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/KennethEnevoldsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,847,666,000 | 1,609,855,269,000 | 1,609,855,213,000 | CONTRIBUTOR | null | null | null | The DaNE dataset contains empty samples at the end. These are naturally easy to remove using a filter, but they should probably not be there to begin with, as they can cause errors.
```python
>>> import datasets
[...]
>>> dataset = datasets.load_dataset("dane")
[...]
>>> dataset["test"][-1]
{'dep_ids': [], 'dep_labels': [], 'lemmas': [], 'morph_tags': [], 'ner_tags': [], 'pos_tags': [], 'sent_id': '', 'text': '', 'tok_ids': [], 'tokens': []}
>>> dataset["train"][-1]
{'dep_ids': [], 'dep_labels': [], 'lemmas': [], 'morph_tags': [], 'ner_tags': [], 'pos_tags': [], 'sent_id': '', 'text': '', 'tok_ids': [], 'tokens': []}
```
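For reference, a minimal sketch of such a filter (keying on the `tokens` column shown above) that drops the empty rows from each split until the loader is fixed:
```python
from datasets import load_dataset

dataset = load_dataset("dane")
dataset = {
    split: ds.filter(lambda example: len(example["tokens"]) > 0)
    for split, ds in dataset.items()
}
```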
Best,
Kenneth | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1686/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1685/comments | https://api.github.com/repos/huggingface/datasets/issues/1685/events | https://github.com/huggingface/datasets/pull/1685 | 778,914,431 | MDExOlB1bGxSZXF1ZXN0NTQ4OTM1MzY2 | 1,685 | Update README.md of covid-tweets-japanese | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,847,247,000 | 1,609,928,832,000 | 1,609,925,470,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1685",
"html_url": "https://github.com/huggingface/datasets/pull/1685",
"diff_url": "https://github.com/huggingface/datasets/pull/1685.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1685.patch",
"merged_at": 1609925470000
} | Update README.md of covid-tweets-japanese added by PR https://github.com/huggingface/datasets/pull/1367 and https://github.com/huggingface/datasets/pull/1402.
- Update "Data Splits" to be more precise that no information is provided for now.
- old: [More Information Needed]
- new: No information about data splits is provided for now.
- The automatic generation of links did not seem to work properly, so I added a space before and after the URL to make the links work correctly. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1685/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1684 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1684/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1684/comments | https://api.github.com/repos/huggingface/datasets/issues/1684/events | https://github.com/huggingface/datasets/pull/1684 | 778,356,196 | MDExOlB1bGxSZXF1ZXN0NTQ4NDU3NDY1 | 1,684 | Add CANER Corpus | {
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,793,351,000 | 1,611,565,760,000 | 1,611,565,760,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1684",
"html_url": "https://github.com/huggingface/datasets/pull/1684",
"diff_url": "https://github.com/huggingface/datasets/pull/1684.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1684.patch",
"merged_at": 1611565760000
} | What does this PR do?
Adds the following dataset:
https://github.com/RamziSalah/Classical-Arabic-Named-Entity-Recognition-Corpus
Who can review?
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1684/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1683 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1683/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1683/comments | https://api.github.com/repos/huggingface/datasets/issues/1683/events | https://github.com/huggingface/datasets/issues/1683 | 778,287,612 | MDU6SXNzdWU3NzgyODc2MTI= | 1,683 | `ArrowInvalid` occurs while running `Dataset.map()` function for DPRContext | {
"login": "abarbosa94",
"id": 6608232,
"node_id": "MDQ6VXNlcjY2MDgyMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abarbosa94",
"html_url": "https://github.com/abarbosa94",
"followers_url": "https://api.github.com/users/abarbosa94/followers",
"following_url": "https://api.github.com/users/abarbosa94/following{/other_user}",
"gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions",
"organizations_url": "https://api.github.com/users/abarbosa94/orgs",
"repos_url": "https://api.github.com/users/abarbosa94/repos",
"events_url": "https://api.github.com/users/abarbosa94/events{/privacy}",
"received_events_url": "https://api.github.com/users/abarbosa94/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,786,073,000 | 1,609,787,085,000 | 1,609,787,085,000 | CONTRIBUTOR | null | null | null | It seems to fail on the final batch ):
steps to reproduce:
```
from datasets import load_dataset
from elasticsearch import Elasticsearch
import torch
from transformers import file_utils, set_seed
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast
MAX_SEQ_LENGTH = 256
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base", cache_dir="../datasets/")
ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(
"facebook/dpr-ctx_encoder-single-nq-base",
cache_dir="..datasets/"
)
dataset = load_dataset('text',
data_files='data/raw/ARC_Corpus.txt',
cache_dir='../datasets')
torch.set_grad_enabled(False)
ds_with_embeddings = dataset.map(
lambda example: {
'embeddings': ctx_encoder(
**ctx_tokenizer(
example["text"],
padding='max_length',
truncation=True,
max_length=MAX_SEQ_LENGTH,
return_tensors="pt"
)
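# note: [0][0] keeps a single 768-dim vector for the whole 1000-example batch, which likely causes the ArrowInvalid below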
)[0][0].numpy(),
},
batched=True,
load_from_cache_file=False,
batch_size=1000
)
```
ARC Corpus can be obtained from [here](https://ai2-datasets.s3-us-west-2.amazonaws.com/arc/ARC-V1-Feb2018.zip)
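A hedged guess at the cause: with `batched=True` the mapped function must return one embedding per input row, while `[0][0]` above keeps a single vector. A possible corrected mapping (a sketch, not a confirmed fix) would keep the batch dimension:
```python
ds_with_embeddings = dataset.map(
    lambda batch: {
        "embeddings": ctx_encoder(
            **ctx_tokenizer(
                batch["text"],
                padding="max_length",
                truncation=True,
                max_length=MAX_SEQ_LENGTH,
                return_tensors="pt",
            )
        )[0].numpy(),  # shape (batch_size, 768): one vector per text
    },
    batched=True,
    batch_size=1000,
)
```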
And then the error:
```
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-13-67d139bb2ed3> in <module>
14 batched=True,
15 load_from_cache_file=False,
---> 16 batch_size=1000
17 )
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc)
301 num_proc=num_proc,
302 )
--> 303 for k, dataset in self.items()
304 }
305 )
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0)
301 num_proc=num_proc,
302 )
--> 303 for k, dataset in self.items()
304 }
305 )
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1257 fn_kwargs=fn_kwargs,
1258 new_fingerprint=new_fingerprint,
-> 1259 update_data=update_data,
1260 )
1261 else:
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
155 }
156 # apply actual function
--> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
159 # re-apply format to the output
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
161 # Call actual function
162
--> 163 out = func(self, *args, **kwargs)
164
165 # Update fingerprint of in-place transforms + update in-place history of transforms
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)
1526 if update_data:
1527 batch = cast_to_python_objects(batch)
-> 1528 writer.write_batch(batch)
1529 if update_data:
1530 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
276 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)
277 typed_sequence_examples[col] = typed_sequence
--> 278 pa_table = pa.Table.from_pydict(typed_sequence_examples)
279 self.write_table(pa_table)
280
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays()
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.validate()
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Column 1 named text expected length 768 but got length 1000
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1683/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1682 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1682/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1682/comments | https://api.github.com/repos/huggingface/datasets/issues/1682/events | https://github.com/huggingface/datasets/pull/1682 | 778,268,156 | MDExOlB1bGxSZXF1ZXN0NTQ4Mzg1NTk1 | 1,682 | Don't use xlrd for xlsx files | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,783,910,000 | 1,609,783,994,000 | 1,609,783,993,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1682",
"html_url": "https://github.com/huggingface/datasets/pull/1682",
"diff_url": "https://github.com/huggingface/datasets/pull/1682.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1682.patch",
"merged_at": 1609783993000
} | Since the latest release of `xlrd` (2.0), support for xlsx files has been dropped.
Therefore we needed to use something else.
A good alternative is `openpyxl`, which also has an integration with pandas, so we can still call `pd.read_excel`.
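For illustration (the file name is hypothetical), the pandas call just needs the `openpyxl` engine for `.xlsx` files:
```python
import pandas as pd

# xlrd >= 2.0 only reads legacy .xls files; openpyxl handles .xlsx
df = pd.read_excel("example.xlsx", engine="openpyxl")
```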
I left the unused import of `openpyxl` in the dataset scripts to show users that this is a required dependency to use the scripts.
I tested the different datasets using `datasets-cli test` and the tests are successful (no missing examples). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1682/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1681 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1681/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1681/comments | https://api.github.com/repos/huggingface/datasets/issues/1681/events | https://github.com/huggingface/datasets/issues/1681 | 777,644,163 | MDU6SXNzdWU3Nzc2NDQxNjM= | 1,681 | Dataset "dane" missing | {
"login": "KennethEnevoldsen",
"id": 23721977,
"node_id": "MDQ6VXNlcjIzNzIxOTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KennethEnevoldsen",
"html_url": "https://github.com/KennethEnevoldsen",
"followers_url": "https://api.github.com/users/KennethEnevoldsen/followers",
"following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_user}",
"gists_url": "https://api.github.com/users/KennethEnevoldsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KennethEnevoldsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KennethEnevoldsen/subscriptions",
"organizations_url": "https://api.github.com/users/KennethEnevoldsen/orgs",
"repos_url": "https://api.github.com/users/KennethEnevoldsen/repos",
"events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/KennethEnevoldsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,682,583,000 | 1,609,835,735,000 | 1,609,835,713,000 | CONTRIBUTOR | null | null | null | The `dane` dataset appears to be missing in the latest version (1.1.3).
```python
>>> import datasets
>>> datasets.__version__
'1.1.3'
>>> "dane" in datasets.list_datasets()
True
```
As we can see, it should be present, but it doesn't seem to be findable when using `load_dataset`.
```python
>>> datasets.load_dataset("dane")
Traceback (most recent call last):
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 300, in cached_path
output_path = get_from_cache(
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/dane/dane.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/load.py", line 278, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 300, in cached_path
output_path = get_from_cache(
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/dane/dane.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/load.py", line 588, in load_dataset
module_path, hash = prepare_module(
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/load.py", line 280, in prepare_module
raise FileNotFoundError(
FileNotFoundError: Couldn't find file locally at dane/dane.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/dane/dane.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/dane/dane.py
```
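A possible workaround, assuming the script was only added to master after the 1.1.3 release, is to pin the script version (an untested sketch):
```python
dataset = datasets.load_dataset("dane", script_version="master")
```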
This issue might be relevant to @ophelielacroix from the Alexandra Institute, who created the data. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1681/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1680 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1680/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1680/comments | https://api.github.com/repos/huggingface/datasets/issues/1680/events | https://github.com/huggingface/datasets/pull/1680 | 777,623,053 | MDExOlB1bGxSZXF1ZXN0NTQ3ODY4MjEw | 1,680 | added TurkishProductReviews dataset | {
"login": "basakbuluz",
"id": 41359672,
"node_id": "MDQ6VXNlcjQxMzU5Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/41359672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/basakbuluz",
"html_url": "https://github.com/basakbuluz",
"followers_url": "https://api.github.com/users/basakbuluz/followers",
"following_url": "https://api.github.com/users/basakbuluz/following{/other_user}",
"gists_url": "https://api.github.com/users/basakbuluz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/basakbuluz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/basakbuluz/subscriptions",
"organizations_url": "https://api.github.com/users/basakbuluz/orgs",
"repos_url": "https://api.github.com/users/basakbuluz/repos",
"events_url": "https://api.github.com/users/basakbuluz/events{/privacy}",
"received_events_url": "https://api.github.com/users/basakbuluz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,674,779,000 | 1,609,784,135,000 | 1,609,784,135,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1680",
"html_url": "https://github.com/huggingface/datasets/pull/1680",
"diff_url": "https://github.com/huggingface/datasets/pull/1680.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1680.patch",
"merged_at": 1609784135000
} | This PR adds the **Turkish Product Reviews** dataset, which contains 235,165 product reviews collected online: 220,284 positive and 14,881 negative reviews.
- **Repository:** [turkish-text-data](https://github.com/fthbrmnby/turkish-text-data)
- **Point of Contact:** Fatih Barmanbay - @fthbrmnby | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1680/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1679/comments | https://api.github.com/repos/huggingface/datasets/issues/1679/events | https://github.com/huggingface/datasets/issues/1679 | 777,587,792 | MDU6SXNzdWU3Nzc1ODc3OTI= | 1,679 | Can't import cc100 dataset | {
"login": "alighofrani95",
"id": 14968123,
"node_id": "MDQ6VXNlcjE0OTY4MTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/14968123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alighofrani95",
"html_url": "https://github.com/alighofrani95",
"followers_url": "https://api.github.com/users/alighofrani95/followers",
"following_url": "https://api.github.com/users/alighofrani95/following{/other_user}",
"gists_url": "https://api.github.com/users/alighofrani95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alighofrani95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alighofrani95/subscriptions",
"organizations_url": "https://api.github.com/users/alighofrani95/orgs",
"repos_url": "https://api.github.com/users/alighofrani95/repos",
"events_url": "https://api.github.com/users/alighofrani95/events{/privacy}",
"received_events_url": "https://api.github.com/users/alighofrani95/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,657,976,000 | 1,609,785,698,000 | null | NONE | null | null | null | There is an issue when importing the cc100 dataset.
```
from datasets import load_dataset
dataset = load_dataset("cc100")
```
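Running this raises the traceback below. As with similar reports against 1.1.3, a hedged workaround, assuming the script only exists on master and that a language config is required, might be:
```python
dataset = load_dataset("cc100", lang="en", script_version="master")
```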
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/cc100/cc100.py
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/cc100/cc100.py
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)
280 raise FileNotFoundError(
281 "Couldn't find file locally at {}, or remotely at {} or {}".format(
--> 282 combined_path, github_file_path, file_path
283 )
284 )
FileNotFoundError: Couldn't find file locally at cc100/cc100.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/cc100/cc100.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/cc100/cc100.py
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1679/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1678 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1678/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1678/comments | https://api.github.com/repos/huggingface/datasets/issues/1678/events | https://github.com/huggingface/datasets/pull/1678 | 777,567,920 | MDExOlB1bGxSZXF1ZXN0NTQ3ODI4MTMy | 1,678 | Switchboard Dialog Act Corpus added under `datasets/swda` | {
"login": "gmihaila",
"id": 22454783,
"node_id": "MDQ6VXNlcjIyNDU0Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gmihaila",
"html_url": "https://github.com/gmihaila",
"followers_url": "https://api.github.com/users/gmihaila/followers",
"following_url": "https://api.github.com/users/gmihaila/following{/other_user}",
"gists_url": "https://api.github.com/users/gmihaila/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gmihaila/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gmihaila/subscriptions",
"organizations_url": "https://api.github.com/users/gmihaila/orgs",
"repos_url": "https://api.github.com/users/gmihaila/repos",
"events_url": "https://api.github.com/users/gmihaila/events{/privacy}",
"received_events_url": "https://api.github.com/users/gmihaila/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,646,021,000 | 1,610,129,361,000 | 1,609,841,195,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1678",
"html_url": "https://github.com/huggingface/datasets/pull/1678",
"diff_url": "https://github.com/huggingface/datasets/pull/1678.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1678.patch",
"merged_at": 1609841195000
} | Switchboard Dialog Act Corpus
Intro:
The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2,
with turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information
about the associated turn. The SwDA project was undertaken at UC Boulder in the late 1990s.
Details:
[homepage](http://compprag.christopherpotts.net/swda.html)
[repo](https://github.com/NathanDuran/Switchboard-Corpus/raw/master/swda_data/)
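A minimal usage sketch, assuming the corpus lands under the `swda` name used in this PR:
```python
from datasets import load_dataset

swda = load_dataset("swda")
print(swda["train"][0])
```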
I believe this is an important dataset to have, since no dataset related to dialogue acts has been added yet.
I didn't find any formatting guidelines for the pull request. I hope all this information is enough.
For any support please contact me. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1678/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1677 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1677/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1677/comments | https://api.github.com/repos/huggingface/datasets/issues/1677/events | https://github.com/huggingface/datasets/pull/1677 | 777,553,383 | MDExOlB1bGxSZXF1ZXN0NTQ3ODE3ODI1 | 1,677 | Switchboard Dialog Act Corpus added under `datasets/swda` | {
"login": "gmihaila",
"id": 22454783,
"node_id": "MDQ6VXNlcjIyNDU0Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gmihaila",
"html_url": "https://github.com/gmihaila",
"followers_url": "https://api.github.com/users/gmihaila/followers",
"following_url": "https://api.github.com/users/gmihaila/following{/other_user}",
"gists_url": "https://api.github.com/users/gmihaila/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gmihaila/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gmihaila/subscriptions",
"organizations_url": "https://api.github.com/users/gmihaila/orgs",
"repos_url": "https://api.github.com/users/gmihaila/repos",
"events_url": "https://api.github.com/users/gmihaila/events{/privacy}",
"received_events_url": "https://api.github.com/users/gmihaila/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,636,602,000 | 1,609,642,557,000 | 1,609,642,556,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1677",
"html_url": "https://github.com/huggingface/datasets/pull/1677",
"diff_url": "https://github.com/huggingface/datasets/pull/1677.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1677.patch",
"merged_at": null
} | Pleased to announce that I added my first dataset, the **Switchboard Dialog Act Corpus**.
I think this is an important dataset to add since it is the only one related to dialogue act classification.
Hope the pull request is ok. Wasn't able to see any special formatting for the pull request form.
The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2,
with turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information
about the associated turn. The SwDA project was undertaken at UC Boulder in the late 1990s.
[webpage](http://compprag.christopherpotts.net/swda.html)
[repo](https://github.com/NathanDuran/Switchboard-Corpus/raw/master/swda_data/)
Please contact me for any support!
All tests passed and followed all steps in the contribution guide!
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1677/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1676 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1676/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1676/comments | https://api.github.com/repos/huggingface/datasets/issues/1676/events | https://github.com/huggingface/datasets/pull/1676 | 777,477,645 | MDExOlB1bGxSZXF1ZXN0NTQ3NzY1OTY3 | 1,676 | new version of Ted Talks IWSLT (WIT3) | {
"login": "skyprince999",
"id": 9033954,
"node_id": "MDQ6VXNlcjkwMzM5NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9033954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skyprince999",
"html_url": "https://github.com/skyprince999",
"followers_url": "https://api.github.com/users/skyprince999/followers",
"following_url": "https://api.github.com/users/skyprince999/following{/other_user}",
"gists_url": "https://api.github.com/users/skyprince999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skyprince999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skyprince999/subscriptions",
"organizations_url": "https://api.github.com/users/skyprince999/orgs",
"repos_url": "https://api.github.com/users/skyprince999/repos",
"events_url": "https://api.github.com/users/skyprince999/events{/privacy}",
"received_events_url": "https://api.github.com/users/skyprince999/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,601,403,000 | 1,610,619,019,000 | 1,610,619,019,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1676",
"html_url": "https://github.com/huggingface/datasets/pull/1676",
"diff_url": "https://github.com/huggingface/datasets/pull/1676.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1676.patch",
"merged_at": 1610619019000
} | In the previous iteration (#1608) I had used language pairs, which created 21,582 configs (109*108)!!!
Now, the TED talks in _each language_ are a separate config, so it's much cleaner with _just 109 configs_ (one for each language). Dummy files were created manually.
Locally I was able to pass `python datasets-cli test datasets/......`, which created the `dataset_info.json` file. The test for the dummy files also passed. However, I couldn't figure out how to specify the local data folder for the real dataset (see the sketch below).
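For the manual download case, a hedged sketch of what loading might look like (the config name and local path are hypothetical; `data_dir` is the usual way to point a manual-download script at local files):
```python
from datasets import load_dataset

dataset = load_dataset(
    "ted_talks_iwslt",
    "eu",  # hypothetical per-language config
    data_dir="/path/to/downloaded/wit3",
)
```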
**Note: that this requires manual download of the dataset.**
**Note2: The high number of _Files changed (112)_ is because of the large number of dummy files/configs!** | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1676/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1675/comments | https://api.github.com/repos/huggingface/datasets/issues/1675/events | https://github.com/huggingface/datasets/issues/1675 | 777,367,320 | MDU6SXNzdWU3NzczNjczMjA= | 1,675 | Add the 800GB Pile dataset? | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,541,892,000 | 1,638,372,547,000 | 1,638,372,547,000 | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open-source language modelling dataset that consists of 22 smaller, high-quality datasets combined. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement
- **Paper:** https://pile.eleuther.ai/paper.pdf
- **Data:** https://pile.eleuther.ai/
- **Motivation:** Enables hardcore (GPT-3 scale!) language modelling
## Remarks
Given the extreme size of this dataset, I'm not sure how feasible this will be to include in `datasets` 🤯 . I'm also unsure how many `datasets` users are pretraining LMs, so the usage of this dataset may not warrant the effort to integrate it.
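If it is added, streaming is probably the only practical access pattern at this scale. A minimal sketch, assuming a hypothetical `the_pile` script on the Hub with a `text` column (neither exists yet) and a `datasets` version with streaming support:

```python
from datasets import load_dataset

# Hypothetical: "the_pile" and the "text" column are assumptions —
# nothing is on the Hub yet. streaming=True iterates over the remote
# files lazily instead of downloading the full ~800 GB up front.
pile = load_dataset("the_pile", split="train", streaming=True)

for i, example in enumerate(pile):
    print(example["text"][:200])
    if i == 2:  # just peek at a few records
        break
```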
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1675/reactions",
"total_count": 12,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 5,
"eyes": 2
} | https://api.github.com/repos/huggingface/datasets/issues/1675/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1674/comments | https://api.github.com/repos/huggingface/datasets/issues/1674/events | https://github.com/huggingface/datasets/issues/1674 | 777,321,840 | MDU6SXNzdWU3NzczMjE4NDA= | 1,674 | dutch_social can't be loaded | {
"login": "koenvandenberge",
"id": 10134844,
"node_id": "MDQ6VXNlcjEwMTM0ODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/10134844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/koenvandenberge",
"html_url": "https://github.com/koenvandenberge",
"followers_url": "https://api.github.com/users/koenvandenberge/followers",
"following_url": "https://api.github.com/users/koenvandenberge/following{/other_user}",
"gists_url": "https://api.github.com/users/koenvandenberge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/koenvandenberge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/koenvandenberge/subscriptions",
"organizations_url": "https://api.github.com/users/koenvandenberge/orgs",
"repos_url": "https://api.github.com/users/koenvandenberge/repos",
"events_url": "https://api.github.com/users/koenvandenberge/events{/privacy}",
"received_events_url": "https://api.github.com/users/koenvandenberge/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,609,522,628,000 | 1,609,841,821,000 | null | NONE | null | null | null | Hi all,
I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social).
However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links.
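One workaround I considered (untested — it assumes the `dutch_social` script was merged after the 1.1.3 release and therefore only exists on `master`) is pinning the script version:

```python
from datasets import load_dataset

# Assumption: the loading script exists on the master branch but not in
# the 1.1.3 release, so script_version points the resolver at master.
dataset = load_dataset("dutch_social", script_version="master")
```

The full traceback from the plain `load_dataset('dutch_social')` call is below: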
```
(base) Koens-MacBook-Pro:~ koenvandenberge$ python
Python 3.7.4 (default, Aug 13 2019, 15:17:50)
[Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
dataset = load_dataset(
'dutch_social')
>>> dataset = load_dataset(
... 'dutch_social')
Traceback (most recent call last):
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/dutch_social/dutch_social.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 278, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/dutch_social/dutch_social.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 282, in prepare_module
combined_path, github_file_path, file_path
FileNotFoundError: Couldn't find file locally at dutch_social/dutch_social.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/dutch_social/dutch_social.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/dutch_social/dutch_social.py
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1674/timeline | null | false |