problem_id (string, 18-22 chars) | source (string, 1 distinct value) | task_type (string, 1 distinct value) | in_source_id (string, 13-58 chars) | prompt (string, 1.1k-25.4k chars) | golden_diff (string, 145-5.13k chars) | verification_info (string, 582-39.1k chars) | num_tokens (int64, 271-4.1k) | num_tokens_diff (int64, 47-1.02k)
---|---|---|---|---|---|---|---|---
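The rows below follow the schema above; each `prompt` cell already bundles the issue text, the relevant file contents, and the patch-format instructions, as the examples that follow show. As a minimal sketch of how such records could be loaded and unpacked, assuming this preview corresponds to a Hugging Face dataset repo (the repo id `rasdani/github-patches` is taken from the `source` column and the `train` split name is a guess, so both are placeholders rather than confirmed values):

```python
import json

from datasets import load_dataset

# Repo id and split are assumptions for illustration; substitute the actual
# dataset location and split if they differ.
ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["in_source_id"], row["num_tokens"], row["num_tokens_diff"])

# verification_info is stored as a JSON string; in the rows shown here it
# carries the golden diff, the issue text, and before/after file contents.
info = json.loads(row["verification_info"])
print(sorted(info.keys()))
```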
gh_patches_debug_14223 | rasdani/github-patches | git_diff | ibis-project__ibis-2556 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CLN: Remove or consolidate dev dependencies from setup.py and environment.yml
I noticed in https://github.com/ibis-project/ibis/pull/2547#issue-529169508 that the dev dependencies are not in sync in https://github.com/ibis-project/ibis/blob/master/setup.py#L63 and https://github.com/ibis-project/ibis/blob/master/environment.yml#L24
`environment.yml` looks more up to date; the dev dependencies in `setup.py` should either be synced with that file or just removed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 """Ibis setup module."""
3 import pathlib
4 import sys
5
6 from setuptools import find_packages, setup
7
8 import versioneer
9
10 LONG_DESCRIPTION = """
11 Ibis is a productivity-centric Python big data framework.
12
13 See http://ibis-project.org
14 """
15
16 VERSION = sys.version_info.major, sys.version_info.minor
17
18 impala_requires = ['hdfs>=2.0.16', 'sqlalchemy>=1.1,<1.3.7', 'requests']
19 impala_requires.append('impyla[kerberos]>=0.15.0')
20
21 sqlite_requires = ['sqlalchemy>=1.1,<1.3.7']
22 postgres_requires = sqlite_requires + ['psycopg2']
23 mysql_requires = sqlite_requires + ['pymysql']
24
25 omniscidb_requires = ['pymapd==0.24', 'pyarrow']
26 kerberos_requires = ['requests-kerberos']
27 visualization_requires = ['graphviz']
28 clickhouse_requires = [
29 'clickhouse-driver>=0.1.3',
30 'clickhouse-cityhash',
31 ]
32 bigquery_requires = [
33 'google-cloud-bigquery[bqstorage,pandas]>=1.12.0,<2.0.0dev',
34 'pydata-google-auth',
35 ]
36 hdf5_requires = ['tables>=3.0.0']
37
38 parquet_requires = ['pyarrow>=0.12.0']
39 spark_requires = ['pyspark>=2.4.3']
40
41 geospatial_requires = ['geoalchemy2', 'geopandas', 'shapely']
42
43 dask_requires = [
44 'dask[dataframe, array]',
45 ]
46
47 all_requires = (
48 impala_requires
49 + postgres_requires
50 + omniscidb_requires
51 + mysql_requires
52 + kerberos_requires
53 + visualization_requires
54 + clickhouse_requires
55 + bigquery_requires
56 + hdf5_requires
57 + parquet_requires
58 + spark_requires
59 + geospatial_requires
60 + dask_requires
61 )
62
63 develop_requires = all_requires + [
64 'black',
65 'click',
66 'pydocstyle==4.0.1',
67 'flake8',
68 'isort',
69 'mypy',
70 'pre-commit',
71 'pygit2',
72 'pytest>=4.5',
73 ]
74
75 install_requires = [
76 line.strip()
77 for line in pathlib.Path(__file__)
78 .parent.joinpath('requirements.txt')
79 .read_text()
80 .splitlines()
81 ]
82
83 setup(
84 name='ibis-framework',
85 url='https://github.com/ibis-project/ibis',
86 packages=find_packages(),
87 version=versioneer.get_version(),
88 cmdclass=versioneer.get_cmdclass(),
89 install_requires=install_requires,
90 python_requires='>=3.7',
91 extras_require={
92 'all': all_requires,
93 'develop': develop_requires,
94 'impala': impala_requires,
95 'kerberos': kerberos_requires,
96 'postgres': postgres_requires,
97 'omniscidb': omniscidb_requires,
98 'mysql': mysql_requires,
99 'sqlite': sqlite_requires,
100 'visualization': visualization_requires,
101 'clickhouse': clickhouse_requires,
102 'bigquery': bigquery_requires,
103 'hdf5': hdf5_requires,
104 'parquet': parquet_requires,
105 'spark': spark_requires,
106 'geospatial': geospatial_requires,
107 'dask': dask_requires,
108 },
109 description="Productivity-centric Python Big Data Framework",
110 long_description=LONG_DESCRIPTION,
111 classifiers=[
112 'Development Status :: 4 - Beta',
113 'Operating System :: OS Independent',
114 'Intended Audience :: Science/Research',
115 'Programming Language :: Python',
116 'Programming Language :: Python :: 3',
117 'Topic :: Scientific/Engineering',
118 ],
119 license='Apache License, Version 2.0',
120 maintainer="Phillip Cloud",
121 maintainer_email="[email protected]",
122 )
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -60,18 +60,6 @@
+ dask_requires
)
-develop_requires = all_requires + [
- 'black',
- 'click',
- 'pydocstyle==4.0.1',
- 'flake8',
- 'isort',
- 'mypy',
- 'pre-commit',
- 'pygit2',
- 'pytest>=4.5',
-]
-
install_requires = [
line.strip()
for line in pathlib.Path(__file__)
@@ -90,7 +78,6 @@
python_requires='>=3.7',
extras_require={
'all': all_requires,
- 'develop': develop_requires,
'impala': impala_requires,
'kerberos': kerberos_requires,
'postgres': postgres_requires,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -60,18 +60,6 @@\n + dask_requires\n )\n \n-develop_requires = all_requires + [\n- 'black',\n- 'click',\n- 'pydocstyle==4.0.1',\n- 'flake8',\n- 'isort',\n- 'mypy',\n- 'pre-commit',\n- 'pygit2',\n- 'pytest>=4.5',\n-]\n-\n install_requires = [\n line.strip()\n for line in pathlib.Path(__file__)\n@@ -90,7 +78,6 @@\n python_requires='>=3.7',\n extras_require={\n 'all': all_requires,\n- 'develop': develop_requires,\n 'impala': impala_requires,\n 'kerberos': kerberos_requires,\n 'postgres': postgres_requires,\n", "issue": "CLN: Remove or consolidate dev dependencies from setup.py and environment.yml\nI noticed in https://github.com/ibis-project/ibis/pull/2547#issue-529169508 that the dev dependencies are not in sync in https://github.com/ibis-project/ibis/blob/master/setup.py#L63 and https://github.com/ibis-project/ibis/blob/master/environment.yml#L24\r\n\r\n`environment.yml` looks more up to date; the dev dependencies in `setup.py` should either be synced with that file or just removed.\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"Ibis setup module.\"\"\"\nimport pathlib\nimport sys\n\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\nLONG_DESCRIPTION = \"\"\"\nIbis is a productivity-centric Python big data framework.\n\nSee http://ibis-project.org\n\"\"\"\n\nVERSION = sys.version_info.major, sys.version_info.minor\n\nimpala_requires = ['hdfs>=2.0.16', 'sqlalchemy>=1.1,<1.3.7', 'requests']\nimpala_requires.append('impyla[kerberos]>=0.15.0')\n\nsqlite_requires = ['sqlalchemy>=1.1,<1.3.7']\npostgres_requires = sqlite_requires + ['psycopg2']\nmysql_requires = sqlite_requires + ['pymysql']\n\nomniscidb_requires = ['pymapd==0.24', 'pyarrow']\nkerberos_requires = ['requests-kerberos']\nvisualization_requires = ['graphviz']\nclickhouse_requires = [\n 'clickhouse-driver>=0.1.3',\n 'clickhouse-cityhash',\n]\nbigquery_requires = [\n 'google-cloud-bigquery[bqstorage,pandas]>=1.12.0,<2.0.0dev',\n 'pydata-google-auth',\n]\nhdf5_requires = ['tables>=3.0.0']\n\nparquet_requires = ['pyarrow>=0.12.0']\nspark_requires = ['pyspark>=2.4.3']\n\ngeospatial_requires = ['geoalchemy2', 'geopandas', 'shapely']\n\ndask_requires = [\n 'dask[dataframe, array]',\n]\n\nall_requires = (\n impala_requires\n + postgres_requires\n + omniscidb_requires\n + mysql_requires\n + kerberos_requires\n + visualization_requires\n + clickhouse_requires\n + bigquery_requires\n + hdf5_requires\n + parquet_requires\n + spark_requires\n + geospatial_requires\n + dask_requires\n)\n\ndevelop_requires = all_requires + [\n 'black',\n 'click',\n 'pydocstyle==4.0.1',\n 'flake8',\n 'isort',\n 'mypy',\n 'pre-commit',\n 'pygit2',\n 'pytest>=4.5',\n]\n\ninstall_requires = [\n line.strip()\n for line in pathlib.Path(__file__)\n .parent.joinpath('requirements.txt')\n .read_text()\n .splitlines()\n]\n\nsetup(\n name='ibis-framework',\n url='https://github.com/ibis-project/ibis',\n packages=find_packages(),\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n install_requires=install_requires,\n python_requires='>=3.7',\n extras_require={\n 'all': all_requires,\n 'develop': develop_requires,\n 'impala': impala_requires,\n 'kerberos': kerberos_requires,\n 'postgres': postgres_requires,\n 'omniscidb': omniscidb_requires,\n 'mysql': mysql_requires,\n 'sqlite': sqlite_requires,\n 'visualization': visualization_requires,\n 'clickhouse': clickhouse_requires,\n 'bigquery': bigquery_requires,\n 'hdf5': hdf5_requires,\n 
'parquet': parquet_requires,\n 'spark': spark_requires,\n 'geospatial': geospatial_requires,\n 'dask': dask_requires,\n },\n description=\"Productivity-centric Python Big Data Framework\",\n long_description=LONG_DESCRIPTION,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Operating System :: OS Independent',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Topic :: Scientific/Engineering',\n ],\n license='Apache License, Version 2.0',\n maintainer=\"Phillip Cloud\",\n maintainer_email=\"[email protected]\",\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"Ibis setup module.\"\"\"\nimport pathlib\nimport sys\n\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\nLONG_DESCRIPTION = \"\"\"\nIbis is a productivity-centric Python big data framework.\n\nSee http://ibis-project.org\n\"\"\"\n\nVERSION = sys.version_info.major, sys.version_info.minor\n\nimpala_requires = ['hdfs>=2.0.16', 'sqlalchemy>=1.1,<1.3.7', 'requests']\nimpala_requires.append('impyla[kerberos]>=0.15.0')\n\nsqlite_requires = ['sqlalchemy>=1.1,<1.3.7']\npostgres_requires = sqlite_requires + ['psycopg2']\nmysql_requires = sqlite_requires + ['pymysql']\n\nomniscidb_requires = ['pymapd==0.24', 'pyarrow']\nkerberos_requires = ['requests-kerberos']\nvisualization_requires = ['graphviz']\nclickhouse_requires = [\n 'clickhouse-driver>=0.1.3',\n 'clickhouse-cityhash',\n]\nbigquery_requires = [\n 'google-cloud-bigquery[bqstorage,pandas]>=1.12.0,<2.0.0dev',\n 'pydata-google-auth',\n]\nhdf5_requires = ['tables>=3.0.0']\n\nparquet_requires = ['pyarrow>=0.12.0']\nspark_requires = ['pyspark>=2.4.3']\n\ngeospatial_requires = ['geoalchemy2', 'geopandas', 'shapely']\n\ndask_requires = [\n 'dask[dataframe, array]',\n]\n\nall_requires = (\n impala_requires\n + postgres_requires\n + omniscidb_requires\n + mysql_requires\n + kerberos_requires\n + visualization_requires\n + clickhouse_requires\n + bigquery_requires\n + hdf5_requires\n + parquet_requires\n + spark_requires\n + geospatial_requires\n + dask_requires\n)\n\ninstall_requires = [\n line.strip()\n for line in pathlib.Path(__file__)\n .parent.joinpath('requirements.txt')\n .read_text()\n .splitlines()\n]\n\nsetup(\n name='ibis-framework',\n url='https://github.com/ibis-project/ibis',\n packages=find_packages(),\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n install_requires=install_requires,\n python_requires='>=3.7',\n extras_require={\n 'all': all_requires,\n 'impala': impala_requires,\n 'kerberos': kerberos_requires,\n 'postgres': postgres_requires,\n 'omniscidb': omniscidb_requires,\n 'mysql': mysql_requires,\n 'sqlite': sqlite_requires,\n 'visualization': visualization_requires,\n 'clickhouse': clickhouse_requires,\n 'bigquery': bigquery_requires,\n 'hdf5': hdf5_requires,\n 'parquet': parquet_requires,\n 'spark': spark_requires,\n 'geospatial': geospatial_requires,\n 'dask': dask_requires,\n },\n description=\"Productivity-centric Python Big Data Framework\",\n long_description=LONG_DESCRIPTION,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Operating System :: OS Independent',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Topic :: Scientific/Engineering',\n ],\n license='Apache License, Version 2.0',\n maintainer=\"Phillip Cloud\",\n maintainer_email=\"[email protected]\",\n)\n", "path": "setup.py"}]} | 1,494 | 196 |
gh_patches_debug_16105 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1812 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Increase width of algorithm result table
The table on the algorithm results page can become wider than the page container if the name of the scan is very long. The user then has to scroll to the right to see the "Open Result in Viewer" button, which is quite confusing.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/core/context_processors.py`
Content:
```
1 import logging
2
3 from django.conf import settings
4 from guardian.shortcuts import get_perms
5 from guardian.utils import get_anonymous_user
6
7 from grandchallenge.blogs.models import Post
8 from grandchallenge.policies.models import Policy
9
10 logger = logging.getLogger(__name__)
11
12
13 def challenge(request):
14 try:
15 challenge = request.challenge
16
17 if challenge is None:
18 return {}
19
20 except AttributeError:
21 logger.warning(f"Could not get challenge for request: {request}")
22 return {}
23
24 try:
25 user = request.user
26 except AttributeError:
27 user = get_anonymous_user()
28
29 return {
30 "challenge": challenge,
31 "challenge_perms": get_perms(user, challenge),
32 "user_is_participant": challenge.is_participant(user),
33 "pages": challenge.page_set.all(),
34 }
35
36
37 def deployment_info(*_, **__):
38 return {
39 "google_analytics_id": settings.GOOGLE_ANALYTICS_ID,
40 "geochart_api_key": settings.GOOGLE_MAPS_API_KEY,
41 "COMMIT_ID": settings.COMMIT_ID,
42 }
43
44
45 def debug(*_, **__):
46 return {
47 "DEBUG": settings.DEBUG,
48 "ACTSTREAM_ENABLE": settings.ACTSTREAM_ENABLE,
49 }
50
51
52 def sentry_dsn(*_, **__):
53 return {
54 "SENTRY_DSN": settings.SENTRY_DSN,
55 "SENTRY_ENABLE_JS_REPORTING": settings.SENTRY_ENABLE_JS_REPORTING,
56 }
57
58
59 def footer_links(*_, **__):
60 return {
61 "policy_pages": Policy.objects.all(),
62 "blog_posts": Post.objects.filter(published=True),
63 }
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/grandchallenge/core/context_processors.py b/app/grandchallenge/core/context_processors.py
--- a/app/grandchallenge/core/context_processors.py
+++ b/app/grandchallenge/core/context_processors.py
@@ -5,6 +5,7 @@
from guardian.utils import get_anonymous_user
from grandchallenge.blogs.models import Post
+from grandchallenge.participants.models import RegistrationRequest
from grandchallenge.policies.models import Policy
logger = logging.getLogger(__name__)
@@ -31,6 +32,9 @@
"challenge_perms": get_perms(user, challenge),
"user_is_participant": challenge.is_participant(user),
"pages": challenge.page_set.all(),
+ "pending_requests": challenge.registrationrequest_set.filter(
+ status=RegistrationRequest.PENDING
+ ),
}
| {"golden_diff": "diff --git a/app/grandchallenge/core/context_processors.py b/app/grandchallenge/core/context_processors.py\n--- a/app/grandchallenge/core/context_processors.py\n+++ b/app/grandchallenge/core/context_processors.py\n@@ -5,6 +5,7 @@\n from guardian.utils import get_anonymous_user\n \n from grandchallenge.blogs.models import Post\n+from grandchallenge.participants.models import RegistrationRequest\n from grandchallenge.policies.models import Policy\n \n logger = logging.getLogger(__name__)\n@@ -31,6 +32,9 @@\n \"challenge_perms\": get_perms(user, challenge),\n \"user_is_participant\": challenge.is_participant(user),\n \"pages\": challenge.page_set.all(),\n+ \"pending_requests\": challenge.registrationrequest_set.filter(\n+ status=RegistrationRequest.PENDING\n+ ),\n }\n", "issue": "Increase width of algorithm result table\nThe table on the algorithm results page can become wider than the page container if the name of the scan is very long. The user then has to scroll to the right to see the \"Open Result in Viewer\" button, which is quite confusing.\r\n\r\n\n", "before_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom guardian.shortcuts import get_perms\nfrom guardian.utils import get_anonymous_user\n\nfrom grandchallenge.blogs.models import Post\nfrom grandchallenge.policies.models import Policy\n\nlogger = logging.getLogger(__name__)\n\n\ndef challenge(request):\n try:\n challenge = request.challenge\n\n if challenge is None:\n return {}\n\n except AttributeError:\n logger.warning(f\"Could not get challenge for request: {request}\")\n return {}\n\n try:\n user = request.user\n except AttributeError:\n user = get_anonymous_user()\n\n return {\n \"challenge\": challenge,\n \"challenge_perms\": get_perms(user, challenge),\n \"user_is_participant\": challenge.is_participant(user),\n \"pages\": challenge.page_set.all(),\n }\n\n\ndef deployment_info(*_, **__):\n return {\n \"google_analytics_id\": settings.GOOGLE_ANALYTICS_ID,\n \"geochart_api_key\": settings.GOOGLE_MAPS_API_KEY,\n \"COMMIT_ID\": settings.COMMIT_ID,\n }\n\n\ndef debug(*_, **__):\n return {\n \"DEBUG\": settings.DEBUG,\n \"ACTSTREAM_ENABLE\": settings.ACTSTREAM_ENABLE,\n }\n\n\ndef sentry_dsn(*_, **__):\n return {\n \"SENTRY_DSN\": settings.SENTRY_DSN,\n \"SENTRY_ENABLE_JS_REPORTING\": settings.SENTRY_ENABLE_JS_REPORTING,\n }\n\n\ndef footer_links(*_, **__):\n return {\n \"policy_pages\": Policy.objects.all(),\n \"blog_posts\": Post.objects.filter(published=True),\n }\n", "path": "app/grandchallenge/core/context_processors.py"}], "after_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom guardian.shortcuts import get_perms\nfrom guardian.utils import get_anonymous_user\n\nfrom grandchallenge.blogs.models import Post\nfrom grandchallenge.participants.models import RegistrationRequest\nfrom grandchallenge.policies.models import Policy\n\nlogger = logging.getLogger(__name__)\n\n\ndef challenge(request):\n try:\n challenge = request.challenge\n\n if challenge is None:\n return {}\n\n except AttributeError:\n logger.warning(f\"Could not get challenge for request: {request}\")\n return {}\n\n try:\n user = request.user\n except AttributeError:\n user = get_anonymous_user()\n\n return {\n \"challenge\": challenge,\n \"challenge_perms\": get_perms(user, challenge),\n \"user_is_participant\": challenge.is_participant(user),\n \"pages\": challenge.page_set.all(),\n \"pending_requests\": challenge.registrationrequest_set.filter(\n status=RegistrationRequest.PENDING\n 
),\n }\n\n\ndef deployment_info(*_, **__):\n return {\n \"google_analytics_id\": settings.GOOGLE_ANALYTICS_ID,\n \"geochart_api_key\": settings.GOOGLE_MAPS_API_KEY,\n \"COMMIT_ID\": settings.COMMIT_ID,\n }\n\n\ndef debug(*_, **__):\n return {\"DEBUG\": settings.DEBUG}\n\n\ndef sentry_dsn(*_, **__):\n return {\n \"SENTRY_DSN\": settings.SENTRY_DSN,\n \"SENTRY_ENABLE_JS_REPORTING\": settings.SENTRY_ENABLE_JS_REPORTING,\n }\n\n\ndef footer_links(*_, **__):\n return {\n \"policy_pages\": Policy.objects.all(),\n \"blog_posts\": Post.objects.filter(published=True),\n }\n", "path": "app/grandchallenge/core/context_processors.py"}]} | 841 | 170 |
gh_patches_debug_1255 | rasdani/github-patches | git_diff | ivy-llc__ivy-17989 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fmax
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/paddle/tensor/math.py`
Content:
```
1 # global
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
4 from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back
5
6
7 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
8 @to_ivy_arrays_and_back
9 def sin(x, name=None):
10 return ivy.sin(x)
11
12
13 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
14 @to_ivy_arrays_and_back
15 def cos(x, name=None):
16 return ivy.cos(x)
17
18
19 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
20 @to_ivy_arrays_and_back
21 def acos(x, name=None):
22 return ivy.acos(x)
23
24
25 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
26 @to_ivy_arrays_and_back
27 def cosh(x, name=None):
28 return ivy.cosh(x)
29
30
31 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
32 @to_ivy_arrays_and_back
33 def tanh(x, name=None):
34 return ivy.tanh(x)
35
36
37 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
38 @to_ivy_arrays_and_back
39 def acosh(x, name=None):
40 return ivy.acosh(x)
41
42
43 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
44 @to_ivy_arrays_and_back
45 def asin(x, name=None):
46 return ivy.asin(x)
47
48
49 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
50 @to_ivy_arrays_and_back
51 def log(x, name=None):
52 return ivy.log(x)
53
54
55 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
56 @to_ivy_arrays_and_back
57 def divide(x, y, name=None):
58 return ivy.divide(x, y)
59
60
61 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
62 @to_ivy_arrays_and_back
63 def abs(x, name=None):
64 return ivy.abs(x)
65
66
67 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
68 @to_ivy_arrays_and_back
69 def multiply(x, y, name=None):
70 return ivy.multiply(x, y)
71
72
73 @with_unsupported_dtypes(
74 {"2.5.0 and below": ("bool", "unsigned", "int8", "float16", "bfloat16")}, "paddle"
75 )
76 @to_ivy_arrays_and_back
77 def add(x, y, name=None):
78 return ivy.add(x, y)
79
80
81 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
82 @to_ivy_arrays_and_back
83 def subtract(x, y, name=None):
84 return ivy.subtract(x, y)
85
86
87 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
88 @to_ivy_arrays_and_back
89 def sqrt(x, name=None):
90 return ivy.sqrt(x)
91
92
93 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
94 @to_ivy_arrays_and_back
95 def atanh(x, name=None):
96 return ivy.atanh(x)
97
98
99 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
100 @to_ivy_arrays_and_back
101 def atan(x, name=None):
102 return ivy.atan(x)
103
104
105 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
106 @to_ivy_arrays_and_back
107 def round(x, name=None):
108 return ivy.round(x)
109
110
111 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
112 @to_ivy_arrays_and_back
113 def ceil(x, name=None):
114 return ivy.ceil(x)
115
116
117 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
118 @to_ivy_arrays_and_back
119 def sinh(x, name=None):
120 return ivy.sinh(x)
121
122
123 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
124 @to_ivy_arrays_and_back
125 def pow(x, y, name=None):
126 return ivy.pow(x, y)
127
128
129 @with_unsupported_dtypes({"2.4.2 and below": ("int16", "float16")}, "paddle")
130 @to_ivy_arrays_and_back
131 def conj(x, name=None):
132 return ivy.conj(x)
133
134
135 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
136 @to_ivy_arrays_and_back
137 def floor(x, name=None):
138 return ivy.floor(x)
139
140
141 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
142 @to_ivy_arrays_and_back
143 def remainder(x, y, name=None):
144 return ivy.remainder(x, y)
145
146
147 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
148 @to_ivy_arrays_and_back
149 def log2(x, name=None):
150 return ivy.log2(x)
151
152
153 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
154 @to_ivy_arrays_and_back
155 def log1p(x, name=None):
156 return ivy.log1p(x)
157
158
159 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
160 @to_ivy_arrays_and_back
161 def rad2deg(x, name=None):
162 return ivy.rad2deg(x)
163
164
165 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
166 @to_ivy_arrays_and_back
167 def deg2rad(x, name=None):
168 return ivy.deg2rad(x)
169
170
171 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
172 @to_ivy_arrays_and_back
173 def gcd(x, y, name=None):
174 return ivy.gcd(x, y)
175
176
177 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
178 @to_ivy_arrays_and_back
179 def tan(x, name=None):
180 return ivy.tan(x)
181
182
183 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
184 @to_ivy_arrays_and_back
185 def atan2(x, y, name=None):
186 return ivy.atan2(x, y)
187
188
189 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
190 @to_ivy_arrays_and_back
191 def square(x, name=None):
192 return ivy.square(x)
193
194
195 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
196 @to_ivy_arrays_and_back
197 def sign(x, name=None):
198 return ivy.sign(x)
199
200
201 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
202 @to_ivy_arrays_and_back
203 def neg(x, name=None):
204 return ivy.negative(x)
205
206
207 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
208 @to_ivy_arrays_and_back
209 def exp(x, name=None):
210 return ivy.exp(x)
211
212
213 @with_supported_dtypes(
214 {
215 "2.4.2 and below": (
216 "float32",
217 "float64",
218 "int32",
219 "int64",
220 "complex64",
221 "complex128",
222 )
223 },
224 "paddle",
225 )
226 @to_ivy_arrays_and_back
227 def cumprod(x, dim=None, dtype=None, name=None):
228 return ivy.cumprod(x, axis=dim, dtype=dtype)
229
230
231 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
232 @to_ivy_arrays_and_back
233 def reciprocal(x, name=None):
234 return ivy.reciprocal(x)
235
236
237 @with_supported_dtypes(
238 {"2.5.0 and below": ("complex64", "complex128", "float32", "float64")},
239 "paddle",
240 )
241 @to_ivy_arrays_and_back
242 def angle(x, name=None):
243 return ivy.angle(x)
244
245
246 @with_unsupported_dtypes({"2.5.0 and below": "bfloat16"}, "paddle")
247 @to_ivy_arrays_and_back
248 def fmin(x, y, name=None):
249 return ivy.fmin(x, y)
250
251
252 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
253 @to_ivy_arrays_and_back
254 def logit(x, eps=None, name=None):
255 return ivy.logit(x, eps=eps)
256
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/paddle/tensor/math.py b/ivy/functional/frontends/paddle/tensor/math.py
--- a/ivy/functional/frontends/paddle/tensor/math.py
+++ b/ivy/functional/frontends/paddle/tensor/math.py
@@ -253,3 +253,9 @@
@to_ivy_arrays_and_back
def logit(x, eps=None, name=None):
return ivy.logit(x, eps=eps)
+
+
+@with_unsupported_dtypes({"2.5.0 and below": "bfloat16"}, "paddle")
+@to_ivy_arrays_and_back
+def fmax(x, y, name=None):
+ return ivy.fmax(x, y)
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/math.py b/ivy/functional/frontends/paddle/tensor/math.py\n--- a/ivy/functional/frontends/paddle/tensor/math.py\n+++ b/ivy/functional/frontends/paddle/tensor/math.py\n@@ -253,3 +253,9 @@\n @to_ivy_arrays_and_back\n def logit(x, eps=None, name=None):\n return ivy.logit(x, eps=eps)\n+\n+\n+@with_unsupported_dtypes({\"2.5.0 and below\": \"bfloat16\"}, \"paddle\")\n+@to_ivy_arrays_and_back\n+def fmax(x, y, name=None):\n+ return ivy.fmax(x, y)\n", "issue": "fmax\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sin(x, name=None):\n return ivy.sin(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cos(x, name=None):\n return ivy.cos(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acos(x, name=None):\n return ivy.acos(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cosh(x, name=None):\n return ivy.cosh(x)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef tanh(x, name=None):\n return ivy.tanh(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acosh(x, name=None):\n return ivy.acosh(x)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef asin(x, name=None):\n return ivy.asin(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log(x, name=None):\n return ivy.log(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef divide(x, y, name=None):\n return ivy.divide(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef abs(x, name=None):\n return ivy.abs(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef multiply(x, y, name=None):\n return ivy.multiply(x, y)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"bool\", \"unsigned\", \"int8\", \"float16\", \"bfloat16\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef add(x, y, name=None):\n return ivy.add(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef subtract(x, y, name=None):\n return ivy.subtract(x, y)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sqrt(x, name=None):\n return ivy.sqrt(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atanh(x, name=None):\n return ivy.atanh(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atan(x, name=None):\n return ivy.atan(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, 
\"paddle\")\n@to_ivy_arrays_and_back\ndef round(x, name=None):\n return ivy.round(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef ceil(x, name=None):\n return ivy.ceil(x)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sinh(x, name=None):\n return ivy.sinh(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef pow(x, y, name=None):\n return ivy.pow(x, y)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"int16\", \"float16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef conj(x, name=None):\n return ivy.conj(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef floor(x, name=None):\n return ivy.floor(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef remainder(x, y, name=None):\n return ivy.remainder(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log2(x, name=None):\n return ivy.log2(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log1p(x, name=None):\n return ivy.log1p(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef rad2deg(x, name=None):\n return ivy.rad2deg(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef deg2rad(x, name=None):\n return ivy.deg2rad(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef gcd(x, y, name=None):\n return ivy.gcd(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef tan(x, name=None):\n return ivy.tan(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atan2(x, y, name=None):\n return ivy.atan2(x, y)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef square(x, name=None):\n return ivy.square(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sign(x, name=None):\n return ivy.sign(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef neg(x, name=None):\n return ivy.negative(x)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef exp(x, name=None):\n return ivy.exp(x)\n\n\n@with_supported_dtypes(\n {\n \"2.4.2 and below\": (\n \"float32\",\n \"float64\",\n \"int32\",\n \"int64\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef cumprod(x, dim=None, dtype=None, name=None):\n return ivy.cumprod(x, axis=dim, dtype=dtype)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef reciprocal(x, name=None):\n return ivy.reciprocal(x)\n\n\n@with_supported_dtypes(\n {\"2.5.0 and below\": (\"complex64\", \"complex128\", \"float32\", \"float64\")},\n 
\"paddle\",\n)\n@to_ivy_arrays_and_back\ndef angle(x, name=None):\n return ivy.angle(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": \"bfloat16\"}, \"paddle\")\n@to_ivy_arrays_and_back\ndef fmin(x, y, name=None):\n return ivy.fmin(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef logit(x, eps=None, name=None):\n return ivy.logit(x, eps=eps)\n", "path": "ivy/functional/frontends/paddle/tensor/math.py"}], "after_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sin(x, name=None):\n return ivy.sin(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cos(x, name=None):\n return ivy.cos(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acos(x, name=None):\n return ivy.acos(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cosh(x, name=None):\n return ivy.cosh(x)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef tanh(x, name=None):\n return ivy.tanh(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acosh(x, name=None):\n return ivy.acosh(x)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef asin(x, name=None):\n return ivy.asin(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log(x, name=None):\n return ivy.log(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef divide(x, y, name=None):\n return ivy.divide(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef abs(x, name=None):\n return ivy.abs(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef multiply(x, y, name=None):\n return ivy.multiply(x, y)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"bool\", \"unsigned\", \"int8\", \"float16\", \"bfloat16\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef add(x, y, name=None):\n return ivy.add(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef subtract(x, y, name=None):\n return ivy.subtract(x, y)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sqrt(x, name=None):\n return ivy.sqrt(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atanh(x, name=None):\n return ivy.atanh(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atan(x, name=None):\n return ivy.atan(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef round(x, 
name=None):\n return ivy.round(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef ceil(x, name=None):\n return ivy.ceil(x)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sinh(x, name=None):\n return ivy.sinh(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef pow(x, y, name=None):\n return ivy.pow(x, y)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"int16\", \"float16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef conj(x, name=None):\n return ivy.conj(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef floor(x, name=None):\n return ivy.floor(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef remainder(x, y, name=None):\n return ivy.remainder(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log2(x, name=None):\n return ivy.log2(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log1p(x, name=None):\n return ivy.log1p(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef rad2deg(x, name=None):\n return ivy.rad2deg(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef deg2rad(x, name=None):\n return ivy.deg2rad(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef gcd(x, y, name=None):\n return ivy.gcd(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef tan(x, name=None):\n return ivy.tan(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atan2(x, y, name=None):\n return ivy.atan2(x, y)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef square(x, name=None):\n return ivy.square(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sign(x, name=None):\n return ivy.sign(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef neg(x, name=None):\n return ivy.negative(x)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef exp(x, name=None):\n return ivy.exp(x)\n\n\n@with_supported_dtypes(\n {\n \"2.4.2 and below\": (\n \"float32\",\n \"float64\",\n \"int32\",\n \"int64\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef cumprod(x, dim=None, dtype=None, name=None):\n return ivy.cumprod(x, axis=dim, dtype=dtype)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef reciprocal(x, name=None):\n return ivy.reciprocal(x)\n\n\n@with_supported_dtypes(\n {\"2.5.0 and below\": (\"complex64\", \"complex128\", \"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef angle(x, name=None):\n return 
ivy.angle(x)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": \"bfloat16\"}, \"paddle\")\n@to_ivy_arrays_and_back\ndef fmin(x, y, name=None):\n return ivy.fmin(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef logit(x, eps=None, name=None):\n return ivy.logit(x, eps=eps)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": \"bfloat16\"}, \"paddle\")\n@to_ivy_arrays_and_back\ndef fmax(x, y, name=None):\n return ivy.fmax(x, y)\n", "path": "ivy/functional/frontends/paddle/tensor/math.py"}]} | 3,255 | 164 |
gh_patches_debug_26483 | rasdani/github-patches | git_diff | getnikola__nikola-3482 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add full GLOBAL_CONTEXT support to the post_list plugin
<!--
Before creating an issue:
* make sure you are using an up-to-date version of Nikola
* search for existing issues that might be related
Describe your requested features as precisely as possible. -->
I've got some data and functions in `GLOBAL_CONTEXT` that I'd like to use in a custom post list template. Right now, it appears that only the locale's date format is passed along to the template context.
Would you accept a PR to make all of the `GLOBAL_CONTEXT` available to the plugin?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/shortcode/post_list.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2013-2020 Udo Spallek, Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Post list shortcode."""
28
29
30 import operator
31 import os
32 import uuid
33
34 import natsort
35
36 from nikola import utils
37 from nikola.packages.datecond import date_in_range
38 from nikola.plugin_categories import ShortcodePlugin
39
40
41 class PostListShortcode(ShortcodePlugin):
42 """Provide a shortcode to create a list of posts.
43
44 Post List
45 =========
46 :Directive Arguments: None.
47 :Directive Options: lang, start, stop, reverse, sort, date, tags, categories, sections, slugs, post_type, template, id
48 :Directive Content: None.
49
50 The posts appearing in the list can be filtered by options.
51 *List slicing* is provided with the *start*, *stop* and *reverse* options.
52
53 The following not required options are recognized:
54
55 ``start`` : integer
56 The index of the first post to show.
57 A negative value like ``-3`` will show the *last* three posts in the
58 post-list.
59 Defaults to None.
60
61 ``stop`` : integer
62 The index of the last post to show.
63 A value negative value like ``-1`` will show every post, but not the
64 *last* in the post-list.
65 Defaults to None.
66
67 ``reverse`` : flag
68 Reverse the order of the post-list.
69 Defaults is to not reverse the order of posts.
70
71 ``sort`` : string
72 Sort post list by one of each post's attributes, usually ``title`` or a
73 custom ``priority``. Defaults to None (chronological sorting).
74
75 ``date`` : string
76 Show posts that match date range specified by this option. Format:
77
78 * comma-separated clauses (AND)
79 * clause: attribute comparison_operator value (spaces optional)
80 * attribute: year, month, day, hour, month, second, weekday, isoweekday; or empty for full datetime
81 * comparison_operator: == != <= >= < >
82 * value: integer, 'now', 'today', or dateutil-compatible date input
83
84 ``tags`` : string [, string...]
85 Filter posts to show only posts having at least one of the ``tags``.
86 Defaults to None.
87
88 ``require_all_tags`` : flag
89 Change tag filter behaviour to show only posts that have all specified ``tags``.
90 Defaults to False.
91
92 ``categories`` : string [, string...]
93 Filter posts to show only posts having one of the ``categories``.
94 Defaults to None.
95
96 ``sections`` : string [, string...]
97 Filter posts to show only posts having one of the ``sections``.
98 Defaults to None.
99
100 ``slugs`` : string [, string...]
101 Filter posts to show only posts having at least one of the ``slugs``.
102 Defaults to None.
103
104 ``post_type`` (or ``type``) : string
105 Show only ``posts``, ``pages`` or ``all``.
106 Replaces ``all``. Defaults to ``posts``.
107
108 ``lang`` : string
109 The language of post *titles* and *links*.
110 Defaults to default language.
111
112 ``template`` : string
113 The name of an alternative template to render the post-list.
114 Defaults to ``post_list_directive.tmpl``
115
116 ``id`` : string
117 A manual id for the post list.
118 Defaults to a random name composed by 'post_list_' + uuid.uuid4().hex.
119 """
120
121 name = "post_list"
122
123 def set_site(self, site):
124 """Set the site."""
125 super().set_site(site)
126 site.register_shortcode('post-list', self.handler)
127
128 def handler(self, start=None, stop=None, reverse=False, tags=None, require_all_tags=False, categories=None,
129 sections=None, slugs=None, post_type='post', type=False,
130 lang=None, template='post_list_directive.tmpl', sort=None,
131 id=None, data=None, state=None, site=None, date=None, filename=None, post=None):
132 """Generate HTML for post-list."""
133 if lang is None:
134 lang = utils.LocaleBorg().current_lang
135 if site.invariant: # for testing purposes
136 post_list_id = id or 'post_list_' + 'fixedvaluethatisnotauuid'
137 else:
138 post_list_id = id or 'post_list_' + uuid.uuid4().hex
139
140 # Get post from filename if available
141 if filename:
142 self_post = site.post_per_input_file.get(filename)
143 else:
144 self_post = None
145
146 if self_post:
147 self_post.register_depfile("####MAGIC####TIMELINE", lang=lang)
148
149 # If we get strings for start/stop, make them integers
150 if start is not None:
151 start = int(start)
152 if stop is not None:
153 stop = int(stop)
154
155 # Parse tags/categories/sections/slugs (input is strings)
156 categories = [c.strip().lower() for c in categories.split(',')] if categories else []
157 sections = [s.strip().lower() for s in sections.split(',')] if sections else []
158 slugs = [s.strip() for s in slugs.split(',')] if slugs else []
159
160 filtered_timeline = []
161 posts = []
162 step = None if reverse is False else -1
163
164 if type is not False:
165 post_type = type
166
167 if post_type == 'page' or post_type == 'pages':
168 timeline = [p for p in site.timeline if not p.use_in_feeds]
169 elif post_type == 'all':
170 timeline = [p for p in site.timeline]
171 else: # post
172 timeline = [p for p in site.timeline if p.use_in_feeds]
173
174 # self_post should be removed from timeline because this is redundant
175 timeline = [p for p in timeline if p.source_path != filename]
176
177 if categories:
178 timeline = [p for p in timeline if p.meta('category', lang=lang).lower() in categories]
179
180 if sections:
181 timeline = [p for p in timeline if p.section_name(lang).lower() in sections]
182
183 if tags:
184 tags = {t.strip().lower() for t in tags.split(',')}
185 if require_all_tags:
186 compare = set.issubset
187 else:
188 compare = operator.and_
189 for post in timeline:
190 post_tags = {t.lower() for t in post.tags}
191 if compare(tags, post_tags):
192 filtered_timeline.append(post)
193 else:
194 filtered_timeline = timeline
195
196 if sort:
197 filtered_timeline = natsort.natsorted(filtered_timeline, key=lambda post: post.meta[lang][sort], alg=natsort.ns.F | natsort.ns.IC)
198
199 if date:
200 _now = utils.current_time()
201 filtered_timeline = [p for p in filtered_timeline if date_in_range(utils.html_unescape(date), p.date, now=_now)]
202
203 for post in filtered_timeline[start:stop:step]:
204 if slugs:
205 cont = True
206 for slug in slugs:
207 if slug == post.meta('slug'):
208 cont = False
209
210 if cont:
211 continue
212
213 bp = post.translated_base_path(lang)
214 if os.path.exists(bp) and state:
215 state.document.settings.record_dependencies.add(bp)
216 elif os.path.exists(bp) and self_post:
217 self_post.register_depfile(bp, lang=lang)
218
219 posts += [post]
220
221 template_deps = site.template_system.template_deps(template)
222 if state:
223 # Register template as a dependency (Issue #2391)
224 for d in template_deps:
225 state.document.settings.record_dependencies.add(d)
226 elif self_post:
227 for d in template_deps:
228 self_post.register_depfile(d, lang=lang)
229
230 template_data = {
231 'lang': lang,
232 'posts': posts,
233 # Need to provide str, not TranslatableSetting (Issue #2104)
234 'date_format': site.GLOBAL_CONTEXT.get('date_format')[lang],
235 'post_list_id': post_list_id,
236 'messages': site.MESSAGES,
237 '_link': site.link,
238 }
239 output = site.template_system.render_template(
240 template, None, template_data)
241 return output, template_deps
242
243
244 # Request file name from shortcode (Issue #2412)
245 PostListShortcode.handler.nikola_shortcode_pass_filename = True
246
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nikola/plugins/shortcode/post_list.py b/nikola/plugins/shortcode/post_list.py
--- a/nikola/plugins/shortcode/post_list.py
+++ b/nikola/plugins/shortcode/post_list.py
@@ -145,6 +145,7 @@
if self_post:
self_post.register_depfile("####MAGIC####TIMELINE", lang=lang)
+ self_post.register_depfile("####MAGIC####CONFIG:GLOBAL_CONTEXT", lang=lang)
# If we get strings for start/stop, make them integers
if start is not None:
@@ -227,7 +228,8 @@
for d in template_deps:
self_post.register_depfile(d, lang=lang)
- template_data = {
+ template_data = site.GLOBAL_CONTEXT.copy()
+ template_data.update({
'lang': lang,
'posts': posts,
# Need to provide str, not TranslatableSetting (Issue #2104)
@@ -235,7 +237,7 @@
'post_list_id': post_list_id,
'messages': site.MESSAGES,
'_link': site.link,
- }
+ })
output = site.template_system.render_template(
template, None, template_data)
return output, template_deps
| {"golden_diff": "diff --git a/nikola/plugins/shortcode/post_list.py b/nikola/plugins/shortcode/post_list.py\n--- a/nikola/plugins/shortcode/post_list.py\n+++ b/nikola/plugins/shortcode/post_list.py\n@@ -145,6 +145,7 @@\n \n if self_post:\n self_post.register_depfile(\"####MAGIC####TIMELINE\", lang=lang)\n+ self_post.register_depfile(\"####MAGIC####CONFIG:GLOBAL_CONTEXT\", lang=lang)\n \n # If we get strings for start/stop, make them integers\n if start is not None:\n@@ -227,7 +228,8 @@\n for d in template_deps:\n self_post.register_depfile(d, lang=lang)\n \n- template_data = {\n+ template_data = site.GLOBAL_CONTEXT.copy()\n+ template_data.update({\n 'lang': lang,\n 'posts': posts,\n # Need to provide str, not TranslatableSetting (Issue #2104)\n@@ -235,7 +237,7 @@\n 'post_list_id': post_list_id,\n 'messages': site.MESSAGES,\n '_link': site.link,\n- }\n+ })\n output = site.template_system.render_template(\n template, None, template_data)\n return output, template_deps\n", "issue": "Add full GLOBAL_CONTEXT support to the post_list plugin\n<!--\r\nBefore creating an issue:\r\n* make sure you are using an up-to-date version of Nikola\r\n* search for existing issues that might be related\r\n\r\nDescribe your requested features as precisely as possible. -->\r\n\r\nI've got some data and functions in `GLOBAL_CONTEXT` that I'd like to use in a custom post list template. Right now, it appears that only the locale's date format is passed along to the template context.\r\n\r\nWould you accept a PR to make all of the `GLOBAL_CONTEXT` available to the plugin?\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2013-2020 Udo Spallek, Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Post list shortcode.\"\"\"\n\n\nimport operator\nimport os\nimport uuid\n\nimport natsort\n\nfrom nikola import utils\nfrom nikola.packages.datecond import date_in_range\nfrom nikola.plugin_categories import ShortcodePlugin\n\n\nclass PostListShortcode(ShortcodePlugin):\n \"\"\"Provide a shortcode to create a list of posts.\n\n Post List\n =========\n :Directive Arguments: None.\n :Directive Options: lang, start, stop, reverse, sort, date, tags, categories, sections, slugs, post_type, template, id\n :Directive Content: None.\n\n The posts appearing in the list can be filtered by options.\n *List slicing* is provided with the *start*, *stop* and *reverse* options.\n\n The following not required options are recognized:\n\n ``start`` : integer\n The index of the first post to show.\n A negative value like ``-3`` will show the *last* three posts in the\n post-list.\n Defaults to None.\n\n ``stop`` : integer\n The index of the last post to show.\n A value negative value like ``-1`` will show every post, but not the\n *last* in the post-list.\n Defaults to None.\n\n ``reverse`` : flag\n Reverse the order of the post-list.\n Defaults is to not reverse the order of posts.\n\n ``sort`` : string\n Sort post list by one of each post's attributes, usually ``title`` or a\n custom ``priority``. Defaults to None (chronological sorting).\n\n ``date`` : string\n Show posts that match date range specified by this option. Format:\n\n * comma-separated clauses (AND)\n * clause: attribute comparison_operator value (spaces optional)\n * attribute: year, month, day, hour, month, second, weekday, isoweekday; or empty for full datetime\n * comparison_operator: == != <= >= < >\n * value: integer, 'now', 'today', or dateutil-compatible date input\n\n ``tags`` : string [, string...]\n Filter posts to show only posts having at least one of the ``tags``.\n Defaults to None.\n\n ``require_all_tags`` : flag\n Change tag filter behaviour to show only posts that have all specified ``tags``.\n Defaults to False.\n\n ``categories`` : string [, string...]\n Filter posts to show only posts having one of the ``categories``.\n Defaults to None.\n\n ``sections`` : string [, string...]\n Filter posts to show only posts having one of the ``sections``.\n Defaults to None.\n\n ``slugs`` : string [, string...]\n Filter posts to show only posts having at least one of the ``slugs``.\n Defaults to None.\n\n ``post_type`` (or ``type``) : string\n Show only ``posts``, ``pages`` or ``all``.\n Replaces ``all``. 
Defaults to ``posts``.\n\n ``lang`` : string\n The language of post *titles* and *links*.\n Defaults to default language.\n\n ``template`` : string\n The name of an alternative template to render the post-list.\n Defaults to ``post_list_directive.tmpl``\n\n ``id`` : string\n A manual id for the post list.\n Defaults to a random name composed by 'post_list_' + uuid.uuid4().hex.\n \"\"\"\n\n name = \"post_list\"\n\n def set_site(self, site):\n \"\"\"Set the site.\"\"\"\n super().set_site(site)\n site.register_shortcode('post-list', self.handler)\n\n def handler(self, start=None, stop=None, reverse=False, tags=None, require_all_tags=False, categories=None,\n sections=None, slugs=None, post_type='post', type=False,\n lang=None, template='post_list_directive.tmpl', sort=None,\n id=None, data=None, state=None, site=None, date=None, filename=None, post=None):\n \"\"\"Generate HTML for post-list.\"\"\"\n if lang is None:\n lang = utils.LocaleBorg().current_lang\n if site.invariant: # for testing purposes\n post_list_id = id or 'post_list_' + 'fixedvaluethatisnotauuid'\n else:\n post_list_id = id or 'post_list_' + uuid.uuid4().hex\n\n # Get post from filename if available\n if filename:\n self_post = site.post_per_input_file.get(filename)\n else:\n self_post = None\n\n if self_post:\n self_post.register_depfile(\"####MAGIC####TIMELINE\", lang=lang)\n\n # If we get strings for start/stop, make them integers\n if start is not None:\n start = int(start)\n if stop is not None:\n stop = int(stop)\n\n # Parse tags/categories/sections/slugs (input is strings)\n categories = [c.strip().lower() for c in categories.split(',')] if categories else []\n sections = [s.strip().lower() for s in sections.split(',')] if sections else []\n slugs = [s.strip() for s in slugs.split(',')] if slugs else []\n\n filtered_timeline = []\n posts = []\n step = None if reverse is False else -1\n\n if type is not False:\n post_type = type\n\n if post_type == 'page' or post_type == 'pages':\n timeline = [p for p in site.timeline if not p.use_in_feeds]\n elif post_type == 'all':\n timeline = [p for p in site.timeline]\n else: # post\n timeline = [p for p in site.timeline if p.use_in_feeds]\n\n # self_post should be removed from timeline because this is redundant\n timeline = [p for p in timeline if p.source_path != filename]\n\n if categories:\n timeline = [p for p in timeline if p.meta('category', lang=lang).lower() in categories]\n\n if sections:\n timeline = [p for p in timeline if p.section_name(lang).lower() in sections]\n\n if tags:\n tags = {t.strip().lower() for t in tags.split(',')}\n if require_all_tags:\n compare = set.issubset\n else:\n compare = operator.and_\n for post in timeline:\n post_tags = {t.lower() for t in post.tags}\n if compare(tags, post_tags):\n filtered_timeline.append(post)\n else:\n filtered_timeline = timeline\n\n if sort:\n filtered_timeline = natsort.natsorted(filtered_timeline, key=lambda post: post.meta[lang][sort], alg=natsort.ns.F | natsort.ns.IC)\n\n if date:\n _now = utils.current_time()\n filtered_timeline = [p for p in filtered_timeline if date_in_range(utils.html_unescape(date), p.date, now=_now)]\n\n for post in filtered_timeline[start:stop:step]:\n if slugs:\n cont = True\n for slug in slugs:\n if slug == post.meta('slug'):\n cont = False\n\n if cont:\n continue\n\n bp = post.translated_base_path(lang)\n if os.path.exists(bp) and state:\n state.document.settings.record_dependencies.add(bp)\n elif os.path.exists(bp) and self_post:\n self_post.register_depfile(bp, lang=lang)\n\n posts += 
[post]\n\n template_deps = site.template_system.template_deps(template)\n if state:\n # Register template as a dependency (Issue #2391)\n for d in template_deps:\n state.document.settings.record_dependencies.add(d)\n elif self_post:\n for d in template_deps:\n self_post.register_depfile(d, lang=lang)\n\n template_data = {\n 'lang': lang,\n 'posts': posts,\n # Need to provide str, not TranslatableSetting (Issue #2104)\n 'date_format': site.GLOBAL_CONTEXT.get('date_format')[lang],\n 'post_list_id': post_list_id,\n 'messages': site.MESSAGES,\n '_link': site.link,\n }\n output = site.template_system.render_template(\n template, None, template_data)\n return output, template_deps\n\n\n# Request file name from shortcode (Issue #2412)\nPostListShortcode.handler.nikola_shortcode_pass_filename = True\n", "path": "nikola/plugins/shortcode/post_list.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2013-2020 Udo Spallek, Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Post list shortcode.\"\"\"\n\n\nimport operator\nimport os\nimport uuid\n\nimport natsort\n\nfrom nikola import utils\nfrom nikola.packages.datecond import date_in_range\nfrom nikola.plugin_categories import ShortcodePlugin\n\n\nclass PostListShortcode(ShortcodePlugin):\n \"\"\"Provide a shortcode to create a list of posts.\n\n Post List\n =========\n :Directive Arguments: None.\n :Directive Options: lang, start, stop, reverse, sort, date, tags, categories, sections, slugs, post_type, template, id\n :Directive Content: None.\n\n The posts appearing in the list can be filtered by options.\n *List slicing* is provided with the *start*, *stop* and *reverse* options.\n\n The following not required options are recognized:\n\n ``start`` : integer\n The index of the first post to show.\n A negative value like ``-3`` will show the *last* three posts in the\n post-list.\n Defaults to None.\n\n ``stop`` : integer\n The index of the last post to show.\n A value negative value like ``-1`` will show every post, but not the\n *last* in the post-list.\n Defaults to None.\n\n ``reverse`` : flag\n Reverse the order of the post-list.\n Defaults is to not reverse the order of posts.\n\n ``sort`` : string\n Sort post list by one of each post's attributes, usually ``title`` or a\n custom ``priority``. Defaults to None (chronological sorting).\n\n ``date`` : string\n Show posts that match date range specified by this option. 
Format:\n\n * comma-separated clauses (AND)\n * clause: attribute comparison_operator value (spaces optional)\n * attribute: year, month, day, hour, month, second, weekday, isoweekday; or empty for full datetime\n * comparison_operator: == != <= >= < >\n * value: integer, 'now', 'today', or dateutil-compatible date input\n\n ``tags`` : string [, string...]\n Filter posts to show only posts having at least one of the ``tags``.\n Defaults to None.\n\n ``require_all_tags`` : flag\n Change tag filter behaviour to show only posts that have all specified ``tags``.\n Defaults to False.\n\n ``categories`` : string [, string...]\n Filter posts to show only posts having one of the ``categories``.\n Defaults to None.\n\n ``sections`` : string [, string...]\n Filter posts to show only posts having one of the ``sections``.\n Defaults to None.\n\n ``slugs`` : string [, string...]\n Filter posts to show only posts having at least one of the ``slugs``.\n Defaults to None.\n\n ``post_type`` (or ``type``) : string\n Show only ``posts``, ``pages`` or ``all``.\n Replaces ``all``. Defaults to ``posts``.\n\n ``lang`` : string\n The language of post *titles* and *links*.\n Defaults to default language.\n\n ``template`` : string\n The name of an alternative template to render the post-list.\n Defaults to ``post_list_directive.tmpl``\n\n ``id`` : string\n A manual id for the post list.\n Defaults to a random name composed by 'post_list_' + uuid.uuid4().hex.\n \"\"\"\n\n name = \"post_list\"\n\n def set_site(self, site):\n \"\"\"Set the site.\"\"\"\n super().set_site(site)\n site.register_shortcode('post-list', self.handler)\n\n def handler(self, start=None, stop=None, reverse=False, tags=None, require_all_tags=False, categories=None,\n sections=None, slugs=None, post_type='post', type=False,\n lang=None, template='post_list_directive.tmpl', sort=None,\n id=None, data=None, state=None, site=None, date=None, filename=None, post=None):\n \"\"\"Generate HTML for post-list.\"\"\"\n if lang is None:\n lang = utils.LocaleBorg().current_lang\n if site.invariant: # for testing purposes\n post_list_id = id or 'post_list_' + 'fixedvaluethatisnotauuid'\n else:\n post_list_id = id or 'post_list_' + uuid.uuid4().hex\n\n # Get post from filename if available\n if filename:\n self_post = site.post_per_input_file.get(filename)\n else:\n self_post = None\n\n if self_post:\n self_post.register_depfile(\"####MAGIC####TIMELINE\", lang=lang)\n self_post.register_depfile(\"####MAGIC####CONFIG:GLOBAL_CONTEXT\", lang=lang)\n\n # If we get strings for start/stop, make them integers\n if start is not None:\n start = int(start)\n if stop is not None:\n stop = int(stop)\n\n # Parse tags/categories/sections/slugs (input is strings)\n categories = [c.strip().lower() for c in categories.split(',')] if categories else []\n sections = [s.strip().lower() for s in sections.split(',')] if sections else []\n slugs = [s.strip() for s in slugs.split(',')] if slugs else []\n\n filtered_timeline = []\n posts = []\n step = None if reverse is False else -1\n\n if type is not False:\n post_type = type\n\n if post_type == 'page' or post_type == 'pages':\n timeline = [p for p in site.timeline if not p.use_in_feeds]\n elif post_type == 'all':\n timeline = [p for p in site.timeline]\n else: # post\n timeline = [p for p in site.timeline if p.use_in_feeds]\n\n # self_post should be removed from timeline because this is redundant\n timeline = [p for p in timeline if p.source_path != filename]\n\n if categories:\n timeline = [p for p in timeline if 
p.meta('category', lang=lang).lower() in categories]\n\n if sections:\n timeline = [p for p in timeline if p.section_name(lang).lower() in sections]\n\n if tags:\n tags = {t.strip().lower() for t in tags.split(',')}\n if require_all_tags:\n compare = set.issubset\n else:\n compare = operator.and_\n for post in timeline:\n post_tags = {t.lower() for t in post.tags}\n if compare(tags, post_tags):\n filtered_timeline.append(post)\n else:\n filtered_timeline = timeline\n\n if sort:\n filtered_timeline = natsort.natsorted(filtered_timeline, key=lambda post: post.meta[lang][sort], alg=natsort.ns.F | natsort.ns.IC)\n\n if date:\n _now = utils.current_time()\n filtered_timeline = [p for p in filtered_timeline if date_in_range(utils.html_unescape(date), p.date, now=_now)]\n\n for post in filtered_timeline[start:stop:step]:\n if slugs:\n cont = True\n for slug in slugs:\n if slug == post.meta('slug'):\n cont = False\n\n if cont:\n continue\n\n bp = post.translated_base_path(lang)\n if os.path.exists(bp) and state:\n state.document.settings.record_dependencies.add(bp)\n elif os.path.exists(bp) and self_post:\n self_post.register_depfile(bp, lang=lang)\n\n posts += [post]\n\n template_deps = site.template_system.template_deps(template)\n if state:\n # Register template as a dependency (Issue #2391)\n for d in template_deps:\n state.document.settings.record_dependencies.add(d)\n elif self_post:\n for d in template_deps:\n self_post.register_depfile(d, lang=lang)\n\n template_data = site.GLOBAL_CONTEXT.copy()\n template_data.update({\n 'lang': lang,\n 'posts': posts,\n # Need to provide str, not TranslatableSetting (Issue #2104)\n 'date_format': site.GLOBAL_CONTEXT.get('date_format')[lang],\n 'post_list_id': post_list_id,\n 'messages': site.MESSAGES,\n '_link': site.link,\n })\n output = site.template_system.render_template(\n template, None, template_data)\n return output, template_deps\n\n\n# Request file name from shortcode (Issue #2412)\nPostListShortcode.handler.nikola_shortcode_pass_filename = True\n", "path": "nikola/plugins/shortcode/post_list.py"}]} | 3,083 | 288 |
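
The golden diff in the entry above boils down to one dict-merge rule: start from a copy of the site-wide GLOBAL_CONTEXT so that the shortcode's own keys win on any collision. Below is a minimal standalone sketch of that pattern; the names `global_context`, `posts` and `build_template_data` are illustrative stand-ins, not Nikola APIs.

```python
# Merge order used by the patch above: global defaults first, local keys second,
# so shortcode-specific values override site-wide ones on key collisions.
def build_template_data(global_context, posts, lang="en"):
    template_data = global_context.copy()   # site-wide context (GLOBAL_CONTEXT)
    template_data.update({                  # shortcode-specific keys take priority
        "lang": lang,
        "posts": posts,
    })
    return template_data


global_context = {"blog_title": "Demo Site", "lang": "de"}
data = build_template_data(global_context, posts=["post-1", "post-2"])
assert data["blog_title"] == "Demo Site"   # inherited from the global context
assert data["lang"] == "en"                # overridden by the local key
```

The extra `register_depfile("####MAGIC####CONFIG:GLOBAL_CONTEXT", ...)` call in the patch appears to be what ties cache invalidation to configuration changes, so cached post lists rebuild when the global context is edited.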
gh_patches_debug_25878 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-7567 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
primanti_bros_us: switch to YextSpider as Where2GetIt is seemingly no longer used
The store locator at `https://restaurants.primantibros.com/search` now uses Yext APIs for querying store locations, not Where2GetIt.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/primanti_bros_us.py`
Content:
```
1 from locations.categories import Extras, apply_yes_no
2 from locations.hours import DAYS_FULL, OpeningHours
3 from locations.storefinders.where2getit import Where2GetItSpider
4
5
6 class PrimantiBrosUSSpider(Where2GetItSpider):
7 name = "primanti_bros_us"
8 item_attributes = {"brand": "Primanti Bros", "brand_wikidata": "Q7243049"}
9 api_brand_name = "primantibros"
10 api_key = "7CDBB1A2-4AC6-11EB-932C-8917919C4603"
11
12 def parse_item(self, item, location):
13 item["ref"] = location["uid"]
14 item["street_address"] = ", ".join(filter(None, [location.get("address1"), location.get("address2")]))
15 item["website"] = location.get("menuurl")
16 item["opening_hours"] = OpeningHours()
17 hours_string = ""
18 for day_name in DAYS_FULL:
19 hours_string = f"{hours_string} {day_name}: " + location["{}hours".format(day_name.lower())]
20 item["opening_hours"].add_ranges_from_string(hours_string)
21 apply_yes_no(Extras.DRIVE_THROUGH, item, location["has_drive_through"] == "1", False)
22 yield item
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/primanti_bros_us.py b/locations/spiders/primanti_bros_us.py
--- a/locations/spiders/primanti_bros_us.py
+++ b/locations/spiders/primanti_bros_us.py
@@ -1,22 +1,18 @@
-from locations.categories import Extras, apply_yes_no
-from locations.hours import DAYS_FULL, OpeningHours
-from locations.storefinders.where2getit import Where2GetItSpider
+from locations.categories import Categories
+from locations.storefinders.yext import YextSpider
-class PrimantiBrosUSSpider(Where2GetItSpider):
+class PrimantiBrosUSSpider(YextSpider):
name = "primanti_bros_us"
- item_attributes = {"brand": "Primanti Bros", "brand_wikidata": "Q7243049"}
- api_brand_name = "primantibros"
- api_key = "7CDBB1A2-4AC6-11EB-932C-8917919C4603"
+ item_attributes = {"brand": "Primanti Bros", "brand_wikidata": "Q7243049", "extras": Categories.RESTAURANT.value}
+ api_key = "7515c25fc685bbdd7c5975b6573c6912"
+ api_version = "20220511"
def parse_item(self, item, location):
- item["ref"] = location["uid"]
- item["street_address"] = ", ".join(filter(None, [location.get("address1"), location.get("address2")]))
- item["website"] = location.get("menuurl")
- item["opening_hours"] = OpeningHours()
- hours_string = ""
- for day_name in DAYS_FULL:
- hours_string = f"{hours_string} {day_name}: " + location["{}hours".format(day_name.lower())]
- item["opening_hours"].add_ranges_from_string(hours_string)
- apply_yes_no(Extras.DRIVE_THROUGH, item, location["has_drive_through"] == "1", False)
+ if "test-location" in item["ref"]:
+ return
+ item["ref"] = location.get("c_pagesURL")
+ item["name"] = location.get("c_searchName")
+ item["website"] = location.get("c_pagesURL")
+ item.pop("twitter", None)
yield item
| {"golden_diff": "diff --git a/locations/spiders/primanti_bros_us.py b/locations/spiders/primanti_bros_us.py\n--- a/locations/spiders/primanti_bros_us.py\n+++ b/locations/spiders/primanti_bros_us.py\n@@ -1,22 +1,18 @@\n-from locations.categories import Extras, apply_yes_no\n-from locations.hours import DAYS_FULL, OpeningHours\n-from locations.storefinders.where2getit import Where2GetItSpider\n+from locations.categories import Categories\n+from locations.storefinders.yext import YextSpider\n \n \n-class PrimantiBrosUSSpider(Where2GetItSpider):\n+class PrimantiBrosUSSpider(YextSpider):\n name = \"primanti_bros_us\"\n- item_attributes = {\"brand\": \"Primanti Bros\", \"brand_wikidata\": \"Q7243049\"}\n- api_brand_name = \"primantibros\"\n- api_key = \"7CDBB1A2-4AC6-11EB-932C-8917919C4603\"\n+ item_attributes = {\"brand\": \"Primanti Bros\", \"brand_wikidata\": \"Q7243049\", \"extras\": Categories.RESTAURANT.value}\n+ api_key = \"7515c25fc685bbdd7c5975b6573c6912\"\n+ api_version = \"20220511\"\n \n def parse_item(self, item, location):\n- item[\"ref\"] = location[\"uid\"]\n- item[\"street_address\"] = \", \".join(filter(None, [location.get(\"address1\"), location.get(\"address2\")]))\n- item[\"website\"] = location.get(\"menuurl\")\n- item[\"opening_hours\"] = OpeningHours()\n- hours_string = \"\"\n- for day_name in DAYS_FULL:\n- hours_string = f\"{hours_string} {day_name}: \" + location[\"{}hours\".format(day_name.lower())]\n- item[\"opening_hours\"].add_ranges_from_string(hours_string)\n- apply_yes_no(Extras.DRIVE_THROUGH, item, location[\"has_drive_through\"] == \"1\", False)\n+ if \"test-location\" in item[\"ref\"]:\n+ return\n+ item[\"ref\"] = location.get(\"c_pagesURL\")\n+ item[\"name\"] = location.get(\"c_searchName\")\n+ item[\"website\"] = location.get(\"c_pagesURL\")\n+ item.pop(\"twitter\", None)\n yield item\n", "issue": "primanti_bros_us: switch to YextSpider as Where2GetIt seemingly no longer used\nThe store locator at `https://restaurants.primantibros.com/search` now uses Yext APIs for querying store locations, not Where2GetIt.\n", "before_files": [{"content": "from locations.categories import Extras, apply_yes_no\nfrom locations.hours import DAYS_FULL, OpeningHours\nfrom locations.storefinders.where2getit import Where2GetItSpider\n\n\nclass PrimantiBrosUSSpider(Where2GetItSpider):\n name = \"primanti_bros_us\"\n item_attributes = {\"brand\": \"Primanti Bros\", \"brand_wikidata\": \"Q7243049\"}\n api_brand_name = \"primantibros\"\n api_key = \"7CDBB1A2-4AC6-11EB-932C-8917919C4603\"\n\n def parse_item(self, item, location):\n item[\"ref\"] = location[\"uid\"]\n item[\"street_address\"] = \", \".join(filter(None, [location.get(\"address1\"), location.get(\"address2\")]))\n item[\"website\"] = location.get(\"menuurl\")\n item[\"opening_hours\"] = OpeningHours()\n hours_string = \"\"\n for day_name in DAYS_FULL:\n hours_string = f\"{hours_string} {day_name}: \" + location[\"{}hours\".format(day_name.lower())]\n item[\"opening_hours\"].add_ranges_from_string(hours_string)\n apply_yes_no(Extras.DRIVE_THROUGH, item, location[\"has_drive_through\"] == \"1\", False)\n yield item\n", "path": "locations/spiders/primanti_bros_us.py"}], "after_files": [{"content": "from locations.categories import Categories\nfrom locations.storefinders.yext import YextSpider\n\n\nclass PrimantiBrosUSSpider(YextSpider):\n name = \"primanti_bros_us\"\n item_attributes = {\"brand\": \"Primanti Bros\", \"brand_wikidata\": \"Q7243049\", \"extras\": Categories.RESTAURANT.value}\n api_key = 
\"7515c25fc685bbdd7c5975b6573c6912\"\n api_version = \"20220511\"\n\n def parse_item(self, item, location):\n if \"test-location\" in item[\"ref\"]:\n return\n item[\"ref\"] = location.get(\"c_pagesURL\")\n item[\"name\"] = location.get(\"c_searchName\")\n item[\"website\"] = location.get(\"c_pagesURL\")\n item.pop(\"twitter\", None)\n yield item\n", "path": "locations/spiders/primanti_bros_us.py"}]} | 648 | 564 |
gh_patches_debug_20381 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-663 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Track when an exception occurs in a Celery task
Similar to how we do this in other libraries
`tracked_request.tag("error", "true")`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/scout_apm/celery.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import datetime as dt
5
6 from celery.signals import before_task_publish, task_postrun, task_prerun
7
8 import scout_apm.core
9 from scout_apm.compat import datetime_to_timestamp
10 from scout_apm.core.config import scout_config
11 from scout_apm.core.tracked_request import TrackedRequest
12
13
14 def before_task_publish_callback(headers=None, properties=None, **kwargs):
15 if "scout_task_start" not in headers:
16 headers["scout_task_start"] = datetime_to_timestamp(dt.datetime.utcnow())
17
18
19 def task_prerun_callback(task=None, **kwargs):
20 tracked_request = TrackedRequest.instance()
21 tracked_request.is_real_request = True
22
23 start = getattr(task.request, "scout_task_start", None)
24 if start is not None:
25 now = datetime_to_timestamp(dt.datetime.utcnow())
26 try:
27 queue_time = now - start
28 except TypeError:
29 pass
30 else:
31 tracked_request.tag("queue_time", queue_time)
32
33 task_id = getattr(task.request, "id", None)
34 if task_id:
35 tracked_request.tag("task_id", task_id)
36 parent_task_id = getattr(task.request, "parent_id", None)
37 if parent_task_id:
38 tracked_request.tag("parent_task_id", parent_task_id)
39
40 delivery_info = task.request.delivery_info
41 tracked_request.tag("is_eager", delivery_info.get("is_eager", False))
42 tracked_request.tag("exchange", delivery_info.get("exchange", "unknown"))
43 tracked_request.tag("priority", delivery_info.get("priority", "unknown"))
44 tracked_request.tag("routing_key", delivery_info.get("routing_key", "unknown"))
45 tracked_request.tag("queue", delivery_info.get("queue", "unknown"))
46
47 tracked_request.start_span(operation=("Job/" + task.name))
48
49
50 def task_postrun_callback(task=None, **kwargs):
51 tracked_request = TrackedRequest.instance()
52 tracked_request.stop_span()
53
54
55 def install(app=None):
56 if app is not None:
57 copy_configuration(app)
58
59 installed = scout_apm.core.install()
60 if not installed:
61 return
62
63 before_task_publish.connect(before_task_publish_callback)
64 task_prerun.connect(task_prerun_callback)
65 task_postrun.connect(task_postrun_callback)
66
67
68 def copy_configuration(app):
69 prefix = "scout_"
70 prefix_len = len(prefix)
71
72 to_set = {}
73 for key, value in app.conf.items():
74 key_lower = key.lower()
75 if key_lower.startswith(prefix) and len(key_lower) > prefix_len:
76 scout_key = key_lower[prefix_len:]
77 to_set[scout_key] = value
78
79 scout_config.set(**to_set)
80
81
82 def uninstall():
83 before_task_publish.disconnect(before_task_publish_callback)
84 task_prerun.disconnect(task_prerun_callback)
85 task_postrun.disconnect(task_postrun_callback)
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/scout_apm/celery.py b/src/scout_apm/celery.py
--- a/src/scout_apm/celery.py
+++ b/src/scout_apm/celery.py
@@ -3,7 +3,7 @@
import datetime as dt
-from celery.signals import before_task_publish, task_postrun, task_prerun
+from celery.signals import before_task_publish, task_failure, task_postrun, task_prerun
import scout_apm.core
from scout_apm.compat import datetime_to_timestamp
@@ -52,6 +52,11 @@
tracked_request.stop_span()
+def task_failure_callback(task_id=None, **kwargs):
+ tracked_request = TrackedRequest.instance()
+ tracked_request.tag("error", "true")
+
+
def install(app=None):
if app is not None:
copy_configuration(app)
@@ -62,6 +67,7 @@
before_task_publish.connect(before_task_publish_callback)
task_prerun.connect(task_prerun_callback)
+ task_failure.connect(task_failure_callback)
task_postrun.connect(task_postrun_callback)
| {"golden_diff": "diff --git a/src/scout_apm/celery.py b/src/scout_apm/celery.py\n--- a/src/scout_apm/celery.py\n+++ b/src/scout_apm/celery.py\n@@ -3,7 +3,7 @@\n \n import datetime as dt\n \n-from celery.signals import before_task_publish, task_postrun, task_prerun\n+from celery.signals import before_task_publish, task_failure, task_postrun, task_prerun\n \n import scout_apm.core\n from scout_apm.compat import datetime_to_timestamp\n@@ -52,6 +52,11 @@\n tracked_request.stop_span()\n \n \n+def task_failure_callback(task_id=None, **kwargs):\n+ tracked_request = TrackedRequest.instance()\n+ tracked_request.tag(\"error\", \"true\")\n+\n+\n def install(app=None):\n if app is not None:\n copy_configuration(app)\n@@ -62,6 +67,7 @@\n \n before_task_publish.connect(before_task_publish_callback)\n task_prerun.connect(task_prerun_callback)\n+ task_failure.connect(task_failure_callback)\n task_postrun.connect(task_postrun_callback)\n", "issue": "Track when an exception occurs in a Celery task\nSimilar to how we do this in other libraries\r\n`tracked_request.tag(\"error\", \"true\")`\r\n\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport datetime as dt\n\nfrom celery.signals import before_task_publish, task_postrun, task_prerun\n\nimport scout_apm.core\nfrom scout_apm.compat import datetime_to_timestamp\nfrom scout_apm.core.config import scout_config\nfrom scout_apm.core.tracked_request import TrackedRequest\n\n\ndef before_task_publish_callback(headers=None, properties=None, **kwargs):\n if \"scout_task_start\" not in headers:\n headers[\"scout_task_start\"] = datetime_to_timestamp(dt.datetime.utcnow())\n\n\ndef task_prerun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n\n start = getattr(task.request, \"scout_task_start\", None)\n if start is not None:\n now = datetime_to_timestamp(dt.datetime.utcnow())\n try:\n queue_time = now - start\n except TypeError:\n pass\n else:\n tracked_request.tag(\"queue_time\", queue_time)\n\n task_id = getattr(task.request, \"id\", None)\n if task_id:\n tracked_request.tag(\"task_id\", task_id)\n parent_task_id = getattr(task.request, \"parent_id\", None)\n if parent_task_id:\n tracked_request.tag(\"parent_task_id\", parent_task_id)\n\n delivery_info = task.request.delivery_info\n tracked_request.tag(\"is_eager\", delivery_info.get(\"is_eager\", False))\n tracked_request.tag(\"exchange\", delivery_info.get(\"exchange\", \"unknown\"))\n tracked_request.tag(\"priority\", delivery_info.get(\"priority\", \"unknown\"))\n tracked_request.tag(\"routing_key\", delivery_info.get(\"routing_key\", \"unknown\"))\n tracked_request.tag(\"queue\", delivery_info.get(\"queue\", \"unknown\"))\n\n tracked_request.start_span(operation=(\"Job/\" + task.name))\n\n\ndef task_postrun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.stop_span()\n\n\ndef install(app=None):\n if app is not None:\n copy_configuration(app)\n\n installed = scout_apm.core.install()\n if not installed:\n return\n\n before_task_publish.connect(before_task_publish_callback)\n task_prerun.connect(task_prerun_callback)\n task_postrun.connect(task_postrun_callback)\n\n\ndef copy_configuration(app):\n prefix = \"scout_\"\n prefix_len = len(prefix)\n\n to_set = {}\n for key, value in app.conf.items():\n key_lower = key.lower()\n if key_lower.startswith(prefix) and len(key_lower) > prefix_len:\n scout_key = 
key_lower[prefix_len:]\n to_set[scout_key] = value\n\n scout_config.set(**to_set)\n\n\ndef uninstall():\n before_task_publish.disconnect(before_task_publish_callback)\n task_prerun.disconnect(task_prerun_callback)\n task_postrun.disconnect(task_postrun_callback)\n", "path": "src/scout_apm/celery.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport datetime as dt\n\nfrom celery.signals import before_task_publish, task_failure, task_postrun, task_prerun\n\nimport scout_apm.core\nfrom scout_apm.compat import datetime_to_timestamp\nfrom scout_apm.core.config import scout_config\nfrom scout_apm.core.tracked_request import TrackedRequest\n\n\ndef before_task_publish_callback(headers=None, properties=None, **kwargs):\n if \"scout_task_start\" not in headers:\n headers[\"scout_task_start\"] = datetime_to_timestamp(dt.datetime.utcnow())\n\n\ndef task_prerun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n\n start = getattr(task.request, \"scout_task_start\", None)\n if start is not None:\n now = datetime_to_timestamp(dt.datetime.utcnow())\n try:\n queue_time = now - start\n except TypeError:\n pass\n else:\n tracked_request.tag(\"queue_time\", queue_time)\n\n task_id = getattr(task.request, \"id\", None)\n if task_id:\n tracked_request.tag(\"task_id\", task_id)\n parent_task_id = getattr(task.request, \"parent_id\", None)\n if parent_task_id:\n tracked_request.tag(\"parent_task_id\", parent_task_id)\n\n delivery_info = task.request.delivery_info\n tracked_request.tag(\"is_eager\", delivery_info.get(\"is_eager\", False))\n tracked_request.tag(\"exchange\", delivery_info.get(\"exchange\", \"unknown\"))\n tracked_request.tag(\"priority\", delivery_info.get(\"priority\", \"unknown\"))\n tracked_request.tag(\"routing_key\", delivery_info.get(\"routing_key\", \"unknown\"))\n tracked_request.tag(\"queue\", delivery_info.get(\"queue\", \"unknown\"))\n\n tracked_request.start_span(operation=(\"Job/\" + task.name))\n\n\ndef task_postrun_callback(task=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.stop_span()\n\n\ndef task_failure_callback(task_id=None, **kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.tag(\"error\", \"true\")\n\n\ndef install(app=None):\n if app is not None:\n copy_configuration(app)\n\n installed = scout_apm.core.install()\n if not installed:\n return\n\n before_task_publish.connect(before_task_publish_callback)\n task_prerun.connect(task_prerun_callback)\n task_failure.connect(task_failure_callback)\n task_postrun.connect(task_postrun_callback)\n\n\ndef copy_configuration(app):\n prefix = \"scout_\"\n prefix_len = len(prefix)\n\n to_set = {}\n for key, value in app.conf.items():\n key_lower = key.lower()\n if key_lower.startswith(prefix) and len(key_lower) > prefix_len:\n scout_key = key_lower[prefix_len:]\n to_set[scout_key] = value\n\n scout_config.set(**to_set)\n\n\ndef uninstall():\n before_task_publish.disconnect(before_task_publish_callback)\n task_prerun.disconnect(task_prerun_callback)\n task_postrun.disconnect(task_postrun_callback)\n", "path": "src/scout_apm/celery.py"}]} | 1,092 | 248 |
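
The scout_apm patch above is a straightforward use of Celery's `task_failure` signal: connect a callback at install time, tag the tracked request inside it, and disconnect on uninstall. Here is a minimal self-contained sketch of the signal hookup, with a plain callback standing in for `TrackedRequest.tag("error", "true")`.

```python
# Minimal sketch of the signal wiring added in the patch above. The callback
# body is a stand-in for tagging Scout's TrackedRequest with error=true.
from celery import Celery
from celery.signals import task_failure

app = Celery("demo", broker="memory://")


@task_failure.connect
def on_task_failure(sender=None, task_id=None, exception=None, **kwargs):
    # The real integration calls TrackedRequest.instance().tag("error", "true") here.
    print(f"task {task_id} ({sender}) failed: {exception!r}")


@app.task
def flaky():
    raise ValueError("expected failure")
```

Connecting via the decorator is equivalent to the explicit `task_failure.connect(task_failure_callback)` call in the patch; the signal delivers the failing task's id and exception, which is all the callback needs to mark the request as errored.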
gh_patches_debug_29666 | rasdani/github-patches | git_diff | cupy__cupy-7597 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`pip install cupy-wheel` - "not a valid wheel filename"
### Description
When I try to `pip install cupy-wheel` I get the error `cupy_wheel<...>.whl is not a valid wheel filename`.
### To Reproduce
```bash
pip install cupy-wheel
```
OS: Windows 10
Python: 3.8.9
pip: 23.1.2
CUDA: CUDA 11.7
### Installation
None
### Environment
Unable to install using `cupy-wheel`. Can install using `pip install cupy-cuda11x`.
### Additional Information
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `install/universal_pkg/setup.py`
Content:
```
1 import ctypes
2 import pkg_resources
3 import os
4 import sys
5 from typing import Dict, List, Optional
6
7 from setuptools import setup
8
9
10 VERSION = '13.0.0a1'
11
12 # List of packages supported by this version of CuPy.
13 PACKAGES = [
14 'cupy-cuda102',
15 'cupy-cuda110',
16 'cupy-cuda111',
17 'cupy-cuda11x',
18 'cupy-cuda12x',
19 'cupy-rocm-4-3',
20 'cupy-rocm-5-0',
21 ]
22
23 # List of packages NOT supported by this version of CuPy.
24 PACKAGES_OUTDATED = [
25 'cupy-cuda80',
26 'cupy-cuda90',
27 'cupy-cuda91',
28 'cupy-cuda92',
29 'cupy-cuda100',
30 'cupy-cuda101',
31 'cupy-cuda112',
32 'cupy-cuda113',
33 'cupy-cuda114',
34 'cupy-cuda115',
35 'cupy-cuda116',
36 'cupy-cuda117',
37 'cupy-rocm-4-0',
38 'cupy-rocm-4-2',
39 ]
40
41 # List of sdist packages.
42 PACKAGES_SDIST = [
43 'cupy',
44 ]
45
46
47 class AutoDetectionFailed(Exception):
48 def __str__(self) -> str:
49 return f'''
50 ============================================================
51 {super().__str__()}
52 ============================================================
53 '''
54
55
56 def _log(msg: str) -> None:
57 sys.stdout.write(f'[cupy-wheel] {msg}\n')
58 sys.stdout.flush()
59
60
61 def _get_version_from_library(
62 libnames: List[str],
63 funcname: str,
64 nvrtc: bool = False,
65 ) -> Optional[int]:
66 """Returns the library version from list of candidate libraries."""
67
68 for libname in libnames:
69 try:
70 _log(f'Looking for library: {libname}')
71 runtime_so = ctypes.CDLL(libname)
72 break
73 except Exception as e:
74 _log(f'Failed to open {libname}: {e}')
75 else:
76 _log('No more candidate library to find')
77 return None
78
79 func = getattr(runtime_so, funcname, None)
80 if func is None:
81 raise AutoDetectionFailed(
82 f'{libname}: {func} could not be found')
83 func.restype = ctypes.c_int
84
85 if nvrtc:
86 # nvrtcVersion
87 func.argtypes = [
88 ctypes.POINTER(ctypes.c_int),
89 ctypes.POINTER(ctypes.c_int),
90 ]
91 major = ctypes.c_int()
92 minor = ctypes.c_int()
93 retval = func(major, minor)
94 version = major.value * 1000 + minor.value * 10
95 else:
96 # cudaRuntimeGetVersion
97 func.argtypes = [
98 ctypes.POINTER(ctypes.c_int),
99 ]
100 version_ref = ctypes.c_int()
101 retval = func(version_ref)
102 version = version_ref.value
103
104 if retval != 0: # NVRTC_SUCCESS or cudaSuccess
105 raise AutoDetectionFailed(
106 f'{libname}: {func} returned error: {retval}')
107 _log(f'Detected version: {version}')
108 return version
109
110
111 def _setup_win32_dll_directory() -> None:
112 if not hasattr(os, 'add_dll_directory'):
113 # Python 3.7 or earlier.
114 return
115 cuda_path = os.environ.get('CUDA_PATH', None)
116 if cuda_path is None:
117 _log('CUDA_PATH is not set.'
118 'cupy-wheel may not be able to discover NVRTC to probe version')
119 return
120 os.add_dll_directory(os.path.join(cuda_path, 'bin')) # type: ignore[attr-defined] # NOQA
121
122
123 def _get_cuda_version() -> Optional[int]:
124 """Returns the detected CUDA version or None."""
125
126 if sys.platform == 'linux':
127 libnames = [
128 'libnvrtc.so.12',
129 'libnvrtc.so.11.2',
130 'libnvrtc.so.11.1',
131 'libnvrtc.so.11.0',
132 'libnvrtc.so.10.2',
133 ]
134 elif sys.platform == 'win32':
135 libnames = [
136 'nvrtc64_120_0.dll',
137 'nvrtc64_112_0.dll',
138 'nvrtc64_111_0.dll',
139 'nvrtc64_110_0.dll',
140 'nvrtc64_102_0.dll',
141 ]
142 _setup_win32_dll_directory()
143 else:
144 _log(f'CUDA detection unsupported on platform: {sys.platform}')
145 return None
146 _log(f'Trying to detect CUDA version from libraries: {libnames}')
147 version = _get_version_from_library(libnames, 'nvrtcVersion', True)
148 return version
149
150
151 def _get_rocm_version() -> Optional[int]:
152 """Returns the detected ROCm version or None."""
153 if sys.platform == 'linux':
154 libnames = ['libamdhip64.so']
155 else:
156 _log(f'ROCm detection unsupported on platform: {sys.platform}')
157 return None
158 version = _get_version_from_library(libnames, 'hipRuntimeGetVersion')
159 return version
160
161
162 def _find_installed_packages() -> List[str]:
163 """Returns the list of CuPy packages installed in the environment."""
164
165 found = []
166 for pkg in (PACKAGES + PACKAGES_OUTDATED + PACKAGES_SDIST):
167 try:
168 pkg_resources.get_distribution(pkg)
169 found.append(pkg)
170 except pkg_resources.DistributionNotFound:
171 pass
172 return found
173
174
175 def _cuda_version_to_package(ver: int) -> str:
176 if ver < 10020:
177 raise AutoDetectionFailed(
178 f'Your CUDA version ({ver}) is too old.')
179 elif ver < 11000:
180 # CUDA 10.2
181 suffix = '102'
182 elif ver < 11010:
183 # CUDA 11.0
184 suffix = '110'
185 elif ver < 11020:
186 # CUDA 11.1
187 suffix = '111'
188 elif ver < 12000:
189 # CUDA 11.2 ~ 11.x
190 suffix = '11x'
191 elif ver < 13000:
192 # CUDA 12.x
193 suffix = '12x'
194 else:
195 raise AutoDetectionFailed(
196 f'Your CUDA version ({ver}) is too new.')
197 return f'cupy-cuda{suffix}'
198
199
200 def _rocm_version_to_package(ver: int) -> str:
201 """
202 ROCm 4.0.x = 3212
203 ROCm 4.1.x = 3241
204 ROCm 4.2.0 = 3275
205 ROCm 4.3.0 = 40321300
206 ROCm 4.3.1 = 40321331
207 ROCm 4.5.0 = 40421401
208 ROCm 4.5.1 = 40421432
209 ROCm 5.0.0 = 50013601
210 ROCm 5.1.0 = 50120531
211 """
212 if 4_03_00000 <= ver < 4_04_00000:
213 # ROCm 4.3
214 suffix = '4-3'
215 elif 5_00_00000 <= ver < 5_01_00000:
216 # ROCm 5.0
217 suffix = '5-0'
218 else:
219 raise AutoDetectionFailed(
220 f'Your ROCm version ({ver}) is unsupported.')
221 return f'cupy-rocm-{suffix}'
222
223
224 def infer_best_package() -> str:
225 """Returns the appropriate CuPy wheel package name for the environment."""
226
227 # Find the existing CuPy wheel installation.
228 installed = _find_installed_packages()
229 if 1 < len(installed):
230 raise AutoDetectionFailed(
231 'You have multiple CuPy packages installed: \n'
232 f' {installed}\n'
233 'Please uninstall all of them first, then try reinstalling.')
234
235 elif 1 == len(installed):
236 if installed[0] in PACKAGES_SDIST:
237 raise AutoDetectionFailed(
238 'You already have CuPy installed via source'
239 ' (pip install cupy).')
240 if installed[0] in PACKAGES_OUTDATED:
241 raise AutoDetectionFailed(
242 f'You have CuPy package "{installed[0]}" installed, but the'
243 f' package is not available for version {VERSION}.\n'
244 'Hint: cupy-cuda{112~117} has been merged to cupy-cuda11x in '
245 'CuPy v11. Uninstall the package and try again.')
246 return installed[0]
247
248 # Try CUDA.
249 version = _get_cuda_version()
250 if version is not None:
251 return _cuda_version_to_package(version)
252
253 # Try ROCm.
254 version = _get_rocm_version()
255 if version is not None:
256 return _rocm_version_to_package(version)
257
258 raise AutoDetectionFailed(
259 'Unable to detect NVIDIA CUDA or AMD ROCm installation.')
260
261
262 def _get_cmdclass(tag: str) -> Dict[str, type]:
263 try:
264 import wheel.bdist_wheel
265 except ModuleNotFoundError:
266 return {}
267
268 class bdist_wheel_with_tag(wheel.bdist_wheel.bdist_wheel): # type: ignore[misc] # NOQA
269 def initialize_options(self) -> None:
270 super().initialize_options()
271 self.build_number = f'0_{tag}'
272
273 return {"bdist_wheel": bdist_wheel_with_tag}
274
275
276 #
277 # Entrypoint
278 #
279
280 def main() -> None:
281 if os.environ.get('CUPY_UNIVERSAL_PKG_BUILD', None) is None:
282 package = infer_best_package()
283 requires = f'{package}=={VERSION}'
284 _log(f'Installing package: {requires}')
285 install_requires = [requires]
286 tag = package
287 else:
288 _log('Building cupy-wheel package for release.')
289 install_requires = []
290 tag = '0'
291
292 setup(
293 name='cupy-wheel',
294 version=f'{VERSION}',
295 install_requires=install_requires,
296 cmdclass=_get_cmdclass(tag),
297 )
298
299
300 if __name__ == '__main__':
301 main()
302
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/install/universal_pkg/setup.py b/install/universal_pkg/setup.py
--- a/install/universal_pkg/setup.py
+++ b/install/universal_pkg/setup.py
@@ -2,12 +2,13 @@
import pkg_resources
import os
import sys
-from typing import Dict, List, Optional
+from typing import List, Optional
from setuptools import setup
VERSION = '13.0.0a1'
+META_VERSION = VERSION
# List of packages supported by this version of CuPy.
PACKAGES = [
@@ -259,20 +260,6 @@
'Unable to detect NVIDIA CUDA or AMD ROCm installation.')
-def _get_cmdclass(tag: str) -> Dict[str, type]:
- try:
- import wheel.bdist_wheel
- except ModuleNotFoundError:
- return {}
-
- class bdist_wheel_with_tag(wheel.bdist_wheel.bdist_wheel): # type: ignore[misc] # NOQA
- def initialize_options(self) -> None:
- super().initialize_options()
- self.build_number = f'0_{tag}'
-
- return {"bdist_wheel": bdist_wheel_with_tag}
-
-
#
# Entrypoint
#
@@ -283,17 +270,14 @@
requires = f'{package}=={VERSION}'
_log(f'Installing package: {requires}')
install_requires = [requires]
- tag = package
else:
_log('Building cupy-wheel package for release.')
install_requires = []
- tag = '0'
setup(
name='cupy-wheel',
- version=f'{VERSION}',
+ version=META_VERSION,
install_requires=install_requires,
- cmdclass=_get_cmdclass(tag),
)
| {"golden_diff": "diff --git a/install/universal_pkg/setup.py b/install/universal_pkg/setup.py\n--- a/install/universal_pkg/setup.py\n+++ b/install/universal_pkg/setup.py\n@@ -2,12 +2,13 @@\n import pkg_resources\n import os\n import sys\n-from typing import Dict, List, Optional\n+from typing import List, Optional\n \n from setuptools import setup\n \n \n VERSION = '13.0.0a1'\n+META_VERSION = VERSION\n \n # List of packages supported by this version of CuPy.\n PACKAGES = [\n@@ -259,20 +260,6 @@\n 'Unable to detect NVIDIA CUDA or AMD ROCm installation.')\n \n \n-def _get_cmdclass(tag: str) -> Dict[str, type]:\n- try:\n- import wheel.bdist_wheel\n- except ModuleNotFoundError:\n- return {}\n-\n- class bdist_wheel_with_tag(wheel.bdist_wheel.bdist_wheel): # type: ignore[misc] # NOQA\n- def initialize_options(self) -> None:\n- super().initialize_options()\n- self.build_number = f'0_{tag}'\n-\n- return {\"bdist_wheel\": bdist_wheel_with_tag}\n-\n-\n #\n # Entrypoint\n #\n@@ -283,17 +270,14 @@\n requires = f'{package}=={VERSION}'\n _log(f'Installing package: {requires}')\n install_requires = [requires]\n- tag = package\n else:\n _log('Building cupy-wheel package for release.')\n install_requires = []\n- tag = '0'\n \n setup(\n name='cupy-wheel',\n- version=f'{VERSION}',\n+ version=META_VERSION,\n install_requires=install_requires,\n- cmdclass=_get_cmdclass(tag),\n )\n", "issue": "`pip install cupy-wheel` - \"not a valid wheel filename\"\n### Description\n\nWhen I try to `pip install cupy-wheel` I get the error `cupy_wheel<...>.whl is not a valid wheel filename`.\n\n### To Reproduce\n\n```bash\r\npip install cupy-wheel\r\n```\r\nOS: Windows 10\r\nPython: 3.8.9\r\npip: 23.1.2\r\nCUDA: CUDA 11.7\n\n### Installation\n\nNone\n\n### Environment\n\nUnable to install using `cupy-wheel`. 
Can install using `pip install cupy-cuda11x`.\n\n### Additional Information\n\n_No response_\n", "before_files": [{"content": "import ctypes\nimport pkg_resources\nimport os\nimport sys\nfrom typing import Dict, List, Optional\n\nfrom setuptools import setup\n\n\nVERSION = '13.0.0a1'\n\n# List of packages supported by this version of CuPy.\nPACKAGES = [\n 'cupy-cuda102',\n 'cupy-cuda110',\n 'cupy-cuda111',\n 'cupy-cuda11x',\n 'cupy-cuda12x',\n 'cupy-rocm-4-3',\n 'cupy-rocm-5-0',\n]\n\n# List of packages NOT supported by this version of CuPy.\nPACKAGES_OUTDATED = [\n 'cupy-cuda80',\n 'cupy-cuda90',\n 'cupy-cuda91',\n 'cupy-cuda92',\n 'cupy-cuda100',\n 'cupy-cuda101',\n 'cupy-cuda112',\n 'cupy-cuda113',\n 'cupy-cuda114',\n 'cupy-cuda115',\n 'cupy-cuda116',\n 'cupy-cuda117',\n 'cupy-rocm-4-0',\n 'cupy-rocm-4-2',\n]\n\n# List of sdist packages.\nPACKAGES_SDIST = [\n 'cupy',\n]\n\n\nclass AutoDetectionFailed(Exception):\n def __str__(self) -> str:\n return f'''\n============================================================\n{super().__str__()}\n============================================================\n'''\n\n\ndef _log(msg: str) -> None:\n sys.stdout.write(f'[cupy-wheel] {msg}\\n')\n sys.stdout.flush()\n\n\ndef _get_version_from_library(\n libnames: List[str],\n funcname: str,\n nvrtc: bool = False,\n) -> Optional[int]:\n \"\"\"Returns the library version from list of candidate libraries.\"\"\"\n\n for libname in libnames:\n try:\n _log(f'Looking for library: {libname}')\n runtime_so = ctypes.CDLL(libname)\n break\n except Exception as e:\n _log(f'Failed to open {libname}: {e}')\n else:\n _log('No more candidate library to find')\n return None\n\n func = getattr(runtime_so, funcname, None)\n if func is None:\n raise AutoDetectionFailed(\n f'{libname}: {func} could not be found')\n func.restype = ctypes.c_int\n\n if nvrtc:\n # nvrtcVersion\n func.argtypes = [\n ctypes.POINTER(ctypes.c_int),\n ctypes.POINTER(ctypes.c_int),\n ]\n major = ctypes.c_int()\n minor = ctypes.c_int()\n retval = func(major, minor)\n version = major.value * 1000 + minor.value * 10\n else:\n # cudaRuntimeGetVersion\n func.argtypes = [\n ctypes.POINTER(ctypes.c_int),\n ]\n version_ref = ctypes.c_int()\n retval = func(version_ref)\n version = version_ref.value\n\n if retval != 0: # NVRTC_SUCCESS or cudaSuccess\n raise AutoDetectionFailed(\n f'{libname}: {func} returned error: {retval}')\n _log(f'Detected version: {version}')\n return version\n\n\ndef _setup_win32_dll_directory() -> None:\n if not hasattr(os, 'add_dll_directory'):\n # Python 3.7 or earlier.\n return\n cuda_path = os.environ.get('CUDA_PATH', None)\n if cuda_path is None:\n _log('CUDA_PATH is not set.'\n 'cupy-wheel may not be able to discover NVRTC to probe version')\n return\n os.add_dll_directory(os.path.join(cuda_path, 'bin')) # type: ignore[attr-defined] # NOQA\n\n\ndef _get_cuda_version() -> Optional[int]:\n \"\"\"Returns the detected CUDA version or None.\"\"\"\n\n if sys.platform == 'linux':\n libnames = [\n 'libnvrtc.so.12',\n 'libnvrtc.so.11.2',\n 'libnvrtc.so.11.1',\n 'libnvrtc.so.11.0',\n 'libnvrtc.so.10.2',\n ]\n elif sys.platform == 'win32':\n libnames = [\n 'nvrtc64_120_0.dll',\n 'nvrtc64_112_0.dll',\n 'nvrtc64_111_0.dll',\n 'nvrtc64_110_0.dll',\n 'nvrtc64_102_0.dll',\n ]\n _setup_win32_dll_directory()\n else:\n _log(f'CUDA detection unsupported on platform: {sys.platform}')\n return None\n _log(f'Trying to detect CUDA version from libraries: {libnames}')\n version = _get_version_from_library(libnames, 'nvrtcVersion', True)\n return 
version\n\n\ndef _get_rocm_version() -> Optional[int]:\n \"\"\"Returns the detected ROCm version or None.\"\"\"\n if sys.platform == 'linux':\n libnames = ['libamdhip64.so']\n else:\n _log(f'ROCm detection unsupported on platform: {sys.platform}')\n return None\n version = _get_version_from_library(libnames, 'hipRuntimeGetVersion')\n return version\n\n\ndef _find_installed_packages() -> List[str]:\n \"\"\"Returns the list of CuPy packages installed in the environment.\"\"\"\n\n found = []\n for pkg in (PACKAGES + PACKAGES_OUTDATED + PACKAGES_SDIST):\n try:\n pkg_resources.get_distribution(pkg)\n found.append(pkg)\n except pkg_resources.DistributionNotFound:\n pass\n return found\n\n\ndef _cuda_version_to_package(ver: int) -> str:\n if ver < 10020:\n raise AutoDetectionFailed(\n f'Your CUDA version ({ver}) is too old.')\n elif ver < 11000:\n # CUDA 10.2\n suffix = '102'\n elif ver < 11010:\n # CUDA 11.0\n suffix = '110'\n elif ver < 11020:\n # CUDA 11.1\n suffix = '111'\n elif ver < 12000:\n # CUDA 11.2 ~ 11.x\n suffix = '11x'\n elif ver < 13000:\n # CUDA 12.x\n suffix = '12x'\n else:\n raise AutoDetectionFailed(\n f'Your CUDA version ({ver}) is too new.')\n return f'cupy-cuda{suffix}'\n\n\ndef _rocm_version_to_package(ver: int) -> str:\n \"\"\"\n ROCm 4.0.x = 3212\n ROCm 4.1.x = 3241\n ROCm 4.2.0 = 3275\n ROCm 4.3.0 = 40321300\n ROCm 4.3.1 = 40321331\n ROCm 4.5.0 = 40421401\n ROCm 4.5.1 = 40421432\n ROCm 5.0.0 = 50013601\n ROCm 5.1.0 = 50120531\n \"\"\"\n if 4_03_00000 <= ver < 4_04_00000:\n # ROCm 4.3\n suffix = '4-3'\n elif 5_00_00000 <= ver < 5_01_00000:\n # ROCm 5.0\n suffix = '5-0'\n else:\n raise AutoDetectionFailed(\n f'Your ROCm version ({ver}) is unsupported.')\n return f'cupy-rocm-{suffix}'\n\n\ndef infer_best_package() -> str:\n \"\"\"Returns the appropriate CuPy wheel package name for the environment.\"\"\"\n\n # Find the existing CuPy wheel installation.\n installed = _find_installed_packages()\n if 1 < len(installed):\n raise AutoDetectionFailed(\n 'You have multiple CuPy packages installed: \\n'\n f' {installed}\\n'\n 'Please uninstall all of them first, then try reinstalling.')\n\n elif 1 == len(installed):\n if installed[0] in PACKAGES_SDIST:\n raise AutoDetectionFailed(\n 'You already have CuPy installed via source'\n ' (pip install cupy).')\n if installed[0] in PACKAGES_OUTDATED:\n raise AutoDetectionFailed(\n f'You have CuPy package \"{installed[0]}\" installed, but the'\n f' package is not available for version {VERSION}.\\n'\n 'Hint: cupy-cuda{112~117} has been merged to cupy-cuda11x in '\n 'CuPy v11. 
Uninstall the package and try again.')\n return installed[0]\n\n # Try CUDA.\n version = _get_cuda_version()\n if version is not None:\n return _cuda_version_to_package(version)\n\n # Try ROCm.\n version = _get_rocm_version()\n if version is not None:\n return _rocm_version_to_package(version)\n\n raise AutoDetectionFailed(\n 'Unable to detect NVIDIA CUDA or AMD ROCm installation.')\n\n\ndef _get_cmdclass(tag: str) -> Dict[str, type]:\n try:\n import wheel.bdist_wheel\n except ModuleNotFoundError:\n return {}\n\n class bdist_wheel_with_tag(wheel.bdist_wheel.bdist_wheel): # type: ignore[misc] # NOQA\n def initialize_options(self) -> None:\n super().initialize_options()\n self.build_number = f'0_{tag}'\n\n return {\"bdist_wheel\": bdist_wheel_with_tag}\n\n\n#\n# Entrypoint\n#\n\ndef main() -> None:\n if os.environ.get('CUPY_UNIVERSAL_PKG_BUILD', None) is None:\n package = infer_best_package()\n requires = f'{package}=={VERSION}'\n _log(f'Installing package: {requires}')\n install_requires = [requires]\n tag = package\n else:\n _log('Building cupy-wheel package for release.')\n install_requires = []\n tag = '0'\n\n setup(\n name='cupy-wheel',\n version=f'{VERSION}',\n install_requires=install_requires,\n cmdclass=_get_cmdclass(tag),\n )\n\n\nif __name__ == '__main__':\n main()\n", "path": "install/universal_pkg/setup.py"}], "after_files": [{"content": "import ctypes\nimport pkg_resources\nimport os\nimport sys\nfrom typing import List, Optional\n\nfrom setuptools import setup\n\n\nVERSION = '13.0.0a1'\nMETA_VERSION = VERSION\n\n# List of packages supported by this version of CuPy.\nPACKAGES = [\n 'cupy-cuda102',\n 'cupy-cuda110',\n 'cupy-cuda111',\n 'cupy-cuda11x',\n 'cupy-cuda12x',\n 'cupy-rocm-4-3',\n 'cupy-rocm-5-0',\n]\n\n# List of packages NOT supported by this version of CuPy.\nPACKAGES_OUTDATED = [\n 'cupy-cuda80',\n 'cupy-cuda90',\n 'cupy-cuda91',\n 'cupy-cuda92',\n 'cupy-cuda100',\n 'cupy-cuda101',\n 'cupy-cuda112',\n 'cupy-cuda113',\n 'cupy-cuda114',\n 'cupy-cuda115',\n 'cupy-cuda116',\n 'cupy-cuda117',\n 'cupy-rocm-4-0',\n 'cupy-rocm-4-2',\n]\n\n# List of sdist packages.\nPACKAGES_SDIST = [\n 'cupy',\n]\n\n\nclass AutoDetectionFailed(Exception):\n def __str__(self) -> str:\n return f'''\n============================================================\n{super().__str__()}\n============================================================\n'''\n\n\ndef _log(msg: str) -> None:\n sys.stdout.write(f'[cupy-wheel] {msg}\\n')\n sys.stdout.flush()\n\n\ndef _get_version_from_library(\n libnames: List[str],\n funcname: str,\n nvrtc: bool = False,\n) -> Optional[int]:\n \"\"\"Returns the library version from list of candidate libraries.\"\"\"\n\n for libname in libnames:\n try:\n _log(f'Looking for library: {libname}')\n runtime_so = ctypes.CDLL(libname)\n break\n except Exception as e:\n _log(f'Failed to open {libname}: {e}')\n else:\n _log('No more candidate library to find')\n return None\n\n func = getattr(runtime_so, funcname, None)\n if func is None:\n raise AutoDetectionFailed(\n f'{libname}: {func} could not be found')\n func.restype = ctypes.c_int\n\n if nvrtc:\n # nvrtcVersion\n func.argtypes = [\n ctypes.POINTER(ctypes.c_int),\n ctypes.POINTER(ctypes.c_int),\n ]\n major = ctypes.c_int()\n minor = ctypes.c_int()\n retval = func(major, minor)\n version = major.value * 1000 + minor.value * 10\n else:\n # cudaRuntimeGetVersion\n func.argtypes = [\n ctypes.POINTER(ctypes.c_int),\n ]\n version_ref = ctypes.c_int()\n retval = func(version_ref)\n version = version_ref.value\n\n if retval != 0: # 
NVRTC_SUCCESS or cudaSuccess\n raise AutoDetectionFailed(\n f'{libname}: {func} returned error: {retval}')\n _log(f'Detected version: {version}')\n return version\n\n\ndef _setup_win32_dll_directory() -> None:\n if not hasattr(os, 'add_dll_directory'):\n # Python 3.7 or earlier.\n return\n cuda_path = os.environ.get('CUDA_PATH', None)\n if cuda_path is None:\n _log('CUDA_PATH is not set.'\n 'cupy-wheel may not be able to discover NVRTC to probe version')\n return\n os.add_dll_directory(os.path.join(cuda_path, 'bin')) # type: ignore[attr-defined] # NOQA\n\n\ndef _get_cuda_version() -> Optional[int]:\n \"\"\"Returns the detected CUDA version or None.\"\"\"\n\n if sys.platform == 'linux':\n libnames = [\n 'libnvrtc.so.12',\n 'libnvrtc.so.11.2',\n 'libnvrtc.so.11.1',\n 'libnvrtc.so.11.0',\n 'libnvrtc.so.10.2',\n ]\n elif sys.platform == 'win32':\n libnames = [\n 'nvrtc64_120_0.dll',\n 'nvrtc64_112_0.dll',\n 'nvrtc64_111_0.dll',\n 'nvrtc64_110_0.dll',\n 'nvrtc64_102_0.dll',\n ]\n _setup_win32_dll_directory()\n else:\n _log(f'CUDA detection unsupported on platform: {sys.platform}')\n return None\n _log(f'Trying to detect CUDA version from libraries: {libnames}')\n version = _get_version_from_library(libnames, 'nvrtcVersion', True)\n return version\n\n\ndef _get_rocm_version() -> Optional[int]:\n \"\"\"Returns the detected ROCm version or None.\"\"\"\n if sys.platform == 'linux':\n libnames = ['libamdhip64.so']\n else:\n _log(f'ROCm detection unsupported on platform: {sys.platform}')\n return None\n version = _get_version_from_library(libnames, 'hipRuntimeGetVersion')\n return version\n\n\ndef _find_installed_packages() -> List[str]:\n \"\"\"Returns the list of CuPy packages installed in the environment.\"\"\"\n\n found = []\n for pkg in (PACKAGES + PACKAGES_OUTDATED + PACKAGES_SDIST):\n try:\n pkg_resources.get_distribution(pkg)\n found.append(pkg)\n except pkg_resources.DistributionNotFound:\n pass\n return found\n\n\ndef _cuda_version_to_package(ver: int) -> str:\n if ver < 10020:\n raise AutoDetectionFailed(\n f'Your CUDA version ({ver}) is too old.')\n elif ver < 11000:\n # CUDA 10.2\n suffix = '102'\n elif ver < 11010:\n # CUDA 11.0\n suffix = '110'\n elif ver < 11020:\n # CUDA 11.1\n suffix = '111'\n elif ver < 12000:\n # CUDA 11.2 ~ 11.x\n suffix = '11x'\n elif ver < 13000:\n # CUDA 12.x\n suffix = '12x'\n else:\n raise AutoDetectionFailed(\n f'Your CUDA version ({ver}) is too new.')\n return f'cupy-cuda{suffix}'\n\n\ndef _rocm_version_to_package(ver: int) -> str:\n \"\"\"\n ROCm 4.0.x = 3212\n ROCm 4.1.x = 3241\n ROCm 4.2.0 = 3275\n ROCm 4.3.0 = 40321300\n ROCm 4.3.1 = 40321331\n ROCm 4.5.0 = 40421401\n ROCm 4.5.1 = 40421432\n ROCm 5.0.0 = 50013601\n ROCm 5.1.0 = 50120531\n \"\"\"\n if 4_03_00000 <= ver < 4_04_00000:\n # ROCm 4.3\n suffix = '4-3'\n elif 5_00_00000 <= ver < 5_01_00000:\n # ROCm 5.0\n suffix = '5-0'\n else:\n raise AutoDetectionFailed(\n f'Your ROCm version ({ver}) is unsupported.')\n return f'cupy-rocm-{suffix}'\n\n\ndef infer_best_package() -> str:\n \"\"\"Returns the appropriate CuPy wheel package name for the environment.\"\"\"\n\n # Find the existing CuPy wheel installation.\n installed = _find_installed_packages()\n if 1 < len(installed):\n raise AutoDetectionFailed(\n 'You have multiple CuPy packages installed: \\n'\n f' {installed}\\n'\n 'Please uninstall all of them first, then try reinstalling.')\n\n elif 1 == len(installed):\n if installed[0] in PACKAGES_SDIST:\n raise AutoDetectionFailed(\n 'You already have CuPy installed via source'\n ' (pip install 
cupy).')\n if installed[0] in PACKAGES_OUTDATED:\n raise AutoDetectionFailed(\n f'You have CuPy package \"{installed[0]}\" installed, but the'\n f' package is not available for version {VERSION}.\\n'\n 'Hint: cupy-cuda{112~117} has been merged to cupy-cuda11x in '\n 'CuPy v11. Uninstall the package and try again.')\n return installed[0]\n\n # Try CUDA.\n version = _get_cuda_version()\n if version is not None:\n return _cuda_version_to_package(version)\n\n # Try ROCm.\n version = _get_rocm_version()\n if version is not None:\n return _rocm_version_to_package(version)\n\n raise AutoDetectionFailed(\n 'Unable to detect NVIDIA CUDA or AMD ROCm installation.')\n\n\n#\n# Entrypoint\n#\n\ndef main() -> None:\n if os.environ.get('CUPY_UNIVERSAL_PKG_BUILD', None) is None:\n package = infer_best_package()\n requires = f'{package}=={VERSION}'\n _log(f'Installing package: {requires}')\n install_requires = [requires]\n else:\n _log('Building cupy-wheel package for release.')\n install_requires = []\n\n setup(\n name='cupy-wheel',\n version=META_VERSION,\n install_requires=install_requires,\n )\n\n\nif __name__ == '__main__':\n main()\n", "path": "install/universal_pkg/setup.py"}]} | 3,577 | 386 |
gh_patches_debug_12634 | rasdani/github-patches | git_diff | saleor__saleor-2201 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to add a new address on checkout of digital goods
### What I'm trying to achieve
Enter a new billing address on checkout when ordering a digital good.
### Steps to reproduce the problem
1. Make sure you have a default billing address;
1. Add to cart a book;
1. Go to checkout;
1. At billing address step, create a new address with a different country;
1. Correct any errors if the default billing address fields differs from the new country;
1. Place order;
1. Shipping address should not have been created.
### What I expected to happen
Have the new address used as the billing address on the order.
### What happened instead/how it failed
No address was created, and the previous address was used as the billing address for the order.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/checkout/views/summary.py`
Content:
```
1 from django.contrib import messages
2 from django.shortcuts import redirect
3 from django.template.response import TemplateResponse
4 from django.utils.translation import pgettext, pgettext_lazy
5
6 from ...account.forms import get_address_form
7 from ...account.models import Address
8 from ...core.exceptions import InsufficientStock
9 from ...order.emails import send_order_confirmation
10 from ..forms import (
11 AnonymousUserBillingForm, BillingAddressesForm,
12 BillingWithoutShippingAddressForm, NoteForm)
13
14
15 def create_order(checkout):
16 """Finalize a checkout session and create an order.
17
18 This is a helper function.
19
20 `checkout` is a `saleor.checkout.core.Checkout` instance.
21 """
22 order = checkout.create_order()
23 if not order:
24 return None, redirect('checkout:summary')
25 checkout.clear_storage()
26 checkout.cart.clear()
27 user = None if checkout.user.is_anonymous else checkout.user
28 msg = pgettext_lazy('Order status history entry', 'Order was placed')
29 order.history.create(user=user, content=msg)
30 send_order_confirmation.delay(order.pk)
31 return order, redirect('order:payment', token=order.token)
32
33
34 def handle_order_placement(request, checkout):
35 """Try to create an order and redirect the user as necessary.
36
37 This is a helper function.
38 """
39 try:
40 order, redirect_url = create_order(checkout)
41 except InsufficientStock:
42 return redirect('cart:index')
43 if not order:
44 msg = pgettext('Checkout warning', 'Please review your checkout.')
45 messages.warning(request, msg)
46 return redirect_url
47
48
49 def get_billing_forms_with_shipping(
50 data, addresses, billing_address, shipping_address):
51 """Get billing form based on a the current billing and shipping data."""
52 if billing_address == shipping_address:
53 address_form, preview = get_address_form(
54 data, country_code=shipping_address.country.code,
55 autocomplete_type='billing',
56 initial={'country': shipping_address.country.code},
57 instance=None)
58 addresses_form = BillingAddressesForm(
59 data, additional_addresses=addresses, initial={
60 'address': BillingAddressesForm.SHIPPING_ADDRESS})
61 elif billing_address.id is None:
62 address_form, preview = get_address_form(
63 data, country_code=billing_address.country.code,
64 autocomplete_type='billing',
65 initial={'country': billing_address.country.code},
66 instance=billing_address)
67 addresses_form = BillingAddressesForm(
68 data, additional_addresses=addresses, initial={
69 'address': BillingAddressesForm.NEW_ADDRESS})
70 else:
71 address_form, preview = get_address_form(
72 data, country_code=billing_address.country.code,
73 autocomplete_type='billing',
74 initial={'country': billing_address.country})
75 addresses_form = BillingAddressesForm(
76 data, additional_addresses=addresses, initial={
77 'address': billing_address.id})
78 if addresses_form.is_valid() and not preview:
79 address_id = addresses_form.cleaned_data['address']
80 if address_id == BillingAddressesForm.SHIPPING_ADDRESS:
81 return address_form, addresses_form, shipping_address
82 elif address_id != BillingAddressesForm.NEW_ADDRESS:
83 address = addresses.get(id=address_id)
84 return address_form, addresses_form, address
85 elif address_form.is_valid():
86 return address_form, addresses_form, address_form.instance
87 return address_form, addresses_form, None
88
89
90 def summary_with_shipping_view(request, checkout):
91 """Display order summary with billing forms for a logged in user.
92
93 Will create an order if all data is valid.
94 """
95 note_form = NoteForm(request.POST or None, checkout=checkout)
96 if note_form.is_valid():
97 note_form.set_checkout_note()
98
99 if request.user.is_authenticated:
100 additional_addresses = request.user.addresses.all()
101 else:
102 additional_addresses = Address.objects.none()
103 address_form, addresses_form, address = get_billing_forms_with_shipping(
104 request.POST or None, additional_addresses,
105 checkout.billing_address or Address(country=request.country),
106 checkout.shipping_address)
107 if address is not None:
108 checkout.billing_address = address
109 return handle_order_placement(request, checkout)
110 return TemplateResponse(
111 request, 'checkout/summary.html', context={
112 'addresses_form': addresses_form, 'address_form': address_form,
113 'checkout': checkout,
114 'additional_addresses': additional_addresses,
115 'note_form': note_form})
116
117
118 def anonymous_summary_without_shipping(request, checkout):
119 """Display order summary with billing forms for an unauthorized user.
120
121 Will create an order if all data is valid.
122 """
123 note_form = NoteForm(request.POST or None, checkout=checkout)
124 if note_form.is_valid():
125 note_form.set_checkout_note()
126 user_form = AnonymousUserBillingForm(
127 request.POST or None, initial={'email': checkout.email})
128 billing_address = checkout.billing_address
129 if billing_address:
130 address_form, preview = get_address_form(
131 request.POST or None, country_code=billing_address.country.code,
132 autocomplete_type='billing', instance=billing_address)
133 else:
134 address_form, preview = get_address_form(
135 request.POST or None, country_code=request.country.code,
136 autocomplete_type='billing', initial={'country': request.country})
137 if all([user_form.is_valid(), address_form.is_valid()]) and not preview:
138 checkout.email = user_form.cleaned_data['email']
139 checkout.billing_address = address_form.instance
140 return handle_order_placement(request, checkout)
141 return TemplateResponse(
142 request, 'checkout/summary_without_shipping.html', context={
143 'user_form': user_form, 'address_form': address_form,
144 'checkout': checkout,
145 'note_form': note_form})
146
147
148 def summary_without_shipping(request, checkout):
149 """Display order summary for cases where shipping is not required.
150
151 Will create an order if all data is valid.
152 """
153 note_form = NoteForm(request.POST or None, checkout=checkout)
154 if note_form.is_valid():
155 note_form.set_checkout_note()
156
157 billing_address = checkout.billing_address
158 user_addresses = request.user.addresses.all()
159 if billing_address and billing_address.id:
160 address_form, preview = get_address_form(
161 request.POST or None, autocomplete_type='billing',
162 initial={'country': request.country},
163 country_code=billing_address.country.code,
164 instance=billing_address)
165 addresses_form = BillingWithoutShippingAddressForm(
166 request.POST or None, additional_addresses=user_addresses,
167 initial={'address': billing_address.id})
168 elif billing_address:
169 address_form, preview = get_address_form(
170 request.POST or None, autocomplete_type='billing',
171 instance=billing_address,
172 country_code=billing_address.country.code)
173 addresses_form = BillingWithoutShippingAddressForm(
174 request.POST or None, additional_addresses=user_addresses)
175 else:
176 address_form, preview = get_address_form(
177 request.POST or None, autocomplete_type='billing',
178 initial={'country': request.country},
179 country_code=request.country.code)
180 addresses_form = BillingWithoutShippingAddressForm(
181 request.POST or None, additional_addresses=user_addresses)
182
183 if addresses_form.is_valid():
184 address_id = addresses_form.cleaned_data['address']
185 if address_id != BillingWithoutShippingAddressForm.NEW_ADDRESS:
186 checkout.billing_address = user_addresses.get(id=address_id)
187 return handle_order_placement(request, checkout)
188 elif address_form.is_valid() and not preview:
189 checkout.billing_address = address_form.instance
190 return handle_order_placement(request, checkout)
191 return TemplateResponse(
192 request, 'checkout/summary_without_shipping.html', context={
193 'addresses_form': addresses_form, 'address_form': address_form,
194 'checkout': checkout, 'additional_addresses': user_addresses,
195 'note_form': note_form})
196
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/checkout/views/summary.py b/saleor/checkout/views/summary.py
--- a/saleor/checkout/views/summary.py
+++ b/saleor/checkout/views/summary.py
@@ -160,8 +160,7 @@
address_form, preview = get_address_form(
request.POST or None, autocomplete_type='billing',
initial={'country': request.country},
- country_code=billing_address.country.code,
- instance=billing_address)
+ country_code=billing_address.country.code)
addresses_form = BillingWithoutShippingAddressForm(
request.POST or None, additional_addresses=user_addresses,
initial={'address': billing_address.id})
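
The removed `instance=billing_address` argument is the crux of the fix: a Django `ModelForm` bound to an existing instance writes the posted data onto that saved object, whereas a form built without `instance` gets a fresh, unsaved one. A minimal sketch of that binding behaviour, using plain-Python stand-ins rather than Saleor's real `get_address_form`/`Address`:

```python
# Toy stand-ins for Address/AddressForm -- not Saleor's real classes -- to
# illustrate the instance-binding behaviour the patch above relies on.
from dataclasses import dataclass


@dataclass
class Address:
    country: str = ""
    city: str = ""


class AddressForm:
    """Mimics a ModelForm: cleaned data is written onto ``self.instance``."""

    def __init__(self, data, instance=None):
        # With ``instance`` given, the form edits that saved object in place;
        # without it, a brand-new Address is created (Django does the same).
        self.instance = instance if instance is not None else Address()
        self.data = data

    def is_valid(self):
        for key, value in self.data.items():
            setattr(self.instance, key, value)
        return True


default_address = Address(country="DE", city="Berlin")

# Old behaviour: the "new" billing address silently reuses the saved default.
form = AddressForm({"country": "PL", "city": "Warsaw"}, instance=default_address)
form.is_valid()
assert form.instance is default_address

# Patched behaviour: no instance is passed, so a separate Address is built.
form = AddressForm({"country": "PL", "city": "Warsaw"})
form.is_valid()
assert form.instance is not default_address
```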
| {"golden_diff": "diff --git a/saleor/checkout/views/summary.py b/saleor/checkout/views/summary.py\n--- a/saleor/checkout/views/summary.py\n+++ b/saleor/checkout/views/summary.py\n@@ -160,8 +160,7 @@\n address_form, preview = get_address_form(\n request.POST or None, autocomplete_type='billing',\n initial={'country': request.country},\n- country_code=billing_address.country.code,\n- instance=billing_address)\n+ country_code=billing_address.country.code)\n addresses_form = BillingWithoutShippingAddressForm(\n request.POST or None, additional_addresses=user_addresses,\n initial={'address': billing_address.id})\n", "issue": "Unable to add a new address on checkout of digital goods\n\r\n\r\n### What I'm trying to achieve\r\n\r\nEnter a new billing address on checkout when ordering a digital good.\r\n\r\n### Steps to reproduce the problem\r\n\r\n1. Make sure you have a default billing address;\r\n1. Add to cart a book;\r\n1. Go to checkout;\r\n1. At billing address step, create a new address with a different country;\r\n1. Correct any errors if the default billing address fields differs from the new country;\r\n1. Place order;\r\n1. Shipping address should not have been created.\r\n\r\n### What I expected to happen\r\nHave the new address as billing address on order.\r\n\r\n### What happened instead/how it failed\r\nGot no address created, and got the previous address as billing address for order.\r\n\n", "before_files": [{"content": "from django.contrib import messages\nfrom django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.translation import pgettext, pgettext_lazy\n\nfrom ...account.forms import get_address_form\nfrom ...account.models import Address\nfrom ...core.exceptions import InsufficientStock\nfrom ...order.emails import send_order_confirmation\nfrom ..forms import (\n AnonymousUserBillingForm, BillingAddressesForm,\n BillingWithoutShippingAddressForm, NoteForm)\n\n\ndef create_order(checkout):\n \"\"\"Finalize a checkout session and create an order.\n\n This is a helper function.\n\n `checkout` is a `saleor.checkout.core.Checkout` instance.\n \"\"\"\n order = checkout.create_order()\n if not order:\n return None, redirect('checkout:summary')\n checkout.clear_storage()\n checkout.cart.clear()\n user = None if checkout.user.is_anonymous else checkout.user\n msg = pgettext_lazy('Order status history entry', 'Order was placed')\n order.history.create(user=user, content=msg)\n send_order_confirmation.delay(order.pk)\n return order, redirect('order:payment', token=order.token)\n\n\ndef handle_order_placement(request, checkout):\n \"\"\"Try to create an order and redirect the user as necessary.\n\n This is a helper function.\n \"\"\"\n try:\n order, redirect_url = create_order(checkout)\n except InsufficientStock:\n return redirect('cart:index')\n if not order:\n msg = pgettext('Checkout warning', 'Please review your checkout.')\n messages.warning(request, msg)\n return redirect_url\n\n\ndef get_billing_forms_with_shipping(\n data, addresses, billing_address, shipping_address):\n \"\"\"Get billing form based on a the current billing and shipping data.\"\"\"\n if billing_address == shipping_address:\n address_form, preview = get_address_form(\n data, country_code=shipping_address.country.code,\n autocomplete_type='billing',\n initial={'country': shipping_address.country.code},\n instance=None)\n addresses_form = BillingAddressesForm(\n data, additional_addresses=addresses, initial={\n 'address': 
BillingAddressesForm.SHIPPING_ADDRESS})\n elif billing_address.id is None:\n address_form, preview = get_address_form(\n data, country_code=billing_address.country.code,\n autocomplete_type='billing',\n initial={'country': billing_address.country.code},\n instance=billing_address)\n addresses_form = BillingAddressesForm(\n data, additional_addresses=addresses, initial={\n 'address': BillingAddressesForm.NEW_ADDRESS})\n else:\n address_form, preview = get_address_form(\n data, country_code=billing_address.country.code,\n autocomplete_type='billing',\n initial={'country': billing_address.country})\n addresses_form = BillingAddressesForm(\n data, additional_addresses=addresses, initial={\n 'address': billing_address.id})\n if addresses_form.is_valid() and not preview:\n address_id = addresses_form.cleaned_data['address']\n if address_id == BillingAddressesForm.SHIPPING_ADDRESS:\n return address_form, addresses_form, shipping_address\n elif address_id != BillingAddressesForm.NEW_ADDRESS:\n address = addresses.get(id=address_id)\n return address_form, addresses_form, address\n elif address_form.is_valid():\n return address_form, addresses_form, address_form.instance\n return address_form, addresses_form, None\n\n\ndef summary_with_shipping_view(request, checkout):\n \"\"\"Display order summary with billing forms for a logged in user.\n\n Will create an order if all data is valid.\n \"\"\"\n note_form = NoteForm(request.POST or None, checkout=checkout)\n if note_form.is_valid():\n note_form.set_checkout_note()\n\n if request.user.is_authenticated:\n additional_addresses = request.user.addresses.all()\n else:\n additional_addresses = Address.objects.none()\n address_form, addresses_form, address = get_billing_forms_with_shipping(\n request.POST or None, additional_addresses,\n checkout.billing_address or Address(country=request.country),\n checkout.shipping_address)\n if address is not None:\n checkout.billing_address = address\n return handle_order_placement(request, checkout)\n return TemplateResponse(\n request, 'checkout/summary.html', context={\n 'addresses_form': addresses_form, 'address_form': address_form,\n 'checkout': checkout,\n 'additional_addresses': additional_addresses,\n 'note_form': note_form})\n\n\ndef anonymous_summary_without_shipping(request, checkout):\n \"\"\"Display order summary with billing forms for an unauthorized user.\n\n Will create an order if all data is valid.\n \"\"\"\n note_form = NoteForm(request.POST or None, checkout=checkout)\n if note_form.is_valid():\n note_form.set_checkout_note()\n user_form = AnonymousUserBillingForm(\n request.POST or None, initial={'email': checkout.email})\n billing_address = checkout.billing_address\n if billing_address:\n address_form, preview = get_address_form(\n request.POST or None, country_code=billing_address.country.code,\n autocomplete_type='billing', instance=billing_address)\n else:\n address_form, preview = get_address_form(\n request.POST or None, country_code=request.country.code,\n autocomplete_type='billing', initial={'country': request.country})\n if all([user_form.is_valid(), address_form.is_valid()]) and not preview:\n checkout.email = user_form.cleaned_data['email']\n checkout.billing_address = address_form.instance\n return handle_order_placement(request, checkout)\n return TemplateResponse(\n request, 'checkout/summary_without_shipping.html', context={\n 'user_form': user_form, 'address_form': address_form,\n 'checkout': checkout,\n 'note_form': note_form})\n\n\ndef summary_without_shipping(request, 
checkout):\n \"\"\"Display order summary for cases where shipping is not required.\n\n Will create an order if all data is valid.\n \"\"\"\n note_form = NoteForm(request.POST or None, checkout=checkout)\n if note_form.is_valid():\n note_form.set_checkout_note()\n\n billing_address = checkout.billing_address\n user_addresses = request.user.addresses.all()\n if billing_address and billing_address.id:\n address_form, preview = get_address_form(\n request.POST or None, autocomplete_type='billing',\n initial={'country': request.country},\n country_code=billing_address.country.code,\n instance=billing_address)\n addresses_form = BillingWithoutShippingAddressForm(\n request.POST or None, additional_addresses=user_addresses,\n initial={'address': billing_address.id})\n elif billing_address:\n address_form, preview = get_address_form(\n request.POST or None, autocomplete_type='billing',\n instance=billing_address,\n country_code=billing_address.country.code)\n addresses_form = BillingWithoutShippingAddressForm(\n request.POST or None, additional_addresses=user_addresses)\n else:\n address_form, preview = get_address_form(\n request.POST or None, autocomplete_type='billing',\n initial={'country': request.country},\n country_code=request.country.code)\n addresses_form = BillingWithoutShippingAddressForm(\n request.POST or None, additional_addresses=user_addresses)\n\n if addresses_form.is_valid():\n address_id = addresses_form.cleaned_data['address']\n if address_id != BillingWithoutShippingAddressForm.NEW_ADDRESS:\n checkout.billing_address = user_addresses.get(id=address_id)\n return handle_order_placement(request, checkout)\n elif address_form.is_valid() and not preview:\n checkout.billing_address = address_form.instance\n return handle_order_placement(request, checkout)\n return TemplateResponse(\n request, 'checkout/summary_without_shipping.html', context={\n 'addresses_form': addresses_form, 'address_form': address_form,\n 'checkout': checkout, 'additional_addresses': user_addresses,\n 'note_form': note_form})\n", "path": "saleor/checkout/views/summary.py"}], "after_files": [{"content": "from django.contrib import messages\nfrom django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.translation import pgettext, pgettext_lazy\n\nfrom ...account.forms import get_address_form\nfrom ...account.models import Address\nfrom ...core.exceptions import InsufficientStock\nfrom ...order.emails import send_order_confirmation\nfrom ..forms import (\n AnonymousUserBillingForm, BillingAddressesForm,\n BillingWithoutShippingAddressForm, NoteForm)\n\n\ndef create_order(checkout):\n \"\"\"Finalize a checkout session and create an order.\n\n This is a helper function.\n\n `checkout` is a `saleor.checkout.core.Checkout` instance.\n \"\"\"\n order = checkout.create_order()\n if not order:\n return None, redirect('checkout:summary')\n checkout.clear_storage()\n checkout.cart.clear()\n user = None if checkout.user.is_anonymous else checkout.user\n msg = pgettext_lazy('Order status history entry', 'Order was placed')\n order.history.create(user=user, content=msg)\n send_order_confirmation.delay(order.pk)\n return order, redirect('order:payment', token=order.token)\n\n\ndef handle_order_placement(request, checkout):\n \"\"\"Try to create an order and redirect the user as necessary.\n\n This is a helper function.\n \"\"\"\n try:\n order, redirect_url = create_order(checkout)\n except InsufficientStock:\n return redirect('cart:index')\n if not order:\n msg = 
pgettext('Checkout warning', 'Please review your checkout.')\n messages.warning(request, msg)\n return redirect_url\n\n\ndef get_billing_forms_with_shipping(\n data, addresses, billing_address, shipping_address):\n \"\"\"Get billing form based on a the current billing and shipping data.\"\"\"\n if billing_address == shipping_address:\n address_form, preview = get_address_form(\n data, country_code=shipping_address.country.code,\n autocomplete_type='billing',\n initial={'country': shipping_address.country.code},\n instance=None)\n addresses_form = BillingAddressesForm(\n data, additional_addresses=addresses, initial={\n 'address': BillingAddressesForm.SHIPPING_ADDRESS})\n elif billing_address.id is None:\n address_form, preview = get_address_form(\n data, country_code=billing_address.country.code,\n autocomplete_type='billing',\n initial={'country': billing_address.country.code},\n instance=billing_address)\n addresses_form = BillingAddressesForm(\n data, additional_addresses=addresses, initial={\n 'address': BillingAddressesForm.NEW_ADDRESS})\n else:\n address_form, preview = get_address_form(\n data, country_code=billing_address.country.code,\n autocomplete_type='billing',\n initial={'country': billing_address.country})\n addresses_form = BillingAddressesForm(\n data, additional_addresses=addresses, initial={\n 'address': billing_address.id})\n if addresses_form.is_valid() and not preview:\n address_id = addresses_form.cleaned_data['address']\n if address_id == BillingAddressesForm.SHIPPING_ADDRESS:\n return address_form, addresses_form, shipping_address\n elif address_id != BillingAddressesForm.NEW_ADDRESS:\n address = addresses.get(id=address_id)\n return address_form, addresses_form, address\n elif address_form.is_valid():\n return address_form, addresses_form, address_form.instance\n return address_form, addresses_form, None\n\n\ndef summary_with_shipping_view(request, checkout):\n \"\"\"Display order summary with billing forms for a logged in user.\n\n Will create an order if all data is valid.\n \"\"\"\n note_form = NoteForm(request.POST or None, checkout=checkout)\n if note_form.is_valid():\n note_form.set_checkout_note()\n\n if request.user.is_authenticated:\n additional_addresses = request.user.addresses.all()\n else:\n additional_addresses = Address.objects.none()\n address_form, addresses_form, address = get_billing_forms_with_shipping(\n request.POST or None, additional_addresses,\n checkout.billing_address or Address(country=request.country),\n checkout.shipping_address)\n if address is not None:\n checkout.billing_address = address\n return handle_order_placement(request, checkout)\n return TemplateResponse(\n request, 'checkout/summary.html', context={\n 'addresses_form': addresses_form, 'address_form': address_form,\n 'checkout': checkout,\n 'additional_addresses': additional_addresses,\n 'note_form': note_form})\n\n\ndef anonymous_summary_without_shipping(request, checkout):\n \"\"\"Display order summary with billing forms for an unauthorized user.\n\n Will create an order if all data is valid.\n \"\"\"\n note_form = NoteForm(request.POST or None, checkout=checkout)\n if note_form.is_valid():\n note_form.set_checkout_note()\n user_form = AnonymousUserBillingForm(\n request.POST or None, initial={'email': checkout.email})\n billing_address = checkout.billing_address\n if billing_address:\n address_form, preview = get_address_form(\n request.POST or None, country_code=billing_address.country.code,\n autocomplete_type='billing', instance=billing_address)\n else:\n 
address_form, preview = get_address_form(\n request.POST or None, country_code=request.country.code,\n autocomplete_type='billing', initial={'country': request.country})\n if all([user_form.is_valid(), address_form.is_valid()]) and not preview:\n checkout.email = user_form.cleaned_data['email']\n checkout.billing_address = address_form.instance\n return handle_order_placement(request, checkout)\n return TemplateResponse(\n request, 'checkout/summary_without_shipping.html', context={\n 'user_form': user_form, 'address_form': address_form,\n 'checkout': checkout,\n 'note_form': note_form})\n\n\ndef summary_without_shipping(request, checkout):\n \"\"\"Display order summary for cases where shipping is not required.\n\n Will create an order if all data is valid.\n \"\"\"\n note_form = NoteForm(request.POST or None, checkout=checkout)\n if note_form.is_valid():\n note_form.set_checkout_note()\n\n billing_address = checkout.billing_address\n user_addresses = request.user.addresses.all()\n if billing_address and billing_address.id:\n address_form, preview = get_address_form(\n request.POST or None, autocomplete_type='billing',\n initial={'country': request.country},\n country_code=billing_address.country.code)\n addresses_form = BillingWithoutShippingAddressForm(\n request.POST or None, additional_addresses=user_addresses,\n initial={'address': billing_address.id})\n elif billing_address:\n address_form, preview = get_address_form(\n request.POST or None, autocomplete_type='billing',\n instance=billing_address,\n country_code=billing_address.country.code)\n addresses_form = BillingWithoutShippingAddressForm(\n request.POST or None, additional_addresses=user_addresses)\n else:\n address_form, preview = get_address_form(\n request.POST or None, autocomplete_type='billing',\n initial={'country': request.country},\n country_code=request.country.code)\n addresses_form = BillingWithoutShippingAddressForm(\n request.POST or None, additional_addresses=user_addresses)\n\n if addresses_form.is_valid():\n address_id = addresses_form.cleaned_data['address']\n if address_id != BillingWithoutShippingAddressForm.NEW_ADDRESS:\n checkout.billing_address = user_addresses.get(id=address_id)\n return handle_order_placement(request, checkout)\n elif address_form.is_valid() and not preview:\n checkout.billing_address = address_form.instance\n return handle_order_placement(request, checkout)\n return TemplateResponse(\n request, 'checkout/summary_without_shipping.html', context={\n 'addresses_form': addresses_form, 'address_form': address_form,\n 'checkout': checkout, 'additional_addresses': user_addresses,\n 'note_form': note_form})\n", "path": "saleor/checkout/views/summary.py"}]} | 2,508 | 148 |
gh_patches_debug_5122 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-3044 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[dev/stage] auto-fill-in overwrites my bplan-name
**URL:** https://meinberlin-stage.liqd.net/dashboard/projects/caro-testing-new-bplan-mail-2/bplan/
**user:** initiator adding a bplan
**expected behaviour:** I can use autofill to add my e-mail address
**behaviour:** if I do so, the title of the bplan is overwritten by my name, but as it is far up the form I don't notice it.
**important screensize:**
**device & browser:** mac, chrome
**Comment/Question:** is that even something we can influence?
Screenshot?
<img width="673" alt="Bildschirmfoto 2020-07-10 um 11 02 30" src="https://user-images.githubusercontent.com/35491681/87137579-6b0eaf80-c29d-11ea-928f-c888dc8eb430.png">
<img width="673" alt="Bildschirmfoto 2020-07-10 um 11 06 10" src="https://user-images.githubusercontent.com/35491681/87137586-6cd87300-c29d-11ea-965d-74b4ecba8bc8.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/bplan/forms.py`
Content:
```
1 from django import forms
2
3 from meinberlin.apps.extprojects.forms import ExternalProjectCreateForm
4 from meinberlin.apps.extprojects.forms import ExternalProjectForm
5
6 from . import models
7
8
9 class StatementForm(forms.ModelForm):
10 class Meta:
11 model = models.Statement
12 fields = ['name', 'email', 'statement',
13 'street_number', 'postal_code_city']
14
15
16 class BplanProjectCreateForm(ExternalProjectCreateForm):
17
18 class Meta:
19 model = models.Bplan
20 fields = ['name', 'description', 'tile_image', 'tile_image_copyright']
21
22
23 class BplanProjectForm(ExternalProjectForm):
24
25 class Meta:
26 model = models.Bplan
27 fields = ['name', 'identifier', 'url', 'description', 'tile_image',
28 'tile_image_copyright', 'is_archived', 'office_worker_email',
29 'start_date', 'end_date']
30 required_for_project_publish = ['name', 'url', 'description',
31 'office_worker_email']
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/meinberlin/apps/bplan/forms.py b/meinberlin/apps/bplan/forms.py
--- a/meinberlin/apps/bplan/forms.py
+++ b/meinberlin/apps/bplan/forms.py
@@ -29,3 +29,9 @@
'start_date', 'end_date']
required_for_project_publish = ['name', 'url', 'description',
'office_worker_email']
+
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.fields['name'].widget.attrs.update({
+ 'autocomplete': 'off', 'autofill': 'off'
+ })
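
The fix does not change any form logic; it only asks the browser not to autofill the project-name field, so autofilling the office-worker e-mail no longer dumps the user's name into the title. A small self-contained sketch of what those widget attributes amount to in the rendered HTML (plain string formatting here, not Django's real widget machinery):

```python
# Simplified rendering helper -- not Django's widget code -- showing the
# effect of the ``autocomplete``/``autofill`` attributes added by the patch.
def render_text_input(name, attrs=None):
    extra = "".join(f' {key}="{value}"' for key, value in (attrs or {}).items())
    return f'<input type="text" name="{name}"{extra}>'


# Before the fix: nothing stops the browser from autofilling the title field.
print(render_text_input("name"))
# -> <input type="text" name="name">

# After the fix: the name field opts out, so autofilling the e-mail address
# further down no longer overwrites the bplan title.
print(render_text_input("name", {"autocomplete": "off", "autofill": "off"}))
# -> <input type="text" name="name" autocomplete="off" autofill="off">
```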
| {"golden_diff": "diff --git a/meinberlin/apps/bplan/forms.py b/meinberlin/apps/bplan/forms.py\n--- a/meinberlin/apps/bplan/forms.py\n+++ b/meinberlin/apps/bplan/forms.py\n@@ -29,3 +29,9 @@\n 'start_date', 'end_date']\n required_for_project_publish = ['name', 'url', 'description',\n 'office_worker_email']\n+\n+ def __init__(self, *args, **kwargs):\n+ super().__init__(*args, **kwargs)\n+ self.fields['name'].widget.attrs.update({\n+ 'autocomplete': 'off', 'autofill': 'off'\n+ })\n", "issue": "[dev/stage] auto-fill-in overwrites my bplan-name\n**URL:** https://meinberlin-stage.liqd.net/dashboard/projects/caro-testing-new-bplan-mail-2/bplan/\r\n**user:** initiator addin bplan\r\n**expected behaviour:** I can use autofill to add my mail-address\r\n**behaviour:** if I do so, the title of bplan is overwritten by my name but as it is far up the form I don't notice it.\r\n**important screensize:**\r\n**device & browser:** mac, chrome\r\n**Comment/Question:** is that even something we can influence?\r\n\r\nScreenshot?\r\n<img width=\"673\" alt=\"Bildschirmfoto 2020-07-10 um 11 02 30\" src=\"https://user-images.githubusercontent.com/35491681/87137579-6b0eaf80-c29d-11ea-928f-c888dc8eb430.png\">\r\n<img width=\"673\" alt=\"Bildschirmfoto 2020-07-10 um 11 06 10\" src=\"https://user-images.githubusercontent.com/35491681/87137586-6cd87300-c29d-11ea-965d-74b4ecba8bc8.png\">\r\n\r\n\n", "before_files": [{"content": "from django import forms\n\nfrom meinberlin.apps.extprojects.forms import ExternalProjectCreateForm\nfrom meinberlin.apps.extprojects.forms import ExternalProjectForm\n\nfrom . import models\n\n\nclass StatementForm(forms.ModelForm):\n class Meta:\n model = models.Statement\n fields = ['name', 'email', 'statement',\n 'street_number', 'postal_code_city']\n\n\nclass BplanProjectCreateForm(ExternalProjectCreateForm):\n\n class Meta:\n model = models.Bplan\n fields = ['name', 'description', 'tile_image', 'tile_image_copyright']\n\n\nclass BplanProjectForm(ExternalProjectForm):\n\n class Meta:\n model = models.Bplan\n fields = ['name', 'identifier', 'url', 'description', 'tile_image',\n 'tile_image_copyright', 'is_archived', 'office_worker_email',\n 'start_date', 'end_date']\n required_for_project_publish = ['name', 'url', 'description',\n 'office_worker_email']\n", "path": "meinberlin/apps/bplan/forms.py"}], "after_files": [{"content": "from django import forms\n\nfrom meinberlin.apps.extprojects.forms import ExternalProjectCreateForm\nfrom meinberlin.apps.extprojects.forms import ExternalProjectForm\n\nfrom . import models\n\n\nclass StatementForm(forms.ModelForm):\n class Meta:\n model = models.Statement\n fields = ['name', 'email', 'statement',\n 'street_number', 'postal_code_city']\n\n\nclass BplanProjectCreateForm(ExternalProjectCreateForm):\n\n class Meta:\n model = models.Bplan\n fields = ['name', 'description', 'tile_image', 'tile_image_copyright']\n\n\nclass BplanProjectForm(ExternalProjectForm):\n\n class Meta:\n model = models.Bplan\n fields = ['name', 'identifier', 'url', 'description', 'tile_image',\n 'tile_image_copyright', 'is_archived', 'office_worker_email',\n 'start_date', 'end_date']\n required_for_project_publish = ['name', 'url', 'description',\n 'office_worker_email']\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields['name'].widget.attrs.update({\n 'autocomplete': 'off', 'autofill': 'off'\n })\n", "path": "meinberlin/apps/bplan/forms.py"}]} | 854 | 146 |
gh_patches_debug_5041 | rasdani/github-patches | git_diff | dask__dask-256 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dot_graph does not work in stable version
I try to generate visual graphs as [described in the documentation](http://dask.pydata.org/en/latest/inspect.html), but get:
`'module' object has no attribute 'to_pydot'`
Graphviz is installed with Homebrew. Dask is installed from conda (latest stable release):
```
In [15]: dask.__version__
Out[15]: '0.5.0'
```
The code and traceback are below (I had to replace `blockshape` with `chunks`; otherwise it did not create a task graph):
``` python
In [1]:
import dask.array as da
from dask.dot import dot_graph
In [2]:
x = da.ones((5, 15), chunks=(5, 5))
In [5]:
d = (x + 1).dask
In [6]:
dot_graph(d)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-6-c797e633866d> in <module>()
----> 1 dot_graph(d)
/Users/koldunov/miniconda/lib/python2.7/site-packages/dask/dot.pyc in dot_graph(d, filename, **kwargs)
73 def dot_graph(d, filename='mydask', **kwargs):
74 dg = to_networkx(d, **kwargs)
---> 75 write_networkx_to_dot(dg, filename=filename)
76
77
/Users/koldunov/miniconda/lib/python2.7/site-packages/dask/dot.pyc in write_networkx_to_dot(dg, filename)
61 def write_networkx_to_dot(dg, filename='mydask'):
62 import os
---> 63 p = nx.to_pydot(dg)
64 p.set_rankdir('BT')
65 with open(filename + '.dot', 'w') as f:
AttributeError: 'module' object has no attribute 'to_pydot'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dask/dot.py`
Content:
```
1 from __future__ import absolute_import, division, print_function
2
3 import networkx as nx
4 from dask.core import istask, get_dependencies
5
6
7 def make_hashable(x):
8 try:
9 hash(x)
10 return x
11 except TypeError:
12 return hash(str(x))
13
14
15 def lower(func):
16 while hasattr(func, 'func'):
17 func = func.func
18 return func
19
20 def name(func):
21 try:
22 return lower(func).__name__
23 except AttributeError:
24 return 'func'
25
26
27 def to_networkx(d, data_attributes=None, function_attributes=None):
28 if data_attributes is None:
29 data_attributes = dict()
30 if function_attributes is None:
31 function_attributes = dict()
32
33 g = nx.DiGraph()
34
35 for k, v in sorted(d.items(), key=lambda x: x[0]):
36 g.add_node(k, shape='box', **data_attributes.get(k, dict()))
37 if istask(v):
38 func, args = v[0], v[1:]
39 func_node = make_hashable((v, 'function'))
40 g.add_node(func_node,
41 shape='circle',
42 label=name(func),
43 **function_attributes.get(k, dict()))
44 g.add_edge(func_node, k)
45 for dep in sorted(get_dependencies(d, k)):
46 arg2 = make_hashable(dep)
47 g.add_node(arg2,
48 label=str(dep),
49 shape='box',
50 **data_attributes.get(dep, dict()))
51 g.add_edge(arg2, func_node)
52 else:
53 if v not in d:
54 g.add_node(k, label='%s=%s' % (k, v), **data_attributes.get(k, dict()))
55 else: # alias situation
56 g.add_edge(v, k)
57
58 return g
59
60
61 def write_networkx_to_dot(dg, filename='mydask'):
62 import os
63 p = nx.to_pydot(dg)
64 p.set_rankdir('BT')
65 with open(filename + '.dot', 'w') as f:
66 f.write(p.to_string())
67
68 os.system('dot -Tpdf %s.dot -o %s.pdf' % (filename, filename))
69 os.system('dot -Tpng %s.dot -o %s.png' % (filename, filename))
70 print("Writing graph to %s.pdf" % filename)
71
72
73 def dot_graph(d, filename='mydask', **kwargs):
74 dg = to_networkx(d, **kwargs)
75 write_networkx_to_dot(dg, filename=filename)
76
77
78 if __name__ == '__main__':
79 def add(x, y):
80 return x + y
81 def inc(x):
82 return x + 1
83
84 dsk = {'x': 1, 'y': (inc, 'x'),
85 'a': 2, 'b': (inc, 'a'),
86 'z': (add, 'y', 'b')}
87
88 dot_graph(dsk)
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dask/dot.py b/dask/dot.py
--- a/dask/dot.py
+++ b/dask/dot.py
@@ -60,7 +60,11 @@
def write_networkx_to_dot(dg, filename='mydask'):
import os
- p = nx.to_pydot(dg)
+ try:
+ p = nx.to_pydot(dg)
+ except AttributeError:
+ raise ImportError("Can not find pydot module. Please install.\n"
+ " pip install pydot")
p.set_rankdir('BT')
with open(filename + '.dot', 'w') as f:
f.write(p.to_string())
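
The patch deliberately keeps the old `nx.to_pydot` call and only converts the confusing `AttributeError` into an actionable `ImportError`. The attribute is missing because newer networkx releases expose the pydot bridge as `networkx.drawing.nx_pydot.to_pydot` instead of at the top level; a small compatibility sketch (not part of the patch, and assuming pydot is installed) that tries both locations:

```python
# Compatibility sketch: fall back to networkx.drawing.nx_pydot when the
# top-level helper is gone. Assumes pydot is installed; otherwise the
# fallback import raises its own ImportError.
import networkx as nx


def to_pydot_compat(graph):
    try:
        return nx.to_pydot(graph)           # older networkx
    except AttributeError:
        from networkx.drawing import nx_pydot
        return nx_pydot.to_pydot(graph)     # newer networkx


if __name__ == "__main__":
    g = nx.DiGraph()
    g.add_edge("x", "y")
    print(to_pydot_compat(g).to_string())
```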
| {"golden_diff": "diff --git a/dask/dot.py b/dask/dot.py\n--- a/dask/dot.py\n+++ b/dask/dot.py\n@@ -60,7 +60,11 @@\n \n def write_networkx_to_dot(dg, filename='mydask'):\n import os\n- p = nx.to_pydot(dg)\n+ try:\n+ p = nx.to_pydot(dg)\n+ except AttributeError:\n+ raise ImportError(\"Can not find pydot module. Please install.\\n\"\n+ \" pip install pydot\")\n p.set_rankdir('BT')\n with open(filename + '.dot', 'w') as f:\n f.write(p.to_string())\n", "issue": "dot_graph does not work in stable version\nI try to generate visual graphs as [described in documentation](http://dask.pydata.org/en/latest/inspect.html), but get:\n`'module' object has no attribute 'to_pydot'`\n\nThe graphviz is installed with homebrew. Dask is installed from conda (latest stable release):\n\n```\nIn [15]: dask.__version__\n\nOut[15]: '0.5.0'\n```\n\nThe code and traceback are below (I had to replace `blockshape` with `chunks`, otherwise it did not create task graph):\n\n``` python\nIn [1]: \nimport dask.array as da\nfrom dask.dot import dot_graph\nIn [2]:\n\nx = da.ones((5, 15), chunks=(5, 5))\nIn [5]:\n\nd = (x + 1).dask\nIn [6]:\n\ndot_graph(d)\n---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\n<ipython-input-6-c797e633866d> in <module>()\n----> 1 dot_graph(d)\n\n/Users/koldunov/miniconda/lib/python2.7/site-packages/dask/dot.pyc in dot_graph(d, filename, **kwargs)\n 73 def dot_graph(d, filename='mydask', **kwargs):\n 74 dg = to_networkx(d, **kwargs)\n---> 75 write_networkx_to_dot(dg, filename=filename)\n 76 \n 77 \n\n/Users/koldunov/miniconda/lib/python2.7/site-packages/dask/dot.pyc in write_networkx_to_dot(dg, filename)\n 61 def write_networkx_to_dot(dg, filename='mydask'):\n 62 import os\n---> 63 p = nx.to_pydot(dg)\n 64 p.set_rankdir('BT')\n 65 with open(filename + '.dot', 'w') as f:\n\nAttributeError: 'module' object has no attribute 'to_pydot'\n```\n\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport networkx as nx\nfrom dask.core import istask, get_dependencies\n\n\ndef make_hashable(x):\n try:\n hash(x)\n return x\n except TypeError:\n return hash(str(x))\n\n\ndef lower(func):\n while hasattr(func, 'func'):\n func = func.func\n return func\n\ndef name(func):\n try:\n return lower(func).__name__\n except AttributeError:\n return 'func'\n\n\ndef to_networkx(d, data_attributes=None, function_attributes=None):\n if data_attributes is None:\n data_attributes = dict()\n if function_attributes is None:\n function_attributes = dict()\n\n g = nx.DiGraph()\n\n for k, v in sorted(d.items(), key=lambda x: x[0]):\n g.add_node(k, shape='box', **data_attributes.get(k, dict()))\n if istask(v):\n func, args = v[0], v[1:]\n func_node = make_hashable((v, 'function'))\n g.add_node(func_node,\n shape='circle',\n label=name(func),\n **function_attributes.get(k, dict()))\n g.add_edge(func_node, k)\n for dep in sorted(get_dependencies(d, k)):\n arg2 = make_hashable(dep)\n g.add_node(arg2,\n label=str(dep),\n shape='box',\n **data_attributes.get(dep, dict()))\n g.add_edge(arg2, func_node)\n else:\n if v not in d:\n g.add_node(k, label='%s=%s' % (k, v), **data_attributes.get(k, dict()))\n else: # alias situation\n g.add_edge(v, k)\n\n return g\n\n\ndef write_networkx_to_dot(dg, filename='mydask'):\n import os\n p = nx.to_pydot(dg)\n p.set_rankdir('BT')\n with open(filename + '.dot', 'w') as f:\n f.write(p.to_string())\n\n os.system('dot -Tpdf %s.dot -o %s.pdf' % (filename, filename))\n os.system('dot 
-Tpng %s.dot -o %s.png' % (filename, filename))\n print(\"Writing graph to %s.pdf\" % filename)\n\n\ndef dot_graph(d, filename='mydask', **kwargs):\n dg = to_networkx(d, **kwargs)\n write_networkx_to_dot(dg, filename=filename)\n\n\nif __name__ == '__main__':\n def add(x, y):\n return x + y\n def inc(x):\n return x + 1\n\n dsk = {'x': 1, 'y': (inc, 'x'),\n 'a': 2, 'b': (inc, 'a'),\n 'z': (add, 'y', 'b')}\n\n dot_graph(dsk)\n", "path": "dask/dot.py"}], "after_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport networkx as nx\nfrom dask.core import istask, get_dependencies\n\n\ndef make_hashable(x):\n try:\n hash(x)\n return x\n except TypeError:\n return hash(str(x))\n\n\ndef lower(func):\n while hasattr(func, 'func'):\n func = func.func\n return func\n\ndef name(func):\n try:\n return lower(func).__name__\n except AttributeError:\n return 'func'\n\n\ndef to_networkx(d, data_attributes=None, function_attributes=None):\n if data_attributes is None:\n data_attributes = dict()\n if function_attributes is None:\n function_attributes = dict()\n\n g = nx.DiGraph()\n\n for k, v in sorted(d.items(), key=lambda x: x[0]):\n g.add_node(k, shape='box', **data_attributes.get(k, dict()))\n if istask(v):\n func, args = v[0], v[1:]\n func_node = make_hashable((v, 'function'))\n g.add_node(func_node,\n shape='circle',\n label=name(func),\n **function_attributes.get(k, dict()))\n g.add_edge(func_node, k)\n for dep in sorted(get_dependencies(d, k)):\n arg2 = make_hashable(dep)\n g.add_node(arg2,\n label=str(dep),\n shape='box',\n **data_attributes.get(dep, dict()))\n g.add_edge(arg2, func_node)\n else:\n if v not in d:\n g.add_node(k, label='%s=%s' % (k, v), **data_attributes.get(k, dict()))\n else: # alias situation\n g.add_edge(v, k)\n\n return g\n\n\ndef write_networkx_to_dot(dg, filename='mydask'):\n import os\n try:\n p = nx.to_pydot(dg)\n except AttributeError:\n raise ImportError(\"Can not find pydot module. Please install.\\n\"\n \" pip install pydot\")\n p.set_rankdir('BT')\n with open(filename + '.dot', 'w') as f:\n f.write(p.to_string())\n\n os.system('dot -Tpdf %s.dot -o %s.pdf' % (filename, filename))\n os.system('dot -Tpng %s.dot -o %s.png' % (filename, filename))\n print(\"Writing graph to %s.pdf\" % filename)\n\n\ndef dot_graph(d, filename='mydask', **kwargs):\n dg = to_networkx(d, **kwargs)\n write_networkx_to_dot(dg, filename=filename)\n\n\nif __name__ == '__main__':\n def add(x, y):\n return x + y\n def inc(x):\n return x + 1\n\n dsk = {'x': 1, 'y': (inc, 'x'),\n 'a': 2, 'b': (inc, 'a'),\n 'z': (add, 'y', 'b')}\n\n dot_graph(dsk)\n", "path": "dask/dot.py"}]} | 1,505 | 151 |
gh_patches_debug_18251 | rasdani/github-patches | git_diff | qtile__qtile-2924 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docs are failing to build (again)
See: https://readthedocs.org/projects/qtile/builds/15011707/
Looks like this is a dependency issue related to pywlroots.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Qtile documentation build configuration file, created by
4 # sphinx-quickstart on Sat Feb 11 15:20:21 2012.
5 #
6 # This file is execfile()d with the current directory set to its containing dir.
7 #
8 # Note that not all possible configuration values are present in this
9 # autogenerated file.
10 #
11 # All configuration values have a default; values that are commented out
12 # serve to show the default.
13
14 import os
15 import setuptools_scm
16 import sys
17 from unittest.mock import MagicMock
18
19
20 class Mock(MagicMock):
21 # xcbq does a dir() on objects and pull stuff out of them and tries to sort
22 # the result. MagicMock has a bunch of stuff that can't be sorted, so let's
23 # like about dir().
24 def __dir__(self):
25 return []
26
27 MOCK_MODULES = [
28 'libqtile._ffi_pango',
29 'libqtile.backend.x11._ffi_xcursors',
30 'libqtile.widget._pulse_audio',
31 'cairocffi',
32 'cairocffi.xcb',
33 'cairocffi.pixbuf',
34 'cffi',
35 'dateutil',
36 'dateutil.parser',
37 'dbus_next',
38 'dbus_next.aio',
39 'dbus_next.errors',
40 'dbus_next.service',
41 'dbus_next.constants',
42 'iwlib',
43 'keyring',
44 'mpd',
45 'psutil',
46 'trollius',
47 'xcffib',
48 'xcffib.randr',
49 'xcffib.render',
50 'xcffib.wrappers',
51 'xcffib.xfixes',
52 'xcffib.xinerama',
53 'xcffib.xproto',
54 'xdg.IconTheme',
55 ]
56 sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
57
58 # If extensions (or modules to document with autodoc) are in another directory,
59 # add these directories to sys.path here. If the directory is relative to the
60 # documentation root, use os.path.abspath to make it absolute, like shown here.
61 sys.path.insert(0, os.path.abspath('.'))
62 sys.path.insert(0, os.path.abspath('../'))
63
64 # -- General configuration -----------------------------------------------------
65
66 # If your documentation needs a minimal Sphinx version, state it here.
67 #needs_sphinx = '1.0'
68
69 # Add any Sphinx extension module names here, as strings. They can be extensions
70 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
71 extensions = [
72 'sphinx.ext.autodoc',
73 'sphinx.ext.autosummary',
74 'sphinx.ext.coverage',
75 'sphinx.ext.graphviz',
76 'sphinx.ext.todo',
77 'sphinx.ext.viewcode',
78 'sphinxcontrib.seqdiag',
79 'sphinx_qtile',
80 'numpydoc',
81 ]
82
83 numpydoc_show_class_members = False
84
85 # Add any paths that contain templates here, relative to this directory.
86 templates_path = []
87
88 # The suffix of source filenames.
89 source_suffix = '.rst'
90
91 # The encoding of source files.
92 #source_encoding = 'utf-8-sig'
93
94 # The master toctree document.
95 master_doc = 'index'
96
97 # General information about the project.
98 project = u'Qtile'
99 copyright = u'2008-2021, Aldo Cortesi and contributers'
100
101 # The version info for the project you're documenting, acts as replacement for
102 # |version| and |release|, also used in various other places throughout the
103 # built documents.
104 #
105 # The short X.Y version.
106 version = setuptools_scm.get_version(root="..")
107 # The full version, including alpha/beta/rc tags.
108 release = version
109
110 # The language for content autogenerated by Sphinx. Refer to documentation
111 # for a list of supported languages.
112 #language = None
113
114 # There are two options for replacing |today|: either, you set today to some
115 # non-false value, then it is used:
116 #today = ''
117 # Else, today_fmt is used as the format for a strftime call.
118 #today_fmt = '%B %d, %Y'
119
120 # List of patterns, relative to source directory, that match files and
121 # directories to ignore when looking for source files.
122 exclude_patterns = ['_build']
123
124 # The reST default role (used for this markup: `text`) to use for all documents.
125 #default_role = None
126
127 # If true, '()' will be appended to :func: etc. cross-reference text.
128 #add_function_parentheses = True
129
130 # If true, the current module name will be prepended to all description
131 # unit titles (such as .. function::).
132 #add_module_names = True
133
134 # If true, sectionauthor and moduleauthor directives will be shown in the
135 # output. They are ignored by default.
136 #show_authors = False
137
138 # The name of the Pygments (syntax highlighting) style to use.
139 pygments_style = 'sphinx'
140
141 # A list of ignored prefixes for module index sorting.
142 #modindex_common_prefix = []
143
144 # If true, `todo` and `todoList` produce output, else they produce nothing.
145 todo_include_todos = True
146
147
148 # -- Options for HTML output --------fautod-------------------------------------------
149
150 # The theme to use for HTML and HTML Help pages. See the documentation for
151 # a list of builtin themes.
152 #html_theme = 'default'
153
154 # Theme options are theme-specific and customize the look and feel of a theme
155 # further. For a list of options available for each theme, see the
156 # documentation.
157 #html_theme_options = {}
158
159 # Add any paths that contain custom themes here, relative to this directory.
160 #html_theme_path = []
161
162 # The name for this set of Sphinx documents. If None, it defaults to
163 # "<project> v<release> documentation".
164 #html_title = None
165
166 # A shorter title for the navigation bar. Default is the same as html_title.
167 #html_short_title = None
168
169 # The name of an image file (relative to this directory) to place at the top
170 # of the sidebar.
171 #html_logo = None
172
173 # The name of an image file (within the static path) to use as favicon of the
174 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
175 # pixels large.
176 html_favicon = '_static/favicon.ico'
177
178 # Add any paths that contain custom static files (such as style sheets) here,
179 # relative to this directory. They are copied after the builtin static files,
180 # so a file named "default.css" will overwrite the builtin "default.css".
181 html_static_path = ['_static']
182
183 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
184 # using the given strftime format.
185 #html_last_updated_fmt = '%b %d, %Y'
186
187 # If true, SmartyPants will be used to convert quotes and dashes to
188 # typographically correct entities.
189 #html_use_smartypants = True
190
191 # Custom sidebar templates, maps document names to template names.
192 #html_sidebars = {}
193
194 # Additional templates that should be rendered to pages, maps page names to
195 # template names.
196 #html_additional_pages = {'index': 'index.html'}
197
198 # If false, no module index is generated.
199 #html_domain_indices = True
200
201 # If false, no index is generated.
202 html_use_index = True
203
204 # If true, the index is split into individual pages for each letter.
205 #html_split_index = False
206
207 # If true, links to the reST sources are added to the pages.
208 #html_show_sourcelink = True
209
210 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
211 #html_show_sphinx = True
212
213 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
214 #html_show_copyright = True
215
216 # If true, an OpenSearch description file will be output, and all pages will
217 # contain a <link> tag referring to it. The value of this option must be the
218 # base URL from which the finished HTML is served.
219 #html_use_opensearch = ''
220
221 # This is the file name suffix for HTML files (e.g. ".xhtml").
222 #html_file_suffix = None
223
224 # Output file base name for HTML help builder.
225 htmlhelp_basename = 'Qtiledoc'
226
227
228 # -- Options for LaTeX output --------------------------------------------------
229
230 latex_elements = {
231 # The paper size ('letterpaper' or 'a4paper').
232 #'papersize': 'letterpaper',
233
234 # The font size ('10pt', '11pt' or '12pt').
235 #'pointsize': '10pt',
236
237 # Additional stuff for the LaTeX preamble.
238 #'preamble': '',
239 }
240
241 # Grouping the document tree into LaTeX files. List of tuples
242 # (source start file, target name, title, author, documentclass [howto/manual]).
243 latex_documents = [
244 ('index', 'Qtile.tex', u'Qtile Documentation',
245 u'Aldo Cortesi', 'manual'),
246 ]
247
248 # The name of an image file (relative to this directory) to place at the top of
249 # the title page.
250 #latex_logo = None
251
252 # For "manual" documents, if this is true, then toplevel headings are parts,
253 # not chapters.
254 #latex_use_parts = False
255
256 # If true, show page references after internal links.
257 #latex_show_pagerefs = False
258
259 # If true, show URL addresses after external links.
260 #latex_show_urls = False
261
262 # Documents to append as an appendix to all manuals.
263 #latex_appendices = []
264
265 # If false, no module index is generated.
266 #latex_domain_indices = True
267
268
269 # -- Options for manual page output --------------------------------------------
270
271 # One entry per manual page. List of tuples
272 # (source start file, name, description, authors, manual section).
273 #man_pages = []
274
275 # If true, show URL addresses after external links.
276 #man_show_urls = False
277
278
279 # -- Options for Texinfo output ------------------------------------------------
280
281 # Grouping the document tree into Texinfo files. List of tuples
282 # (source start file, target name, title, author,
283 # dir menu entry, description, category)
284 texinfo_documents = [
285 ('index', 'Qtile', u'Qtile Documentation',
286 u'Aldo Cortesi', 'Qtile', 'A hackable tiling window manager.',
287 'Miscellaneous'),
288 ]
289
290 # Documents to append as an appendix to all manuals.
291 #texinfo_appendices = []
292
293 # If false, no module index is generated.
294 #texinfo_domain_indices = True
295
296 # How to display URL addresses: 'footnote', 'no', or 'inline'.
297 #texinfo_show_urls = 'footnote'
298
299 # only import and set the theme if we're building docs locally
300 if not os.environ.get('READTHEDOCS'):
301 import sphinx_rtd_theme
302 html_theme = 'sphinx_rtd_theme'
303 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
304
305
306 graphviz_dot_args = ['-Lg']
307
308 # A workaround for the responsive tables always having annoying scrollbars.
309 def setup(app):
310 app.add_css_file("no_scrollbars.css")
311
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -43,7 +43,25 @@
'keyring',
'mpd',
'psutil',
- 'trollius',
+ 'pywayland',
+ 'pywayland.protocol.wayland',
+ 'pywayland.server',
+ 'wlroots',
+ 'wlroots.helper',
+ 'wlroots.util',
+ 'wlroots.util.box',
+ 'wlroots.util.clock',
+ 'wlroots.util.edges',
+ 'wlroots.util.region',
+ 'wlroots.wlr_types',
+ 'wlroots.wlr_types.cursor',
+ 'wlroots.wlr_types.keyboard',
+ 'wlroots.wlr_types.layer_shell_v1',
+ 'wlroots.wlr_types.output_management_v1',
+ 'wlroots.wlr_types.pointer_constraints_v1',
+ 'wlroots.wlr_types.server_decoration',
+ 'wlroots.wlr_types.virtual_keyboard_v1',
+ 'wlroots.wlr_types.xdg_shell',
'xcffib',
'xcffib.randr',
'xcffib.render',
@@ -52,6 +70,7 @@
'xcffib.xinerama',
'xcffib.xproto',
'xdg.IconTheme',
+ 'xkbcommon'
]
sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -43,7 +43,25 @@\n 'keyring',\n 'mpd',\n 'psutil',\n- 'trollius',\n+ 'pywayland',\n+ 'pywayland.protocol.wayland',\n+ 'pywayland.server',\n+ 'wlroots',\n+ 'wlroots.helper',\n+ 'wlroots.util',\n+ 'wlroots.util.box',\n+ 'wlroots.util.clock',\n+ 'wlroots.util.edges',\n+ 'wlroots.util.region',\n+ 'wlroots.wlr_types',\n+ 'wlroots.wlr_types.cursor',\n+ 'wlroots.wlr_types.keyboard',\n+ 'wlroots.wlr_types.layer_shell_v1',\n+ 'wlroots.wlr_types.output_management_v1',\n+ 'wlroots.wlr_types.pointer_constraints_v1',\n+ 'wlroots.wlr_types.server_decoration',\n+ 'wlroots.wlr_types.virtual_keyboard_v1',\n+ 'wlroots.wlr_types.xdg_shell',\n 'xcffib',\n 'xcffib.randr',\n 'xcffib.render',\n@@ -52,6 +70,7 @@\n 'xcffib.xinerama',\n 'xcffib.xproto',\n 'xdg.IconTheme',\n+ 'xkbcommon'\n ]\n sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)\n", "issue": "Docs are failing to build (again)\nSee: https://readthedocs.org/projects/qtile/builds/15011707/\r\n\r\nLooks like this is a dependency issue related to pywlroots.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Qtile documentation build configuration file, created by\n# sphinx-quickstart on Sat Feb 11 15:20:21 2012.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport os\nimport setuptools_scm\nimport sys\nfrom unittest.mock import MagicMock\n\n\nclass Mock(MagicMock):\n # xcbq does a dir() on objects and pull stuff out of them and tries to sort\n # the result. MagicMock has a bunch of stuff that can't be sorted, so let's\n # like about dir().\n def __dir__(self):\n return []\n\nMOCK_MODULES = [\n 'libqtile._ffi_pango',\n 'libqtile.backend.x11._ffi_xcursors',\n 'libqtile.widget._pulse_audio',\n 'cairocffi',\n 'cairocffi.xcb',\n 'cairocffi.pixbuf',\n 'cffi',\n 'dateutil',\n 'dateutil.parser',\n 'dbus_next',\n 'dbus_next.aio',\n 'dbus_next.errors',\n 'dbus_next.service',\n 'dbus_next.constants',\n 'iwlib',\n 'keyring',\n 'mpd',\n 'psutil',\n 'trollius',\n 'xcffib',\n 'xcffib.randr',\n 'xcffib.render',\n 'xcffib.wrappers',\n 'xcffib.xfixes',\n 'xcffib.xinerama',\n 'xcffib.xproto',\n 'xdg.IconTheme',\n]\nsys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('.'))\nsys.path.insert(0, os.path.abspath('../'))\n\n# -- General configuration -----------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.coverage',\n 'sphinx.ext.graphviz',\n 'sphinx.ext.todo',\n 'sphinx.ext.viewcode',\n 'sphinxcontrib.seqdiag',\n 'sphinx_qtile',\n 'numpydoc',\n]\n\nnumpydoc_show_class_members = False\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = []\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Qtile'\ncopyright = u'2008-2021, Aldo Cortesi and contributers'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = setuptools_scm.get_version(root=\"..\")\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n\n# -- Options for HTML output --------fautod-------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#html_theme = 'default'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. 
This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = '_static/favicon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {'index': 'index.html'}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\nhtml_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Qtiledoc'\n\n\n# -- Options for LaTeX output --------------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual]).\nlatex_documents = [\n ('index', 'Qtile.tex', u'Qtile Documentation',\n u'Aldo Cortesi', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\n#man_pages = []\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output ------------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'Qtile', u'Qtile Documentation',\n u'Aldo Cortesi', 'Qtile', 'A hackable tiling window manager.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# only import and set the theme if we're building docs locally\nif not os.environ.get('READTHEDOCS'):\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n\ngraphviz_dot_args = ['-Lg']\n\n# A workaround for the responsive tables always having annoying scrollbars.\ndef setup(app):\n app.add_css_file(\"no_scrollbars.css\")\n", "path": "docs/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Qtile documentation build configuration file, created by\n# sphinx-quickstart on Sat Feb 11 15:20:21 2012.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport os\nimport setuptools_scm\nimport sys\nfrom unittest.mock import MagicMock\n\n\nclass Mock(MagicMock):\n # xcbq does a dir() on objects and pull stuff out of them and tries to sort\n # the result. MagicMock has a bunch of stuff that can't be sorted, so let's\n # like about dir().\n def __dir__(self):\n return []\n\nMOCK_MODULES = [\n 'libqtile._ffi_pango',\n 'libqtile.backend.x11._ffi_xcursors',\n 'libqtile.widget._pulse_audio',\n 'cairocffi',\n 'cairocffi.xcb',\n 'cairocffi.pixbuf',\n 'cffi',\n 'dateutil',\n 'dateutil.parser',\n 'dbus_next',\n 'dbus_next.aio',\n 'dbus_next.errors',\n 'dbus_next.service',\n 'dbus_next.constants',\n 'iwlib',\n 'keyring',\n 'mpd',\n 'psutil',\n 'pywayland',\n 'pywayland.protocol.wayland',\n 'pywayland.server',\n 'wlroots',\n 'wlroots.helper',\n 'wlroots.util',\n 'wlroots.util.box',\n 'wlroots.util.clock',\n 'wlroots.util.edges',\n 'wlroots.util.region',\n 'wlroots.wlr_types',\n 'wlroots.wlr_types.cursor',\n 'wlroots.wlr_types.keyboard',\n 'wlroots.wlr_types.layer_shell_v1',\n 'wlroots.wlr_types.output_management_v1',\n 'wlroots.wlr_types.pointer_constraints_v1',\n 'wlroots.wlr_types.server_decoration',\n 'wlroots.wlr_types.virtual_keyboard_v1',\n 'wlroots.wlr_types.xdg_shell',\n 'xcffib',\n 'xcffib.randr',\n 'xcffib.render',\n 'xcffib.wrappers',\n 'xcffib.xfixes',\n 'xcffib.xinerama',\n 'xcffib.xproto',\n 'xdg.IconTheme',\n 'xkbcommon'\n]\nsys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('.'))\nsys.path.insert(0, os.path.abspath('../'))\n\n# -- General configuration -----------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.coverage',\n 'sphinx.ext.graphviz',\n 'sphinx.ext.todo',\n 'sphinx.ext.viewcode',\n 'sphinxcontrib.seqdiag',\n 'sphinx_qtile',\n 'numpydoc',\n]\n\nnumpydoc_show_class_members = False\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = []\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Qtile'\ncopyright = u'2008-2021, Aldo Cortesi and contributers'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = setuptools_scm.get_version(root=\"..\")\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n\n# -- Options for HTML output --------fautod-------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#html_theme = 'default'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. 
This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = '_static/favicon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {'index': 'index.html'}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\nhtml_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Qtiledoc'\n\n\n# -- Options for LaTeX output --------------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual]).\nlatex_documents = [\n ('index', 'Qtile.tex', u'Qtile Documentation',\n u'Aldo Cortesi', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\n#man_pages = []\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output ------------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'Qtile', u'Qtile Documentation',\n u'Aldo Cortesi', 'Qtile', 'A hackable tiling window manager.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# only import and set the theme if we're building docs locally\nif not os.environ.get('READTHEDOCS'):\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n\ngraphviz_dot_args = ['-Lg']\n\n# A workaround for the responsive tables always having annoying scrollbars.\ndef setup(app):\n app.add_css_file(\"no_scrollbars.css\")\n", "path": "docs/conf.py"}]} | 3,562 | 322 |
gh_patches_debug_20616 | rasdani/github-patches | git_diff | rasterio__rasterio-1259 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
examples/total.py won't run in Python3
The line `total /= 3` should instead read `total = total / 3`.
--- END ISSUE ---
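For context, the failure is easy to reproduce with a plain NumPy integer array like the one `examples/total.py` builds. The snippet below is an illustrative sketch (the array shape and fill value are made up), not part of the rasterio example itself:
```
import numpy as np

# Stand-in for the uint16 accumulator built in examples/total.py.
total = np.full((2, 3), 300, dtype=np.uint16)

try:
    # Under Python 3, "/" is true division, so the float result cannot be
    # written back into the integer array in place.
    total /= 3
except TypeError as exc:
    print("in-place true division fails:", exc)

# Floor division keeps the uint16 dtype and behaves the same on Python 2 and 3.
total = total // 3
print(total.dtype, total)
```
Floor division is also what the accepted diff below switches to (`total = total // 3`), since `total = total / 3` would silently promote the array to float64.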
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/sieve.py`
Content:
```
1 #!/usr/bin/env python
2 #
3 # sieve: demonstrate sieving and polygonizing of raster features.
4
5 import subprocess
6
7 import numpy as np
8 import rasterio
9 from rasterio.features import sieve, shapes
10
11
12 # Register GDAL and OGR drivers.
13 with rasterio.Env():
14
15 # Read a raster to be sieved.
16 with rasterio.open('tests/data/shade.tif') as src:
17 shade = src.read(1)
18
19 # Print the number of shapes in the source raster.
20 print("Slope shapes: %d" % len(list(shapes(shade))))
21
22 # Sieve out features 13 pixels or smaller.
23 sieved = sieve(shade, 13, out=np.zeros(src.shape, src.dtypes[0]))
24
25 # Print the number of shapes in the sieved raster.
26 print("Sieved (13) shapes: %d" % len(list(shapes(sieved))))
27
28 # Write out the sieved raster.
29 kwargs = src.meta
30 kwargs['transform'] = kwargs.pop('affine')
31 with rasterio.open('example-sieved.tif', 'w', **kwargs) as dst:
32 dst.write(sieved, indexes=1)
33
34 # Dump out gdalinfo's report card and open (or "eog") the TIFF.
35 print(subprocess.check_output(
36 ['gdalinfo', '-stats', 'example-sieved.tif']))
37 subprocess.call(['open', 'example-sieved.tif'])
38
```
Path: `examples/total.py`
Content:
```
1 import numpy as np
2 import rasterio
3 import subprocess
4
5 with rasterio.Env(CPL_DEBUG=True):
6
7 # Read raster bands directly to Numpy arrays.
8 with rasterio.open('tests/data/RGB.byte.tif') as src:
9 r, g, b = src.read()
10
11 # Combine arrays using the 'iadd' ufunc. Expecting that the sum will
12 # exceed the 8-bit integer range, initialize it as 16-bit. Adding other
13 # arrays to it in-place converts those arrays up and preserves the type
14 # of the total array.
15 total = np.zeros(r.shape, dtype=rasterio.uint16)
16 for band in (r, g, b):
17 total += band
18 total /= 3
19
20 # Write the product as a raster band to a new 8-bit file. For keyword
21 # arguments, we start with the meta attributes of the source file, but
22 # then change the band count to 1, set the dtype to uint8, and specify
23 # LZW compression.
24 kwargs = src.meta
25 kwargs.update(
26 dtype=rasterio.uint8,
27 count=1,
28 compress='lzw')
29
30 with rasterio.open('example-total.tif', 'w', **kwargs) as dst:
31 dst.write(total.astype(rasterio.uint8), indexes=1)
32
33 # Dump out gdalinfo's report card and open the image.
34 info = subprocess.check_output(
35 ['gdalinfo', '-stats', 'example-total.tif'])
36 print(info)
37 subprocess.call(['open', 'example-total.tif'])
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/sieve.py b/examples/sieve.py
--- a/examples/sieve.py
+++ b/examples/sieve.py
@@ -27,7 +27,7 @@
# Write out the sieved raster.
kwargs = src.meta
- kwargs['transform'] = kwargs.pop('affine')
+ kwargs['transform'] = rasterio.transform.guard_transform(kwargs['transform'])
with rasterio.open('example-sieved.tif', 'w', **kwargs) as dst:
dst.write(sieved, indexes=1)
diff --git a/examples/total.py b/examples/total.py
--- a/examples/total.py
+++ b/examples/total.py
@@ -1,3 +1,4 @@
+from __future__ import division
import numpy as np
import rasterio
import subprocess
@@ -15,7 +16,7 @@
total = np.zeros(r.shape, dtype=rasterio.uint16)
for band in (r, g, b):
total += band
- total /= 3
+ total = total // 3
# Write the product as a raster band to a new 8-bit file. For keyword
# arguments, we start with the meta attributes of the source file, but
| {"golden_diff": "diff --git a/examples/sieve.py b/examples/sieve.py\n--- a/examples/sieve.py\n+++ b/examples/sieve.py\n@@ -27,7 +27,7 @@\n \n # Write out the sieved raster.\n kwargs = src.meta\n- kwargs['transform'] = kwargs.pop('affine')\n+ kwargs['transform'] = rasterio.transform.guard_transform(kwargs['transform'])\n with rasterio.open('example-sieved.tif', 'w', **kwargs) as dst:\n dst.write(sieved, indexes=1)\n \ndiff --git a/examples/total.py b/examples/total.py\n--- a/examples/total.py\n+++ b/examples/total.py\n@@ -1,3 +1,4 @@\n+from __future__ import division\n import numpy as np\n import rasterio\n import subprocess\n@@ -15,7 +16,7 @@\n total = np.zeros(r.shape, dtype=rasterio.uint16)\n for band in (r, g, b):\n total += band\n- total /= 3\n+ total = total // 3\n \n # Write the product as a raster band to a new 8-bit file. For keyword\n # arguments, we start with the meta attributes of the source file, but\n", "issue": "examples/total.py won't run in Python3\nThe line `total /= 3` should read instead, `total = total / 3`.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# sieve: demonstrate sieving and polygonizing of raster features.\n\nimport subprocess\n\nimport numpy as np\nimport rasterio\nfrom rasterio.features import sieve, shapes\n\n\n# Register GDAL and OGR drivers.\nwith rasterio.Env():\n\n # Read a raster to be sieved.\n with rasterio.open('tests/data/shade.tif') as src:\n shade = src.read(1)\n\n # Print the number of shapes in the source raster.\n print(\"Slope shapes: %d\" % len(list(shapes(shade))))\n\n # Sieve out features 13 pixels or smaller.\n sieved = sieve(shade, 13, out=np.zeros(src.shape, src.dtypes[0]))\n\n # Print the number of shapes in the sieved raster.\n print(\"Sieved (13) shapes: %d\" % len(list(shapes(sieved))))\n\n # Write out the sieved raster.\n kwargs = src.meta\n kwargs['transform'] = kwargs.pop('affine')\n with rasterio.open('example-sieved.tif', 'w', **kwargs) as dst:\n dst.write(sieved, indexes=1)\n\n# Dump out gdalinfo's report card and open (or \"eog\") the TIFF.\nprint(subprocess.check_output(\n ['gdalinfo', '-stats', 'example-sieved.tif']))\nsubprocess.call(['open', 'example-sieved.tif'])\n", "path": "examples/sieve.py"}, {"content": "import numpy as np\nimport rasterio\nimport subprocess\n\nwith rasterio.Env(CPL_DEBUG=True):\n\n # Read raster bands directly to Numpy arrays.\n with rasterio.open('tests/data/RGB.byte.tif') as src:\n r, g, b = src.read()\n\n # Combine arrays using the 'iadd' ufunc. Expecting that the sum will\n # exceed the 8-bit integer range, initialize it as 16-bit. Adding other\n # arrays to it in-place converts those arrays up and preserves the type\n # of the total array.\n total = np.zeros(r.shape, dtype=rasterio.uint16)\n for band in (r, g, b):\n total += band\n total /= 3\n\n # Write the product as a raster band to a new 8-bit file. 
For keyword\n # arguments, we start with the meta attributes of the source file, but\n # then change the band count to 1, set the dtype to uint8, and specify\n # LZW compression.\n kwargs = src.meta\n kwargs.update(\n dtype=rasterio.uint8,\n count=1,\n compress='lzw')\n\n with rasterio.open('example-total.tif', 'w', **kwargs) as dst:\n dst.write(total.astype(rasterio.uint8), indexes=1)\n\n# Dump out gdalinfo's report card and open the image.\ninfo = subprocess.check_output(\n ['gdalinfo', '-stats', 'example-total.tif'])\nprint(info)\nsubprocess.call(['open', 'example-total.tif'])\n", "path": "examples/total.py"}], "after_files": [{"content": "#!/usr/bin/env python\n#\n# sieve: demonstrate sieving and polygonizing of raster features.\n\nimport subprocess\n\nimport numpy as np\nimport rasterio\nfrom rasterio.features import sieve, shapes\n\n\n# Register GDAL and OGR drivers.\nwith rasterio.Env():\n\n # Read a raster to be sieved.\n with rasterio.open('tests/data/shade.tif') as src:\n shade = src.read(1)\n\n # Print the number of shapes in the source raster.\n print(\"Slope shapes: %d\" % len(list(shapes(shade))))\n\n # Sieve out features 13 pixels or smaller.\n sieved = sieve(shade, 13, out=np.zeros(src.shape, src.dtypes[0]))\n\n # Print the number of shapes in the sieved raster.\n print(\"Sieved (13) shapes: %d\" % len(list(shapes(sieved))))\n\n # Write out the sieved raster.\n kwargs = src.meta\n kwargs['transform'] = rasterio.transform.guard_transform(kwargs['transform'])\n with rasterio.open('example-sieved.tif', 'w', **kwargs) as dst:\n dst.write(sieved, indexes=1)\n\n# Dump out gdalinfo's report card and open (or \"eog\") the TIFF.\nprint(subprocess.check_output(\n ['gdalinfo', '-stats', 'example-sieved.tif']))\nsubprocess.call(['open', 'example-sieved.tif'])\n", "path": "examples/sieve.py"}, {"content": "from __future__ import division\nimport numpy as np\nimport rasterio\nimport subprocess\n\nwith rasterio.Env(CPL_DEBUG=True):\n\n # Read raster bands directly to Numpy arrays.\n with rasterio.open('tests/data/RGB.byte.tif') as src:\n r, g, b = src.read()\n\n # Combine arrays using the 'iadd' ufunc. Expecting that the sum will\n # exceed the 8-bit integer range, initialize it as 16-bit. Adding other\n # arrays to it in-place converts those arrays up and preserves the type\n # of the total array.\n total = np.zeros(r.shape, dtype=rasterio.uint16)\n for band in (r, g, b):\n total += band\n total = total // 3\n\n # Write the product as a raster band to a new 8-bit file. For keyword\n # arguments, we start with the meta attributes of the source file, but\n # then change the band count to 1, set the dtype to uint8, and specify\n # LZW compression.\n kwargs = src.meta\n kwargs.update(\n dtype=rasterio.uint8,\n count=1,\n compress='lzw')\n\n with rasterio.open('example-total.tif', 'w', **kwargs) as dst:\n dst.write(total.astype(rasterio.uint8), indexes=1)\n\n# Dump out gdalinfo's report card and open the image.\ninfo = subprocess.check_output(\n ['gdalinfo', '-stats', 'example-total.tif'])\nprint(info)\nsubprocess.call(['open', 'example-total.tif'])\n", "path": "examples/total.py"}]} | 1,090 | 271 |
gh_patches_debug_58650 | rasdani/github-patches | git_diff | googleapis__google-api-python-client-295 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BatchError is unprintable using default constructor (one string)
This one should be pretty simple, I hope.
Here's the constructor signature: `def __init__(self, reason, resp=None, content=None):`. It doesn't require `resp` to be defined, and I can see it is left undefined most of the time, for example in googleapiclient/http.py.
Then, given the representation method:
```
def __repr__(self):
return '<BatchError %s "%s">' % (self.resp.status, self.reason)
```
Which is also the string method:
```
__str__ = __repr__
```
This results in unprintable exceptions where `resp` is undefined, which is not very helpful when attempting to understand the error (e.g. #164).
--- END ISSUE ---
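For context, the failure is easy to demonstrate against the pre-fix `errors.py` listed below. This is a minimal sketch, assuming only that `BatchError` is constructed with a reason and no `resp`, exactly as `googleapiclient/http.py` does:
```
from googleapiclient.errors import BatchError

err = BatchError("unexpected response from batch endpoint")  # resp stays None

try:
    print(err)  # __str__ is __repr__, which dereferences self.resp.status
except AttributeError as exc:
    # With resp=None, callers see this instead of the real batch error:
    # 'NoneType' object has no attribute 'status'
    print("BatchError is unprintable:", exc)
```
The accepted diff below guards on `getattr(self.resp, 'status', None)` so that a reason-only `BatchError` still prints.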
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `googleapiclient/errors.py`
Content:
```
1 # Copyright 2014 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Errors for the library.
16
17 All exceptions defined by the library
18 should be defined in this file.
19 """
20 from __future__ import absolute_import
21
22 __author__ = '[email protected] (Joe Gregorio)'
23
24 import json
25
26 # Oauth2client < 3 has the positional helper in 'util', >= 3 has it
27 # in '_helpers'.
28 try:
29 from oauth2client import util
30 except ImportError:
31 from oauth2client import _helpers as util
32
33
34 class Error(Exception):
35 """Base error for this module."""
36 pass
37
38
39 class HttpError(Error):
40 """HTTP data was invalid or unexpected."""
41
42 @util.positional(3)
43 def __init__(self, resp, content, uri=None):
44 self.resp = resp
45 if not isinstance(content, bytes):
46 raise TypeError("HTTP content should be bytes")
47 self.content = content
48 self.uri = uri
49
50 def _get_reason(self):
51 """Calculate the reason for the error from the response content."""
52 reason = self.resp.reason
53 try:
54 data = json.loads(self.content.decode('utf-8'))
55 if isinstance(data, dict):
56 reason = data['error']['message']
57 elif isinstance(data, list) and len(data) > 0:
58 first_error = data[0]
59 reason = first_error['error']['message']
60 except (ValueError, KeyError, TypeError):
61 pass
62 if reason is None:
63 reason = ''
64 return reason
65
66 def __repr__(self):
67 if self.uri:
68 return '<HttpError %s when requesting %s returned "%s">' % (
69 self.resp.status, self.uri, self._get_reason().strip())
70 else:
71 return '<HttpError %s "%s">' % (self.resp.status, self._get_reason())
72
73 __str__ = __repr__
74
75
76 class InvalidJsonError(Error):
77 """The JSON returned could not be parsed."""
78 pass
79
80
81 class UnknownFileType(Error):
82 """File type unknown or unexpected."""
83 pass
84
85
86 class UnknownLinkType(Error):
87 """Link type unknown or unexpected."""
88 pass
89
90
91 class UnknownApiNameOrVersion(Error):
92 """No API with that name and version exists."""
93 pass
94
95
96 class UnacceptableMimeTypeError(Error):
97 """That is an unacceptable mimetype for this operation."""
98 pass
99
100
101 class MediaUploadSizeError(Error):
102 """Media is larger than the method can accept."""
103 pass
104
105
106 class ResumableUploadError(HttpError):
107 """Error occured during resumable upload."""
108 pass
109
110
111 class InvalidChunkSizeError(Error):
112 """The given chunksize is not valid."""
113 pass
114
115 class InvalidNotificationError(Error):
116 """The channel Notification is invalid."""
117 pass
118
119 class BatchError(HttpError):
120 """Error occured during batch operations."""
121
122 @util.positional(2)
123 def __init__(self, reason, resp=None, content=None):
124 self.resp = resp
125 self.content = content
126 self.reason = reason
127
128 def __repr__(self):
129 return '<BatchError %s "%s">' % (self.resp.status, self.reason)
130
131 __str__ = __repr__
132
133
134 class UnexpectedMethodError(Error):
135 """Exception raised by RequestMockBuilder on unexpected calls."""
136
137 @util.positional(1)
138 def __init__(self, methodId=None):
139 """Constructor for an UnexpectedMethodError."""
140 super(UnexpectedMethodError, self).__init__(
141 'Received unexpected call %s' % methodId)
142
143
144 class UnexpectedBodyError(Error):
145 """Exception raised by RequestMockBuilder on unexpected bodies."""
146
147 def __init__(self, expected, provided):
148 """Constructor for an UnexpectedMethodError."""
149 super(UnexpectedBodyError, self).__init__(
150 'Expected: [%s] - Provided: [%s]' % (expected, provided))
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/googleapiclient/errors.py b/googleapiclient/errors.py
--- a/googleapiclient/errors.py
+++ b/googleapiclient/errors.py
@@ -126,6 +126,9 @@
self.reason = reason
def __repr__(self):
+ if getattr(self.resp, 'status', None) is None:
+ return '<BatchError "%s">' % (self.reason)
+ else:
return '<BatchError %s "%s">' % (self.resp.status, self.reason)
__str__ = __repr__
| {"golden_diff": "diff --git a/googleapiclient/errors.py b/googleapiclient/errors.py\n--- a/googleapiclient/errors.py\n+++ b/googleapiclient/errors.py\n@@ -126,6 +126,9 @@\n self.reason = reason\n \n def __repr__(self):\n+ if getattr(self.resp, 'status', None) is None:\n+ return '<BatchError \"%s\">' % (self.reason)\n+ else:\n return '<BatchError %s \"%s\">' % (self.resp.status, self.reason)\n \n __str__ = __repr__\n", "issue": "BatchError is unprintable using default constructor (one string)\nThis one should be pretty simple, I hope.\n\nHere's the constructor signature: `def __init__(self, reason, resp=None, content=None):`, which doesn't require `resp` to be defined, and I can see it is not defined most of the time, for example, in googleapiclient/http.py.\n\nThen, given the representation method:\n\n```\ndef __repr__(self):\n return '<BatchError %s \"%s\">' % (self.resp.status, self.reason)\n```\n\nWhich is also the string method:\n\n```\n__str__ = __repr__\n```\n\nThis results in unprintable exceptions where `resp` is undefined, which is not very helpful when attempting to understand the error (e.g. #164).\n\n", "before_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Errors for the library.\n\nAll exceptions defined by the library\nshould be defined in this file.\n\"\"\"\nfrom __future__ import absolute_import\n\n__author__ = '[email protected] (Joe Gregorio)'\n\nimport json\n\n# Oauth2client < 3 has the positional helper in 'util', >= 3 has it\n# in '_helpers'.\ntry:\n from oauth2client import util\nexcept ImportError:\n from oauth2client import _helpers as util\n\n\nclass Error(Exception):\n \"\"\"Base error for this module.\"\"\"\n pass\n\n\nclass HttpError(Error):\n \"\"\"HTTP data was invalid or unexpected.\"\"\"\n\n @util.positional(3)\n def __init__(self, resp, content, uri=None):\n self.resp = resp\n if not isinstance(content, bytes):\n raise TypeError(\"HTTP content should be bytes\")\n self.content = content\n self.uri = uri\n\n def _get_reason(self):\n \"\"\"Calculate the reason for the error from the response content.\"\"\"\n reason = self.resp.reason\n try:\n data = json.loads(self.content.decode('utf-8'))\n if isinstance(data, dict):\n reason = data['error']['message']\n elif isinstance(data, list) and len(data) > 0:\n first_error = data[0]\n reason = first_error['error']['message']\n except (ValueError, KeyError, TypeError):\n pass\n if reason is None:\n reason = ''\n return reason\n\n def __repr__(self):\n if self.uri:\n return '<HttpError %s when requesting %s returned \"%s\">' % (\n self.resp.status, self.uri, self._get_reason().strip())\n else:\n return '<HttpError %s \"%s\">' % (self.resp.status, self._get_reason())\n\n __str__ = __repr__\n\n\nclass InvalidJsonError(Error):\n \"\"\"The JSON returned could not be parsed.\"\"\"\n pass\n\n\nclass UnknownFileType(Error):\n \"\"\"File type unknown or unexpected.\"\"\"\n pass\n\n\nclass UnknownLinkType(Error):\n \"\"\"Link type unknown or 
unexpected.\"\"\"\n pass\n\n\nclass UnknownApiNameOrVersion(Error):\n \"\"\"No API with that name and version exists.\"\"\"\n pass\n\n\nclass UnacceptableMimeTypeError(Error):\n \"\"\"That is an unacceptable mimetype for this operation.\"\"\"\n pass\n\n\nclass MediaUploadSizeError(Error):\n \"\"\"Media is larger than the method can accept.\"\"\"\n pass\n\n\nclass ResumableUploadError(HttpError):\n \"\"\"Error occured during resumable upload.\"\"\"\n pass\n\n\nclass InvalidChunkSizeError(Error):\n \"\"\"The given chunksize is not valid.\"\"\"\n pass\n\nclass InvalidNotificationError(Error):\n \"\"\"The channel Notification is invalid.\"\"\"\n pass\n\nclass BatchError(HttpError):\n \"\"\"Error occured during batch operations.\"\"\"\n\n @util.positional(2)\n def __init__(self, reason, resp=None, content=None):\n self.resp = resp\n self.content = content\n self.reason = reason\n\n def __repr__(self):\n return '<BatchError %s \"%s\">' % (self.resp.status, self.reason)\n\n __str__ = __repr__\n\n\nclass UnexpectedMethodError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected calls.\"\"\"\n\n @util.positional(1)\n def __init__(self, methodId=None):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedMethodError, self).__init__(\n 'Received unexpected call %s' % methodId)\n\n\nclass UnexpectedBodyError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected bodies.\"\"\"\n\n def __init__(self, expected, provided):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedBodyError, self).__init__(\n 'Expected: [%s] - Provided: [%s]' % (expected, provided))\n", "path": "googleapiclient/errors.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Errors for the library.\n\nAll exceptions defined by the library\nshould be defined in this file.\n\"\"\"\nfrom __future__ import absolute_import\n\n__author__ = '[email protected] (Joe Gregorio)'\n\nimport json\n\n# Oauth2client < 3 has the positional helper in 'util', >= 3 has it\n# in '_helpers'.\ntry:\n from oauth2client import util\nexcept ImportError:\n from oauth2client import _helpers as util\n\n\nclass Error(Exception):\n \"\"\"Base error for this module.\"\"\"\n pass\n\n\nclass HttpError(Error):\n \"\"\"HTTP data was invalid or unexpected.\"\"\"\n\n @util.positional(3)\n def __init__(self, resp, content, uri=None):\n self.resp = resp\n if not isinstance(content, bytes):\n raise TypeError(\"HTTP content should be bytes\")\n self.content = content\n self.uri = uri\n\n def _get_reason(self):\n \"\"\"Calculate the reason for the error from the response content.\"\"\"\n reason = self.resp.reason\n try:\n data = json.loads(self.content.decode('utf-8'))\n if isinstance(data, dict):\n reason = data['error']['message']\n elif isinstance(data, list) and len(data) > 0:\n first_error = data[0]\n reason = first_error['error']['message']\n except (ValueError, KeyError, TypeError):\n pass\n if reason is 
None:\n reason = ''\n return reason\n\n def __repr__(self):\n if self.uri:\n return '<HttpError %s when requesting %s returned \"%s\">' % (\n self.resp.status, self.uri, self._get_reason().strip())\n else:\n return '<HttpError %s \"%s\">' % (self.resp.status, self._get_reason())\n\n __str__ = __repr__\n\n\nclass InvalidJsonError(Error):\n \"\"\"The JSON returned could not be parsed.\"\"\"\n pass\n\n\nclass UnknownFileType(Error):\n \"\"\"File type unknown or unexpected.\"\"\"\n pass\n\n\nclass UnknownLinkType(Error):\n \"\"\"Link type unknown or unexpected.\"\"\"\n pass\n\n\nclass UnknownApiNameOrVersion(Error):\n \"\"\"No API with that name and version exists.\"\"\"\n pass\n\n\nclass UnacceptableMimeTypeError(Error):\n \"\"\"That is an unacceptable mimetype for this operation.\"\"\"\n pass\n\n\nclass MediaUploadSizeError(Error):\n \"\"\"Media is larger than the method can accept.\"\"\"\n pass\n\n\nclass ResumableUploadError(HttpError):\n \"\"\"Error occured during resumable upload.\"\"\"\n pass\n\n\nclass InvalidChunkSizeError(Error):\n \"\"\"The given chunksize is not valid.\"\"\"\n pass\n\nclass InvalidNotificationError(Error):\n \"\"\"The channel Notification is invalid.\"\"\"\n pass\n\nclass BatchError(HttpError):\n \"\"\"Error occured during batch operations.\"\"\"\n\n @util.positional(2)\n def __init__(self, reason, resp=None, content=None):\n self.resp = resp\n self.content = content\n self.reason = reason\n\n def __repr__(self):\n if getattr(self.resp, 'status', None) is None:\n return '<BatchError \"%s\">' % (self.reason)\n else:\n return '<BatchError %s \"%s\">' % (self.resp.status, self.reason)\n\n __str__ = __repr__\n\n\nclass UnexpectedMethodError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected calls.\"\"\"\n\n @util.positional(1)\n def __init__(self, methodId=None):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedMethodError, self).__init__(\n 'Received unexpected call %s' % methodId)\n\n\nclass UnexpectedBodyError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected bodies.\"\"\"\n\n def __init__(self, expected, provided):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedBodyError, self).__init__(\n 'Expected: [%s] - Provided: [%s]' % (expected, provided))\n", "path": "googleapiclient/errors.py"}]} | 1,729 | 124 |
gh_patches_debug_12722 | rasdani/github-patches | git_diff | oppia__oppia-15025 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Topic prerequisite skill checking is broken and prevents topic being published
**Describe the bug**
In a topic, chapters with prerequisite skills outside the topic make the topic unpublishable, because "this skill was not taught in any chapter before it". This behaviour is wrong because the validation error should only be triggered for skills that have been assigned to the topic currently being edited.
**To Reproduce**
Steps to reproduce the behavior:
1. Create topic P (the prerequisite topic) and topic M (the main topic).
2. Create skill S1 and assign it to P. Create skill S2 and assign it to M.
3. Create a chapter in topic M and assign it a prerequisite skill of S1, and an acquired skill of S2.
4. Save all changes.
5. Refresh the page and try to publish the topic. This is not possible because of a validation error: "The skill with id XYZ was specified as a prerequisite for Chapter Name but was not taught in any chapter before it.". See screenshot below:

**Observed behavior**
The topic cannot be published, and further changes to the story/chapter cannot be saved, because the prerequisite skills include ones from outside the topic.
**Expected behavior**
The topic should be publishable. The validation message should **only** occur for skills that have been assigned to the main topic M, and not to other topics.
--- END ISSUE ---
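For context, a minimal sketch of the check the expected behaviour implies is given below. The helper name is hypothetical; `prerequisite_skill_ids` appears in the editor code listed below, while `acquired_skill_ids` is assumed from the issue's "acquired skill" wording, so treat this as an illustration of the filtering idea rather than Oppia's actual validation code:
```
def untaught_prerequisites(story_nodes, topic_skill_ids):
    """Yield (node_title, skill_id) pairs that should trigger the
    'not taught in any chapter before it' validation error."""
    acquired_so_far = set()
    for node in story_nodes:  # assumed to be in story order
        for skill_id in node.prerequisite_skill_ids:
            # Skills assigned to a different topic (S1 from topic P in the
            # repro steps) are skipped rather than reported as errors.
            if skill_id in topic_skill_ids and skill_id not in acquired_so_far:
                yield node.title, skill_id
        acquired_so_far.update(node.acquired_skill_ids)
```
Only prerequisite skills that belong to the topic being edited can trigger the error, which is the behaviour the issue asks for.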
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/controllers/story_editor.py`
Content:
```
1 # Copyright 2018 The Oppia Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS-IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Controllers for the story editor."""
16
17 from __future__ import annotations
18
19 from core import feconf
20 from core import utils
21 from core.constants import constants
22 from core.controllers import acl_decorators
23 from core.controllers import base
24 from core.domain import classroom_services
25 from core.domain import skill_services
26 from core.domain import story_domain
27 from core.domain import story_fetchers
28 from core.domain import story_services
29 from core.domain import topic_fetchers
30 from core.domain import topic_services
31
32
33 class StoryEditorPage(base.BaseHandler):
34 """The editor page for a single story."""
35
36 URL_PATH_ARGS_SCHEMAS = {
37 'story_id': {
38 'schema': {
39 'type': 'basestring'
40 },
41 'validators': [{
42 'id': 'has_length',
43 'value': constants.STORY_ID_LENGTH
44 }]
45 }
46 }
47 HANDLER_ARGS_SCHEMAS = {
48 'GET': {}
49 }
50
51 @acl_decorators.can_edit_story
52 def get(self, _):
53 """Handles GET requests."""
54
55 self.render_template('story-editor-page.mainpage.html')
56
57
58 class EditableStoryDataHandler(base.BaseHandler):
59 """A data handler for stories which support writing."""
60
61 GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON
62
63 def _require_valid_version(self, version_from_payload, story_version):
64 """Check that the payload version matches the given story
65 version.
66 """
67 if version_from_payload is None:
68 raise base.BaseHandler.InvalidInputException(
69 'Invalid POST request: a version must be specified.')
70
71 if version_from_payload != story_version:
72 raise base.BaseHandler.InvalidInputException(
73 'Trying to update version %s of story from version %s, '
74 'which is too old. Please reload the page and try again.'
75 % (story_version, version_from_payload))
76
77 @acl_decorators.can_edit_story
78 def get(self, story_id):
79 """Populates the data on the individual story page."""
80 story = story_fetchers.get_story_by_id(story_id, strict=False)
81 topic_id = story.corresponding_topic_id
82 topic = topic_fetchers.get_topic_by_id(topic_id, strict=False)
83 skill_ids = topic.get_all_skill_ids()
84 for node in story.story_contents.nodes:
85 for skill_id in node.prerequisite_skill_ids:
86 if skill_id not in skill_ids:
87 skill_ids.append(skill_id)
88
89 skill_summaries = skill_services.get_multi_skill_summaries(skill_ids)
90 skill_summary_dicts = [summary.to_dict() for summary in skill_summaries]
91 classroom_url_fragment = (
92 classroom_services.get_classroom_url_fragment_for_topic_id(
93 topic.id))
94
95 for story_reference in topic.canonical_story_references:
96 if story_reference.story_id == story_id:
97 story_is_published = story_reference.story_is_published
98
99 self.values.update({
100 'story': story.to_dict(),
101 'topic_name': topic.name,
102 'story_is_published': story_is_published,
103 'skill_summaries': skill_summary_dicts,
104 'topic_url_fragment': topic.url_fragment,
105 'classroom_url_fragment': classroom_url_fragment
106 })
107
108 self.render_json(self.values)
109
110 @acl_decorators.can_edit_story
111 def put(self, story_id):
112 """Updates properties of the given story."""
113 story = story_fetchers.get_story_by_id(story_id, strict=False)
114
115 version = self.payload.get('version')
116 self._require_valid_version(version, story.version)
117
118 commit_message = self.payload.get('commit_message')
119
120 if commit_message is None:
121 raise self.InvalidInputException(
122 'Expected a commit message but received none.')
123
124 if len(commit_message) > constants.MAX_COMMIT_MESSAGE_LENGTH:
125 raise self.InvalidInputException(
126 'Commit messages must be at most %s characters long.'
127 % constants.MAX_COMMIT_MESSAGE_LENGTH)
128
129 change_dicts = self.payload.get('change_dicts')
130 change_list = [
131 story_domain.StoryChange(change_dict)
132 for change_dict in change_dicts
133 ]
134 try:
135 # Update the Story and its corresponding TopicSummary.
136 topic_services.update_story_and_topic_summary(
137 self.user_id, story_id, change_list, commit_message,
138 story.corresponding_topic_id)
139 except utils.ValidationError as e:
140 raise self.InvalidInputException(e)
141
142 story_dict = story_fetchers.get_story_by_id(story_id).to_dict()
143
144 self.values.update({
145 'story': story_dict
146 })
147
148 self.render_json(self.values)
149
150 @acl_decorators.can_delete_story
151 def delete(self, story_id):
152 """Handles Delete requests."""
153 story_services.delete_story(self.user_id, story_id)
154 self.render_json(self.values)
155
156
157 class StoryPublishHandler(base.BaseHandler):
158 """A data handler for publishing and unpublishing stories."""
159
160 GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON
161 URL_PATH_ARGS_SCHEMAS = {
162 'story_id': {
163 'schema': {
164 'type': 'basestring'
165 },
166 'validators': [{
167 'id': 'has_length',
168 'value': constants.STORY_ID_LENGTH
169 }]
170 }
171 }
172 HANDLER_ARGS_SCHEMAS = {
173 'PUT': {
174 'new_story_status_is_public': {
175 'schema': {
176 'type': 'bool'
177 },
178 }
179 }
180 }
181
182 @acl_decorators.can_edit_story
183 def put(self, story_id):
184 """Published/unpublished given story."""
185 story = story_fetchers.get_story_by_id(story_id, strict=False)
186 topic_id = story.corresponding_topic_id
187
188 new_story_status_is_public = self.normalized_payload.get(
189 'new_story_status_is_public')
190
191 if new_story_status_is_public:
192 topic_services.publish_story(topic_id, story_id, self.user_id)
193 else:
194 topic_services.unpublish_story(topic_id, story_id, self.user_id)
195
196 self.render_json(self.values)
197
198
199 class ValidateExplorationsHandler(base.BaseHandler):
200 """A data handler for validating the explorations in a story."""
201
202 GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON
203
204 @acl_decorators.can_edit_story
205 def get(self, _):
206 """Handler that receives a list of exploration IDs, checks whether the
207 corresponding explorations are supported on mobile and returns the
208 validation error messages (if any).
209 """
210 comma_separated_exp_ids = self.request.get('comma_separated_exp_ids')
211 if not comma_separated_exp_ids:
212 raise self.InvalidInputException(
213 'Expected comma_separated_exp_ids parameter to be present.')
214 exp_ids = comma_separated_exp_ids.split(',')
215 validation_error_messages = (
216 story_services.validate_explorations_for_story(exp_ids, False))
217 self.values.update({
218 'validation_error_messages': validation_error_messages
219 })
220 self.render_json(self.values)
221
222
223 class StoryUrlFragmentHandler(base.BaseHandler):
224 """A data handler for checking if a story with given url fragment exists.
225 """
226
227 GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON
228 URL_PATH_ARGS_SCHEMAS = {
229 'story_url_fragment': constants.SCHEMA_FOR_STORY_URL_FRAGMENTS
230 }
231 HANDLER_ARGS_SCHEMAS = {
232 'GET': {}
233 }
234
235 @acl_decorators.open_access
236 def get(self, story_url_fragment):
237 """Handler that receives a story url fragment and checks whether
238 a story with the same url fragment exists or not.
239 """
240 self.values.update({
241 'story_url_fragment_exists': (
242 story_services.does_story_exist_with_url_fragment(
243 story_url_fragment))
244 })
245 self.render_json(self.values)
246
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/controllers/story_editor.py b/core/controllers/story_editor.py
--- a/core/controllers/story_editor.py
+++ b/core/controllers/story_editor.py
@@ -81,10 +81,6 @@
topic_id = story.corresponding_topic_id
topic = topic_fetchers.get_topic_by_id(topic_id, strict=False)
skill_ids = topic.get_all_skill_ids()
- for node in story.story_contents.nodes:
- for skill_id in node.prerequisite_skill_ids:
- if skill_id not in skill_ids:
- skill_ids.append(skill_id)
skill_summaries = skill_services.get_multi_skill_summaries(skill_ids)
skill_summary_dicts = [summary.to_dict() for summary in skill_summaries]
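For context, a minimal standalone sketch (hypothetical skill IDs, not Oppia code) of what removing the loop changes: before the patch the handler also offered prerequisite skills belonging to other topics, which the topic validation later rejects.
```python
# Hypothetical data mirroring the issue: S2 is assigned to the main topic M,
# S1 to the prerequisite topic P.
topic_skill_ids = ["S2"]
node_prerequisite_skill_ids = ["S1", "S2"]

# Pre-patch behaviour: out-of-topic prerequisites were appended as well.
pre_patch = list(topic_skill_ids)
for skill_id in node_prerequisite_skill_ids:
    if skill_id not in pre_patch:
        pre_patch.append(skill_id)
assert pre_patch == ["S2", "S1"]  # S1 leaks in and later trips the validation error

# Post-patch behaviour: only the skills assigned to the topic are returned.
post_patch = list(topic_skill_ids)
assert post_patch == ["S2"]
```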
| {"golden_diff": "diff --git a/core/controllers/story_editor.py b/core/controllers/story_editor.py\n--- a/core/controllers/story_editor.py\n+++ b/core/controllers/story_editor.py\n@@ -81,10 +81,6 @@\n topic_id = story.corresponding_topic_id\n topic = topic_fetchers.get_topic_by_id(topic_id, strict=False)\n skill_ids = topic.get_all_skill_ids()\n- for node in story.story_contents.nodes:\n- for skill_id in node.prerequisite_skill_ids:\n- if skill_id not in skill_ids:\n- skill_ids.append(skill_id)\n \n skill_summaries = skill_services.get_multi_skill_summaries(skill_ids)\n skill_summary_dicts = [summary.to_dict() for summary in skill_summaries]\n", "issue": "Topic prerequisite skill checking is broken and prevents topic being published\n**Describe the bug**\r\n\r\nIn a topic, chapters with prerequisite skills outside the topic make the topic unpublishable, because \"this skill was not taught in any chapter before it\". This behaviour is wrong because the validation error should only be triggered for skills that have been assigned to the topic currently being edited.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n 1. Create topic P (the prerequisite topic) and topic M (the main topic).\r\n 2. Create skill S1 and assign it to P. Create skill S2 and assign it to M.\r\n 3. Create a chapter in topic M and assign it a prerequisite skill of S1, and an acquired skill of S2.\r\n 4. Save all changes.\r\n 5. Refresh the page and try to publish the topic. This is not possible because of a validation error: \"The skill with id XYZ was specified as a prerequisite for Chapter Name but was not taught in any chapter before it.\". See screenshot below:\r\n \r\n\r\n\r\n**Observed behavior**\r\nThe topic cannot be published, and further changes to the story/chapter cannot be saved, because the prerequisite skills include ones from outside the topic.\r\n\r\n**Expected behavior**\r\nThe topic should be publishable. The validation message should **only** occur for skills that have been assigned to the main topic M, and not to other topics.\r\n\n", "before_files": [{"content": "# Copyright 2018 The Oppia Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Controllers for the story editor.\"\"\"\n\nfrom __future__ import annotations\n\nfrom core import feconf\nfrom core import utils\nfrom core.constants import constants\nfrom core.controllers import acl_decorators\nfrom core.controllers import base\nfrom core.domain import classroom_services\nfrom core.domain import skill_services\nfrom core.domain import story_domain\nfrom core.domain import story_fetchers\nfrom core.domain import story_services\nfrom core.domain import topic_fetchers\nfrom core.domain import topic_services\n\n\nclass StoryEditorPage(base.BaseHandler):\n \"\"\"The editor page for a single story.\"\"\"\n\n URL_PATH_ARGS_SCHEMAS = {\n 'story_id': {\n 'schema': {\n 'type': 'basestring'\n },\n 'validators': [{\n 'id': 'has_length',\n 'value': constants.STORY_ID_LENGTH\n }]\n }\n }\n HANDLER_ARGS_SCHEMAS = {\n 'GET': {}\n }\n\n @acl_decorators.can_edit_story\n def get(self, _):\n \"\"\"Handles GET requests.\"\"\"\n\n self.render_template('story-editor-page.mainpage.html')\n\n\nclass EditableStoryDataHandler(base.BaseHandler):\n \"\"\"A data handler for stories which support writing.\"\"\"\n\n GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON\n\n def _require_valid_version(self, version_from_payload, story_version):\n \"\"\"Check that the payload version matches the given story\n version.\n \"\"\"\n if version_from_payload is None:\n raise base.BaseHandler.InvalidInputException(\n 'Invalid POST request: a version must be specified.')\n\n if version_from_payload != story_version:\n raise base.BaseHandler.InvalidInputException(\n 'Trying to update version %s of story from version %s, '\n 'which is too old. 
Please reload the page and try again.'\n % (story_version, version_from_payload))\n\n @acl_decorators.can_edit_story\n def get(self, story_id):\n \"\"\"Populates the data on the individual story page.\"\"\"\n story = story_fetchers.get_story_by_id(story_id, strict=False)\n topic_id = story.corresponding_topic_id\n topic = topic_fetchers.get_topic_by_id(topic_id, strict=False)\n skill_ids = topic.get_all_skill_ids()\n for node in story.story_contents.nodes:\n for skill_id in node.prerequisite_skill_ids:\n if skill_id not in skill_ids:\n skill_ids.append(skill_id)\n\n skill_summaries = skill_services.get_multi_skill_summaries(skill_ids)\n skill_summary_dicts = [summary.to_dict() for summary in skill_summaries]\n classroom_url_fragment = (\n classroom_services.get_classroom_url_fragment_for_topic_id(\n topic.id))\n\n for story_reference in topic.canonical_story_references:\n if story_reference.story_id == story_id:\n story_is_published = story_reference.story_is_published\n\n self.values.update({\n 'story': story.to_dict(),\n 'topic_name': topic.name,\n 'story_is_published': story_is_published,\n 'skill_summaries': skill_summary_dicts,\n 'topic_url_fragment': topic.url_fragment,\n 'classroom_url_fragment': classroom_url_fragment\n })\n\n self.render_json(self.values)\n\n @acl_decorators.can_edit_story\n def put(self, story_id):\n \"\"\"Updates properties of the given story.\"\"\"\n story = story_fetchers.get_story_by_id(story_id, strict=False)\n\n version = self.payload.get('version')\n self._require_valid_version(version, story.version)\n\n commit_message = self.payload.get('commit_message')\n\n if commit_message is None:\n raise self.InvalidInputException(\n 'Expected a commit message but received none.')\n\n if len(commit_message) > constants.MAX_COMMIT_MESSAGE_LENGTH:\n raise self.InvalidInputException(\n 'Commit messages must be at most %s characters long.'\n % constants.MAX_COMMIT_MESSAGE_LENGTH)\n\n change_dicts = self.payload.get('change_dicts')\n change_list = [\n story_domain.StoryChange(change_dict)\n for change_dict in change_dicts\n ]\n try:\n # Update the Story and its corresponding TopicSummary.\n topic_services.update_story_and_topic_summary(\n self.user_id, story_id, change_list, commit_message,\n story.corresponding_topic_id)\n except utils.ValidationError as e:\n raise self.InvalidInputException(e)\n\n story_dict = story_fetchers.get_story_by_id(story_id).to_dict()\n\n self.values.update({\n 'story': story_dict\n })\n\n self.render_json(self.values)\n\n @acl_decorators.can_delete_story\n def delete(self, story_id):\n \"\"\"Handles Delete requests.\"\"\"\n story_services.delete_story(self.user_id, story_id)\n self.render_json(self.values)\n\n\nclass StoryPublishHandler(base.BaseHandler):\n \"\"\"A data handler for publishing and unpublishing stories.\"\"\"\n\n GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON\n URL_PATH_ARGS_SCHEMAS = {\n 'story_id': {\n 'schema': {\n 'type': 'basestring'\n },\n 'validators': [{\n 'id': 'has_length',\n 'value': constants.STORY_ID_LENGTH\n }]\n }\n }\n HANDLER_ARGS_SCHEMAS = {\n 'PUT': {\n 'new_story_status_is_public': {\n 'schema': {\n 'type': 'bool'\n },\n }\n }\n }\n\n @acl_decorators.can_edit_story\n def put(self, story_id):\n \"\"\"Published/unpublished given story.\"\"\"\n story = story_fetchers.get_story_by_id(story_id, strict=False)\n topic_id = story.corresponding_topic_id\n\n new_story_status_is_public = self.normalized_payload.get(\n 'new_story_status_is_public')\n\n if new_story_status_is_public:\n 
topic_services.publish_story(topic_id, story_id, self.user_id)\n else:\n topic_services.unpublish_story(topic_id, story_id, self.user_id)\n\n self.render_json(self.values)\n\n\nclass ValidateExplorationsHandler(base.BaseHandler):\n \"\"\"A data handler for validating the explorations in a story.\"\"\"\n\n GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON\n\n @acl_decorators.can_edit_story\n def get(self, _):\n \"\"\"Handler that receives a list of exploration IDs, checks whether the\n corresponding explorations are supported on mobile and returns the\n validation error messages (if any).\n \"\"\"\n comma_separated_exp_ids = self.request.get('comma_separated_exp_ids')\n if not comma_separated_exp_ids:\n raise self.InvalidInputException(\n 'Expected comma_separated_exp_ids parameter to be present.')\n exp_ids = comma_separated_exp_ids.split(',')\n validation_error_messages = (\n story_services.validate_explorations_for_story(exp_ids, False))\n self.values.update({\n 'validation_error_messages': validation_error_messages\n })\n self.render_json(self.values)\n\n\nclass StoryUrlFragmentHandler(base.BaseHandler):\n \"\"\"A data handler for checking if a story with given url fragment exists.\n \"\"\"\n\n GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON\n URL_PATH_ARGS_SCHEMAS = {\n 'story_url_fragment': constants.SCHEMA_FOR_STORY_URL_FRAGMENTS\n }\n HANDLER_ARGS_SCHEMAS = {\n 'GET': {}\n }\n\n @acl_decorators.open_access\n def get(self, story_url_fragment):\n \"\"\"Handler that receives a story url fragment and checks whether\n a story with the same url fragment exists or not.\n \"\"\"\n self.values.update({\n 'story_url_fragment_exists': (\n story_services.does_story_exist_with_url_fragment(\n story_url_fragment))\n })\n self.render_json(self.values)\n", "path": "core/controllers/story_editor.py"}], "after_files": [{"content": "# Copyright 2018 The Oppia Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Controllers for the story editor.\"\"\"\n\nfrom __future__ import annotations\n\nfrom core import feconf\nfrom core import utils\nfrom core.constants import constants\nfrom core.controllers import acl_decorators\nfrom core.controllers import base\nfrom core.domain import classroom_services\nfrom core.domain import skill_services\nfrom core.domain import story_domain\nfrom core.domain import story_fetchers\nfrom core.domain import story_services\nfrom core.domain import topic_fetchers\nfrom core.domain import topic_services\n\n\nclass StoryEditorPage(base.BaseHandler):\n \"\"\"The editor page for a single story.\"\"\"\n\n URL_PATH_ARGS_SCHEMAS = {\n 'story_id': {\n 'schema': {\n 'type': 'basestring'\n },\n 'validators': [{\n 'id': 'has_length',\n 'value': constants.STORY_ID_LENGTH\n }]\n }\n }\n HANDLER_ARGS_SCHEMAS = {\n 'GET': {}\n }\n\n @acl_decorators.can_edit_story\n def get(self, _):\n \"\"\"Handles GET requests.\"\"\"\n\n self.render_template('story-editor-page.mainpage.html')\n\n\nclass EditableStoryDataHandler(base.BaseHandler):\n \"\"\"A data handler for stories which support writing.\"\"\"\n\n GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON\n\n def _require_valid_version(self, version_from_payload, story_version):\n \"\"\"Check that the payload version matches the given story\n version.\n \"\"\"\n if version_from_payload is None:\n raise base.BaseHandler.InvalidInputException(\n 'Invalid POST request: a version must be specified.')\n\n if version_from_payload != story_version:\n raise base.BaseHandler.InvalidInputException(\n 'Trying to update version %s of story from version %s, '\n 'which is too old. 
Please reload the page and try again.'\n % (story_version, version_from_payload))\n\n @acl_decorators.can_edit_story\n def get(self, story_id):\n \"\"\"Populates the data on the individual story page.\"\"\"\n story = story_fetchers.get_story_by_id(story_id, strict=False)\n topic_id = story.corresponding_topic_id\n topic = topic_fetchers.get_topic_by_id(topic_id, strict=False)\n skill_ids = topic.get_all_skill_ids()\n\n skill_summaries = skill_services.get_multi_skill_summaries(skill_ids)\n skill_summary_dicts = [summary.to_dict() for summary in skill_summaries]\n classroom_url_fragment = (\n classroom_services.get_classroom_url_fragment_for_topic_id(\n topic.id))\n\n for story_reference in topic.canonical_story_references:\n if story_reference.story_id == story_id:\n story_is_published = story_reference.story_is_published\n\n self.values.update({\n 'story': story.to_dict(),\n 'topic_name': topic.name,\n 'story_is_published': story_is_published,\n 'skill_summaries': skill_summary_dicts,\n 'topic_url_fragment': topic.url_fragment,\n 'classroom_url_fragment': classroom_url_fragment\n })\n\n self.render_json(self.values)\n\n @acl_decorators.can_edit_story\n def put(self, story_id):\n \"\"\"Updates properties of the given story.\"\"\"\n story = story_fetchers.get_story_by_id(story_id, strict=False)\n\n version = self.payload.get('version')\n self._require_valid_version(version, story.version)\n\n commit_message = self.payload.get('commit_message')\n\n if commit_message is None:\n raise self.InvalidInputException(\n 'Expected a commit message but received none.')\n\n if len(commit_message) > constants.MAX_COMMIT_MESSAGE_LENGTH:\n raise self.InvalidInputException(\n 'Commit messages must be at most %s characters long.'\n % constants.MAX_COMMIT_MESSAGE_LENGTH)\n\n change_dicts = self.payload.get('change_dicts')\n change_list = [\n story_domain.StoryChange(change_dict)\n for change_dict in change_dicts\n ]\n try:\n # Update the Story and its corresponding TopicSummary.\n topic_services.update_story_and_topic_summary(\n self.user_id, story_id, change_list, commit_message,\n story.corresponding_topic_id)\n except utils.ValidationError as e:\n raise self.InvalidInputException(e)\n\n story_dict = story_fetchers.get_story_by_id(story_id).to_dict()\n\n self.values.update({\n 'story': story_dict\n })\n\n self.render_json(self.values)\n\n @acl_decorators.can_delete_story\n def delete(self, story_id):\n \"\"\"Handles Delete requests.\"\"\"\n story_services.delete_story(self.user_id, story_id)\n self.render_json(self.values)\n\n\nclass StoryPublishHandler(base.BaseHandler):\n \"\"\"A data handler for publishing and unpublishing stories.\"\"\"\n\n GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON\n URL_PATH_ARGS_SCHEMAS = {\n 'story_id': {\n 'schema': {\n 'type': 'basestring'\n },\n 'validators': [{\n 'id': 'has_length',\n 'value': constants.STORY_ID_LENGTH\n }]\n }\n }\n HANDLER_ARGS_SCHEMAS = {\n 'PUT': {\n 'new_story_status_is_public': {\n 'schema': {\n 'type': 'bool'\n },\n }\n }\n }\n\n @acl_decorators.can_edit_story\n def put(self, story_id):\n \"\"\"Published/unpublished given story.\"\"\"\n story = story_fetchers.get_story_by_id(story_id, strict=False)\n topic_id = story.corresponding_topic_id\n\n new_story_status_is_public = self.normalized_payload.get(\n 'new_story_status_is_public')\n\n if new_story_status_is_public:\n topic_services.publish_story(topic_id, story_id, self.user_id)\n else:\n topic_services.unpublish_story(topic_id, story_id, self.user_id)\n\n 
self.render_json(self.values)\n\n\nclass ValidateExplorationsHandler(base.BaseHandler):\n \"\"\"A data handler for validating the explorations in a story.\"\"\"\n\n GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON\n\n @acl_decorators.can_edit_story\n def get(self, _):\n \"\"\"Handler that receives a list of exploration IDs, checks whether the\n corresponding explorations are supported on mobile and returns the\n validation error messages (if any).\n \"\"\"\n comma_separated_exp_ids = self.request.get('comma_separated_exp_ids')\n if not comma_separated_exp_ids:\n raise self.InvalidInputException(\n 'Expected comma_separated_exp_ids parameter to be present.')\n exp_ids = comma_separated_exp_ids.split(',')\n validation_error_messages = (\n story_services.validate_explorations_for_story(exp_ids, False))\n self.values.update({\n 'validation_error_messages': validation_error_messages\n })\n self.render_json(self.values)\n\n\nclass StoryUrlFragmentHandler(base.BaseHandler):\n \"\"\"A data handler for checking if a story with given url fragment exists.\n \"\"\"\n\n GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON\n URL_PATH_ARGS_SCHEMAS = {\n 'story_url_fragment': constants.SCHEMA_FOR_STORY_URL_FRAGMENTS\n }\n HANDLER_ARGS_SCHEMAS = {\n 'GET': {}\n }\n\n @acl_decorators.open_access\n def get(self, story_url_fragment):\n \"\"\"Handler that receives a story url fragment and checks whether\n a story with the same url fragment exists or not.\n \"\"\"\n self.values.update({\n 'story_url_fragment_exists': (\n story_services.does_story_exist_with_url_fragment(\n story_url_fragment))\n })\n self.render_json(self.values)\n", "path": "core/controllers/story_editor.py"}]} | 3,032 | 155 |
gh_patches_debug_14635 | rasdani/github-patches | git_diff | xorbitsai__inference-192 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FEAT: support vicuna-v1.3 33B
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xinference/model/llm/__init__.py`
Content:
```
1 # Copyright 2022-2023 XProbe Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 def install():
17 from .. import MODEL_FAMILIES, ModelFamily
18 from .chatglm import ChatglmCppChatModel
19 from .core import LlamaCppModel
20 from .orca import OrcaMiniGgml
21 from .pytorch.baichuan import BaichuanPytorch
22 from .pytorch.vicuna import VicunaCensoredPytorch
23 from .vicuna import VicunaCensoredGgml
24 from .wizardlm import WizardlmGgml
25
26 baichuan_url_generator = lambda model_size, quantization: (
27 f"https://huggingface.co/TheBloke/baichuan-llama-{model_size}B-GGML/resolve/main/"
28 f"baichuan-llama-{model_size}b.ggmlv3.{quantization}.bin"
29 )
30 MODEL_FAMILIES.append(
31 ModelFamily(
32 model_name="baichuan",
33 model_format="ggmlv3",
34 model_sizes_in_billions=[7],
35 quantizations=[
36 "q2_K",
37 "q3_K_L",
38 "q3_K_M",
39 "q3_K_S",
40 "q4_0",
41 "q4_1",
42 "q4_K_M",
43 "q4_K_S",
44 "q5_0",
45 "q5_1",
46 "q5_K_M",
47 "q5_K_S",
48 "q6_K",
49 "q8_0",
50 ],
51 url_generator=baichuan_url_generator,
52 cls=LlamaCppModel,
53 )
54 )
55
56 wizardlm_v1_0_url_generator = lambda model_size, quantization: (
57 f"https://huggingface.co/TheBloke/WizardLM-{model_size}B-V1.0-Uncensored-GGML/resolve/main/"
58 f"wizardlm-{model_size}b-v1.0-uncensored.ggmlv3.{quantization}.bin"
59 )
60 MODEL_FAMILIES.append(
61 ModelFamily(
62 model_name="wizardlm-v1.0",
63 model_sizes_in_billions=[7, 13, 33],
64 model_format="ggmlv3",
65 quantizations=[
66 "q2_K",
67 "q3_K_L",
68 "q3_K_M",
69 "q3_K_S",
70 "q4_0",
71 "q4_1",
72 "q4_K_M",
73 "q4_K_S",
74 "q5_0",
75 "q5_1",
76 "q5_K_M",
77 "q5_K_S",
78 "q6_K",
79 "q8_0",
80 ],
81 url_generator=wizardlm_v1_0_url_generator,
82 cls=WizardlmGgml,
83 ),
84 )
85
86 wizardlm_v1_1_url_generator = lambda model_size, quantization: (
87 f"https://huggingface.co/TheBloke/WizardLM-{model_size}B-V1.1-GGML/resolve/main/"
88 f"wizardlm-{model_size}b-v1.1.ggmlv3.{quantization}.bin"
89 )
90 MODEL_FAMILIES.append(
91 ModelFamily(
92 model_name="wizardlm-v1.1",
93 model_sizes_in_billions=[13],
94 model_format="ggmlv3",
95 quantizations=[
96 "q2_K",
97 "q3_K_L",
98 "q3_K_M",
99 "q3_K_S",
100 "q4_0",
101 "q4_1",
102 "q4_K_M",
103 "q4_K_S",
104 "q5_0",
105 "q5_1",
106 "q5_K_M",
107 "q5_K_S",
108 "q6_K",
109 "q8_0",
110 ],
111 url_generator=wizardlm_v1_1_url_generator,
112 cls=VicunaCensoredGgml, # according to https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGML
113 ),
114 )
115
116 vicuna_v1_3_url_generator = lambda model_size, quantization: (
117 "https://huggingface.co/TheBloke/vicuna-7B-v1.3-GGML/resolve/main/"
118 f"vicuna-7b-v1.3.ggmlv3.{quantization}.bin"
119 if model_size == 7
120 else (
121 "https://huggingface.co/TheBloke/vicuna-13b-v1.3.0-GGML/resolve/main/"
122 f"vicuna-13b-v1.3.0.ggmlv3.{quantization}.bin"
123 )
124 )
125 MODEL_FAMILIES.append(
126 ModelFamily(
127 model_name="vicuna-v1.3",
128 model_sizes_in_billions=[7, 13],
129 model_format="ggmlv3",
130 quantizations=[
131 "q2_K",
132 "q3_K_L",
133 "q3_K_M",
134 "q3_K_S",
135 "q4_0",
136 "q4_1",
137 "q4_K_M",
138 "q4_K_S",
139 "q5_0",
140 "q5_1",
141 "q5_K_M",
142 "q5_K_S",
143 "q6_K",
144 "q8_0",
145 ],
146 url_generator=vicuna_v1_3_url_generator,
147 cls=VicunaCensoredGgml,
148 ),
149 )
150
151 orca_url_generator = lambda model_size, quantization: (
152 f"https://huggingface.co/TheBloke/orca_mini_{model_size}B-GGML/resolve/main/orca-mini-"
153 f"{model_size}b.ggmlv3.{quantization}.bin"
154 )
155 MODEL_FAMILIES.append(
156 ModelFamily(
157 model_name="orca",
158 model_sizes_in_billions=[3, 7, 13],
159 model_format="ggmlv3",
160 quantizations=[
161 "q4_0",
162 "q4_1",
163 "q5_0",
164 "q5_1",
165 "q8_0",
166 ],
167 url_generator=orca_url_generator,
168 cls=OrcaMiniGgml,
169 )
170 )
171
172 chatglm_url_generator = lambda model_size, quantization: (
173 f"https://huggingface.co/Xorbits/chatglm-{model_size}B-GGML/resolve/main/"
174 f"chatglm-ggml-{quantization}.bin"
175 )
176 MODEL_FAMILIES.append(
177 ModelFamily(
178 model_name="chatglm",
179 model_sizes_in_billions=[6],
180 model_format="ggmlv3",
181 quantizations=[
182 "q4_0",
183 "q4_1",
184 "q5_0",
185 "q5_1",
186 "q8_0",
187 ],
188 url_generator=chatglm_url_generator,
189 cls=ChatglmCppChatModel,
190 )
191 )
192
193 chatglm2_url_generator = lambda model_size, quantization: (
194 f"https://huggingface.co/Xorbits/chatglm2-{model_size}B-GGML/resolve/main/"
195 f"chatglm2-ggml-{quantization}.bin"
196 )
197 MODEL_FAMILIES.append(
198 ModelFamily(
199 model_name="chatglm2",
200 model_sizes_in_billions=[6],
201 model_format="ggmlv3",
202 quantizations=[
203 "q4_0",
204 "q4_1",
205 "q5_0",
206 "q5_1",
207 "q8_0",
208 ],
209 url_generator=chatglm2_url_generator,
210 cls=ChatglmCppChatModel,
211 )
212 )
213
214 MODEL_FAMILIES.append(
215 ModelFamily(
216 model_name="baichuan-inc/Baichuan-7B",
217 model_sizes_in_billions=[7],
218 model_format="pytorch",
219 quantizations=None,
220 url_generator=None,
221 cls=BaichuanPytorch,
222 ),
223 )
224
225 MODEL_FAMILIES.append(
226 ModelFamily(
227 model_name="lmsys/vicuna-7b-v1.3",
228 model_sizes_in_billions=[7, 13],
229 model_format="pytorch",
230 quantizations=None,
231 url_generator=None,
232 cls=VicunaCensoredPytorch,
233 ),
234 )
235
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/xinference/model/llm/__init__.py b/xinference/model/llm/__init__.py
--- a/xinference/model/llm/__init__.py
+++ b/xinference/model/llm/__init__.py
@@ -88,12 +88,14 @@
else (
"https://huggingface.co/TheBloke/vicuna-13b-v1.3.0-GGML/resolve/main/"
f"vicuna-13b-v1.3.0.ggmlv3.{quantization}.bin"
+ if model_size == 13
+ else f"https://huggingface.co/TheBloke/vicuna-33B-GGML/resolve/main/vicuna-33b.ggmlv3.{quantization}.bin"
)
)
MODEL_FAMILIES.append(
ModelFamily(
model_name="vicuna-v1.3",
- model_sizes_in_billions=[7, 13],
+ model_sizes_in_billions=[7, 13, 33],
model_format="ggmlv3",
quantizations=[
"q2_K",
| {"golden_diff": "diff --git a/xinference/model/llm/__init__.py b/xinference/model/llm/__init__.py\n--- a/xinference/model/llm/__init__.py\n+++ b/xinference/model/llm/__init__.py\n@@ -88,12 +88,14 @@\n else (\n \"https://huggingface.co/TheBloke/vicuna-13b-v1.3.0-GGML/resolve/main/\"\n f\"vicuna-13b-v1.3.0.ggmlv3.{quantization}.bin\"\n+ if model_size == 13\n+ else f\"https://huggingface.co/TheBloke/vicuna-33B-GGML/resolve/main/vicuna-33b.ggmlv3.{quantization}.bin\"\n )\n )\n MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"vicuna-v1.3\",\n- model_sizes_in_billions=[7, 13],\n+ model_sizes_in_billions=[7, 13, 33],\n model_format=\"ggmlv3\",\n quantizations=[\n \"q2_K\",\n", "issue": "FEAT: support vicuna-v1.3 33B\n\n", "before_files": [{"content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\ndef install():\n from .. import MODEL_FAMILIES, ModelFamily\n from .chatglm import ChatglmCppChatModel\n from .core import LlamaCppModel\n from .orca import OrcaMiniGgml\n from .pytorch.baichuan import BaichuanPytorch\n from .pytorch.vicuna import VicunaCensoredPytorch\n from .vicuna import VicunaCensoredGgml\n from .wizardlm import WizardlmGgml\n\n baichuan_url_generator = lambda model_size, quantization: (\n f\"https://huggingface.co/TheBloke/baichuan-llama-{model_size}B-GGML/resolve/main/\"\n f\"baichuan-llama-{model_size}b.ggmlv3.{quantization}.bin\"\n )\n MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"baichuan\",\n model_format=\"ggmlv3\",\n model_sizes_in_billions=[7],\n quantizations=[\n \"q2_K\",\n \"q3_K_L\",\n \"q3_K_M\",\n \"q3_K_S\",\n \"q4_0\",\n \"q4_1\",\n \"q4_K_M\",\n \"q4_K_S\",\n \"q5_0\",\n \"q5_1\",\n \"q5_K_M\",\n \"q5_K_S\",\n \"q6_K\",\n \"q8_0\",\n ],\n url_generator=baichuan_url_generator,\n cls=LlamaCppModel,\n )\n )\n\n wizardlm_v1_0_url_generator = lambda model_size, quantization: (\n f\"https://huggingface.co/TheBloke/WizardLM-{model_size}B-V1.0-Uncensored-GGML/resolve/main/\"\n f\"wizardlm-{model_size}b-v1.0-uncensored.ggmlv3.{quantization}.bin\"\n )\n MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"wizardlm-v1.0\",\n model_sizes_in_billions=[7, 13, 33],\n model_format=\"ggmlv3\",\n quantizations=[\n \"q2_K\",\n \"q3_K_L\",\n \"q3_K_M\",\n \"q3_K_S\",\n \"q4_0\",\n \"q4_1\",\n \"q4_K_M\",\n \"q4_K_S\",\n \"q5_0\",\n \"q5_1\",\n \"q5_K_M\",\n \"q5_K_S\",\n \"q6_K\",\n \"q8_0\",\n ],\n url_generator=wizardlm_v1_0_url_generator,\n cls=WizardlmGgml,\n ),\n )\n\n wizardlm_v1_1_url_generator = lambda model_size, quantization: (\n f\"https://huggingface.co/TheBloke/WizardLM-{model_size}B-V1.1-GGML/resolve/main/\"\n f\"wizardlm-{model_size}b-v1.1.ggmlv3.{quantization}.bin\"\n )\n MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"wizardlm-v1.1\",\n model_sizes_in_billions=[13],\n model_format=\"ggmlv3\",\n quantizations=[\n \"q2_K\",\n \"q3_K_L\",\n \"q3_K_M\",\n \"q3_K_S\",\n \"q4_0\",\n \"q4_1\",\n \"q4_K_M\",\n \"q4_K_S\",\n \"q5_0\",\n \"q5_1\",\n \"q5_K_M\",\n \"q5_K_S\",\n \"q6_K\",\n \"q8_0\",\n ],\n 
url_generator=wizardlm_v1_1_url_generator,\n cls=VicunaCensoredGgml, # according to https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GGML\n ),\n )\n\n vicuna_v1_3_url_generator = lambda model_size, quantization: (\n \"https://huggingface.co/TheBloke/vicuna-7B-v1.3-GGML/resolve/main/\"\n f\"vicuna-7b-v1.3.ggmlv3.{quantization}.bin\"\n if model_size == 7\n else (\n \"https://huggingface.co/TheBloke/vicuna-13b-v1.3.0-GGML/resolve/main/\"\n f\"vicuna-13b-v1.3.0.ggmlv3.{quantization}.bin\"\n )\n )\n MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"vicuna-v1.3\",\n model_sizes_in_billions=[7, 13],\n model_format=\"ggmlv3\",\n quantizations=[\n \"q2_K\",\n \"q3_K_L\",\n \"q3_K_M\",\n \"q3_K_S\",\n \"q4_0\",\n \"q4_1\",\n \"q4_K_M\",\n \"q4_K_S\",\n \"q5_0\",\n \"q5_1\",\n \"q5_K_M\",\n \"q5_K_S\",\n \"q6_K\",\n \"q8_0\",\n ],\n url_generator=vicuna_v1_3_url_generator,\n cls=VicunaCensoredGgml,\n ),\n )\n\n orca_url_generator = lambda model_size, quantization: (\n f\"https://huggingface.co/TheBloke/orca_mini_{model_size}B-GGML/resolve/main/orca-mini-\"\n f\"{model_size}b.ggmlv3.{quantization}.bin\"\n )\n MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"orca\",\n model_sizes_in_billions=[3, 7, 13],\n model_format=\"ggmlv3\",\n quantizations=[\n \"q4_0\",\n \"q4_1\",\n \"q5_0\",\n \"q5_1\",\n \"q8_0\",\n ],\n url_generator=orca_url_generator,\n cls=OrcaMiniGgml,\n )\n )\n\n chatglm_url_generator = lambda model_size, quantization: (\n f\"https://huggingface.co/Xorbits/chatglm-{model_size}B-GGML/resolve/main/\"\n f\"chatglm-ggml-{quantization}.bin\"\n )\n MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"chatglm\",\n model_sizes_in_billions=[6],\n model_format=\"ggmlv3\",\n quantizations=[\n \"q4_0\",\n \"q4_1\",\n \"q5_0\",\n \"q5_1\",\n \"q8_0\",\n ],\n url_generator=chatglm_url_generator,\n cls=ChatglmCppChatModel,\n )\n )\n\n chatglm2_url_generator = lambda model_size, quantization: (\n f\"https://huggingface.co/Xorbits/chatglm2-{model_size}B-GGML/resolve/main/\"\n f\"chatglm2-ggml-{quantization}.bin\"\n )\n MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"chatglm2\",\n model_sizes_in_billions=[6],\n model_format=\"ggmlv3\",\n quantizations=[\n \"q4_0\",\n \"q4_1\",\n \"q5_0\",\n \"q5_1\",\n \"q8_0\",\n ],\n url_generator=chatglm2_url_generator,\n cls=ChatglmCppChatModel,\n )\n )\n\n MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"baichuan-inc/Baichuan-7B\",\n model_sizes_in_billions=[7],\n model_format=\"pytorch\",\n quantizations=None,\n url_generator=None,\n cls=BaichuanPytorch,\n ),\n )\n\n MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"lmsys/vicuna-7b-v1.3\",\n model_sizes_in_billions=[7, 13],\n model_format=\"pytorch\",\n quantizations=None,\n url_generator=None,\n cls=VicunaCensoredPytorch,\n ),\n )\n", "path": "xinference/model/llm/__init__.py"}], "after_files": [{"content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\ndef install():\n from .. 
import MODEL_FAMILIES, ModelFamily\n from .chatglm import ChatglmCppChatModel\n from .core import LlamaCppModel\n from .orca import OrcaMiniGgml\n from .vicuna import VicunaCensoredGgml\n from .wizardlm import WizardlmGgml\n\n baichuan_url_generator = lambda model_size, quantization: (\n f\"https://huggingface.co/TheBloke/baichuan-llama-{model_size}B-GGML/resolve/main/\"\n f\"baichuan-llama-{model_size}b.ggmlv3.{quantization}.bin\"\n )\n MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"baichuan\",\n model_format=\"ggmlv3\",\n model_sizes_in_billions=[7],\n quantizations=[\n \"q2_K\",\n \"q3_K_L\",\n \"q3_K_M\",\n \"q3_K_S\",\n \"q4_0\",\n \"q4_1\",\n \"q4_K_M\",\n \"q4_K_S\",\n \"q5_0\",\n \"q5_1\",\n \"q5_K_M\",\n \"q5_K_S\",\n \"q6_K\",\n \"q8_0\",\n ],\n url_generator=baichuan_url_generator,\n cls=LlamaCppModel,\n )\n )\n\n wizardlm_v1_0_url_generator = lambda model_size, quantization: (\n f\"https://huggingface.co/TheBloke/WizardLM-{model_size}B-V1.0-Uncensored-GGML/resolve/main/\"\n f\"wizardlm-{model_size}b-v1.0-uncensored.ggmlv3.{quantization}.bin\"\n )\n MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"wizardlm-v1.0\",\n model_sizes_in_billions=[7, 13, 33],\n model_format=\"ggmlv3\",\n quantizations=[\n \"q2_K\",\n \"q3_K_L\",\n \"q3_K_M\",\n \"q3_K_S\",\n \"q4_0\",\n \"q4_1\",\n \"q4_K_M\",\n \"q4_K_S\",\n \"q5_0\",\n \"q5_1\",\n \"q5_K_M\",\n \"q5_K_S\",\n \"q6_K\",\n \"q8_0\",\n ],\n url_generator=wizardlm_v1_0_url_generator,\n cls=WizardlmGgml,\n ),\n )\n\n vicuna_v1_3_url_generator = lambda model_size, quantization: (\n \"https://huggingface.co/TheBloke/vicuna-7B-v1.3-GGML/resolve/main/\"\n f\"vicuna-7b-v1.3.ggmlv3.{quantization}.bin\"\n if model_size == 7\n else (\n \"https://huggingface.co/TheBloke/vicuna-13b-v1.3.0-GGML/resolve/main/\"\n f\"vicuna-13b-v1.3.0.ggmlv3.{quantization}.bin\"\n if model_size == 13\n else f\"https://huggingface.co/TheBloke/vicuna-33B-GGML/resolve/main/vicuna-33b.ggmlv3.{quantization}.bin\"\n )\n )\n MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"vicuna-v1.3\",\n model_sizes_in_billions=[7, 13, 33],\n model_format=\"ggmlv3\",\n quantizations=[\n \"q2_K\",\n \"q3_K_L\",\n \"q3_K_M\",\n \"q3_K_S\",\n \"q4_0\",\n \"q4_1\",\n \"q4_K_M\",\n \"q4_K_S\",\n \"q5_0\",\n \"q5_1\",\n \"q5_K_M\",\n \"q5_K_S\",\n \"q6_K\",\n \"q8_0\",\n ],\n url_generator=vicuna_v1_3_url_generator,\n cls=VicunaCensoredGgml,\n ),\n )\n\n orca_url_generator = lambda model_size, quantization: (\n f\"https://huggingface.co/TheBloke/orca_mini_{model_size}B-GGML/resolve/main/orca-mini-\"\n f\"{model_size}b.ggmlv3.{quantization}.bin\"\n )\n MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"orca\",\n model_sizes_in_billions=[3, 7, 13],\n model_format=\"ggmlv3\",\n quantizations=[\n \"q4_0\",\n \"q4_1\",\n \"q5_0\",\n \"q5_1\",\n \"q8_0\",\n ],\n url_generator=orca_url_generator,\n cls=OrcaMiniGgml,\n )\n )\n\n chatglm_url_generator = lambda model_size, quantization: (\n f\"https://huggingface.co/Xorbits/chatglm-{model_size}B-GGML/resolve/main/\"\n f\"chatglm-ggml-{quantization}.bin\"\n )\n MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"chatglm\",\n model_sizes_in_billions=[6],\n model_format=\"ggmlv3\",\n quantizations=[\n \"q4_0\",\n \"q4_1\",\n \"q5_0\",\n \"q5_1\",\n \"q8_0\",\n ],\n url_generator=chatglm_url_generator,\n cls=ChatglmCppChatModel,\n )\n )\n\n chatglm2_url_generator = lambda model_size, quantization: (\n f\"https://huggingface.co/Xorbits/chatglm2-{model_size}B-GGML/resolve/main/\"\n f\"chatglm2-ggml-{quantization}.bin\"\n )\n 
MODEL_FAMILIES.append(\n ModelFamily(\n model_name=\"chatglm2\",\n model_sizes_in_billions=[6],\n model_format=\"ggmlv3\",\n quantizations=[\n \"q4_0\",\n \"q4_1\",\n \"q5_0\",\n \"q5_1\",\n \"q8_0\",\n ],\n url_generator=chatglm2_url_generator,\n cls=ChatglmCppChatModel,\n )\n )\n", "path": "xinference/model/llm/__init__.py"}]} | 2,825 | 259 |
gh_patches_debug_8817 | rasdani/github-patches | git_diff | svthalia__concrexit-1683 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ValueError: day is out of range for month
In GitLab by _thaliatechnicie on Mar 4, 2020, 19:20
Sentry Issue: [CONCREXIT-24](https://sentry.io/organizations/thalia/issues/1538288408/?referrer=gitlab_integration)
```
ValueError: day is out of range for month
(11 additional frame(s) were not displayed)
...
File "rest_framework/serializers.py", line 260, in data
self._data = self.to_representation(self.instance)
File "rest_framework/serializers.py", line 529, in to_representation
ret[field.field_name] = field.to_representation(attribute)
File "rest_framework/fields.py", line 1905, in to_representation
return method(value)
File "members/api/serializers.py", line 93, in _achievements
return member_achievements(instance.user)
File "members/services.py", line 72, in member_achievements
earliest = earliest.replace(year=earliest.year + mentor_year.year)
```
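A minimal reproduction of the underlying `datetime` behaviour (hypothetical dates): `date.replace` raises this exact error whenever a leap day is moved to a non-leap year, which is what happens above when `date.today()` falls on Feb 29 and the shifted year is not a leap year.
```python
from datetime import date

leap_day = date(2020, 2, 29)
leap_day.replace(year=2021)  # ValueError: day is out of range for month
```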
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/members/services.py`
Content:
```
1 """Services defined in the members package."""
2 from datetime import date, datetime
3 from typing import Callable, List, Dict, Any
4
5 from django.conf import settings
6 from django.db.models import Q, Count
7 from django.utils import timezone
8 from django.utils.translation import gettext
9
10 from members import emails
11 from members.models import Membership, Member
12 from utils.snippets import datetime_to_lectureyear
13
14
15 def _member_group_memberships(
16 member: Member, condition: Callable[[Membership], bool]
17 ) -> Dict[str, Any]:
18 """Determine the group membership of a user based on a condition.
19
20 :return: Object with group memberships
21 """
22 memberships = member.membergroupmembership_set.all()
23 data = {}
24
25 for membership in memberships:
26 if not condition(membership):
27 continue
28 period = {
29 "since": membership.since,
30 "until": membership.until,
31 "chair": membership.chair,
32 }
33
34 if hasattr(membership.group, "board"):
35 period["role"] = membership.role
36
37 if membership.until is None and hasattr(membership.group, "board"):
38 period["until"] = membership.group.board.until
39
40 name = membership.group.name
41 if data.get(name):
42 data[name]["periods"].append(period)
43 if data[name]["earliest"] > period["since"]:
44 data[name]["earliest"] = period["since"]
45 if period["until"] is None or (
46 data[name]["latest"] is not None
47 and data[name]["latest"] < period["until"]
48 ):
49 data[name]["latest"] = period["until"]
50 data[name]["periods"].sort(key=lambda x: x["since"])
51 else:
52 data[name] = {
53 "pk": membership.group.pk,
54 "active": membership.group.active,
55 "name": name,
56 "periods": [period],
57 "url": settings.BASE_URL + membership.group.get_absolute_url(),
58 "earliest": period["since"],
59 "latest": period["until"],
60 }
61 return data
62
63
64 def member_achievements(member) -> List:
65 """Derive a list of achievements of a member.
66
67 Committee and board memberships + mentorships
68 """
69 achievements = _member_group_memberships(
70 member,
71 lambda membership: (
72 hasattr(membership.group, "board") or hasattr(membership.group, "committee")
73 ),
74 )
75
76 mentor_years = member.mentorship_set.all()
77 for mentor_year in mentor_years:
78 name = "Mentor in {}".format(mentor_year.year)
79 # Ensure mentorships appear last but are sorted
80 earliest = date.today()
81 earliest = earliest.replace(year=earliest.year + mentor_year.year)
82 if not achievements.get(name):
83 achievements[name] = {
84 "name": name,
85 "earliest": earliest,
86 }
87 return sorted(achievements.values(), key=lambda x: x["earliest"])
88
89
90 def member_societies(member) -> List:
91 """Derive a list of societies a member was part of."""
92 societies = _member_group_memberships(
93 member, lambda membership: (hasattr(membership.group, "society"))
94 )
95 return sorted(societies.values(), key=lambda x: x["earliest"])
96
97
98 def gen_stats_member_type() -> Dict[str, int]:
99 """Generate a dictionary where every key is a member type with the value being the number of current members of that type."""
100 data = {}
101 for key, display in Membership.MEMBERSHIP_TYPES:
102 data[str(display)] = (
103 Membership.objects.filter(since__lte=date.today())
104 .filter(Q(until__isnull=True) | Q(until__gt=date.today()))
105 .filter(type=key)
106 .count()
107 )
108 return data
109
110
111 def gen_stats_year() -> Dict[str, Dict[str, int]]:
112 """Generate list with 6 entries, where each entry represents the total amount of Thalia members in a year.
113
114 The sixth element contains all the multi-year students.
115 """
116 stats_year = {}
117 current_year = datetime_to_lectureyear(date.today())
118
119 for i in range(5):
120 new = {}
121 for key, _ in Membership.MEMBERSHIP_TYPES:
122 new[key] = (
123 Membership.objects.filter(user__profile__starting_year=current_year - i)
124 .filter(since__lte=date.today())
125 .filter(Q(until__isnull=True) | Q(until__gt=date.today()))
126 .filter(type=key)
127 .count()
128 )
129 stats_year[str(current_year - i)] = new
130
131 # Add multi year members
132 new = {}
133 for key, _ in Membership.MEMBERSHIP_TYPES:
134 new[key] = (
135 Membership.objects.filter(user__profile__starting_year__lt=current_year - 4)
136 .filter(since__lte=date.today())
137 .filter(Q(until__isnull=True) | Q(until__gt=date.today()))
138 .filter(type=key)
139 .count()
140 )
141 stats_year[str(gettext("Older"))] = new
142
143 return stats_year
144
145
146 def verify_email_change(change_request) -> None:
147 """Mark the email change request as verified.
148
149 :param change_request: the email change request
150 """
151 change_request.verified = True
152 change_request.save()
153
154 process_email_change(change_request)
155
156
157 def confirm_email_change(change_request) -> None:
158 """Mark the email change request as verified.
159
160 :param change_request: the email change request
161 """
162 change_request.confirmed = True
163 change_request.save()
164
165 process_email_change(change_request)
166
167
168 def process_email_change(change_request) -> None:
169 """Change the user's email address if the request was completed and send the completion email.
170
171 :param change_request: the email change request
172 """
173 if not change_request.completed:
174 return
175
176 member = change_request.member
177 member.email = change_request.email
178 member.save()
179
180 emails.send_email_change_completion_message(change_request)
181
182
183 def execute_data_minimisation(dry_run=False, members=None) -> List[Member]:
184 """Clean the profiles of members/users of whom the last membership ended at least 31 days ago.
185
186 :param dry_run: does not really remove data if True
187 :param members: queryset of members to process, optional
188 :return: list of processed members
189 """
190 if not members:
191 members = Member.objects
192 members = (
193 members.annotate(membership_count=Count("membership"))
194 .exclude(
195 (
196 Q(membership__until__isnull=True)
197 | Q(membership__until__gt=timezone.now().date())
198 )
199 & Q(membership_count__gt=0)
200 )
201 .distinct()
202 .prefetch_related("membership_set", "profile")
203 )
204 deletion_period = timezone.now().date() - timezone.timedelta(days=31)
205 processed_members = []
206 for member in members:
207 if (
208 member.latest_membership is None
209 or member.latest_membership.until <= deletion_period
210 ):
211 processed_members.append(member)
212 profile = member.profile
213 profile.student_number = None
214 profile.phone_number = None
215 profile.address_street = "<removed> 1"
216 profile.address_street2 = None
217 profile.address_postal_code = "<removed>"
218 profile.address_city = "<removed>"
219 profile.address_country = "NL"
220 profile.birthday = datetime(1900, 1, 1)
221 profile.emergency_contact_phone_number = None
222 profile.emergency_contact = None
223 member.bank_accounts.all().delete()
224 if not dry_run:
225 profile.save()
226
227 return processed_members
228
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/members/services.py b/website/members/services.py
--- a/website/members/services.py
+++ b/website/members/services.py
@@ -79,6 +79,9 @@
# Ensure mentorships appear last but are sorted
earliest = date.today()
earliest = earliest.replace(year=earliest.year + mentor_year.year)
+ # Making sure it does not crash in leap years
+ if earliest.month == 2 and earliest.day == 29:
+ earliest = earliest.replace(day=28)
if not achievements.get(name):
achievements[name] = {
"name": name,
| {"golden_diff": "diff --git a/website/members/services.py b/website/members/services.py\n--- a/website/members/services.py\n+++ b/website/members/services.py\n@@ -79,6 +79,9 @@\n # Ensure mentorships appear last but are sorted\n earliest = date.today()\n earliest = earliest.replace(year=earliest.year + mentor_year.year)\n+ # Making sure it does not crash in leap years\n+ if earliest.month == 2 and earliest.day == 29:\n+ earliest = earliest.replace(day=28)\n if not achievements.get(name):\n achievements[name] = {\n \"name\": name,\n", "issue": "ValueError: day is out of range for month\nIn GitLab by _thaliatechnicie on Mar 4, 2020, 19:20\n\nSentry Issue: [CONCREXIT-24](https://sentry.io/organizations/thalia/issues/1538288408/?referrer=gitlab_integration)\n\n```\nValueError: day is out of range for month\n(11 additional frame(s) were not displayed)\n...\n File \"rest_framework/serializers.py\", line 260, in data\n self._data = self.to_representation(self.instance)\n File \"rest_framework/serializers.py\", line 529, in to_representation\n ret[field.field_name] = field.to_representation(attribute)\n File \"rest_framework/fields.py\", line 1905, in to_representation\n return method(value)\n File \"members/api/serializers.py\", line 93, in _achievements\n return member_achievements(instance.user)\n File \"members/services.py\", line 72, in member_achievements\n earliest = earliest.replace(year=earliest.year + mentor_year.year)\n```\n", "before_files": [{"content": "\"\"\"Services defined in the members package.\"\"\"\nfrom datetime import date, datetime\nfrom typing import Callable, List, Dict, Any\n\nfrom django.conf import settings\nfrom django.db.models import Q, Count\nfrom django.utils import timezone\nfrom django.utils.translation import gettext\n\nfrom members import emails\nfrom members.models import Membership, Member\nfrom utils.snippets import datetime_to_lectureyear\n\n\ndef _member_group_memberships(\n member: Member, condition: Callable[[Membership], bool]\n) -> Dict[str, Any]:\n \"\"\"Determine the group membership of a user based on a condition.\n\n :return: Object with group memberships\n \"\"\"\n memberships = member.membergroupmembership_set.all()\n data = {}\n\n for membership in memberships:\n if not condition(membership):\n continue\n period = {\n \"since\": membership.since,\n \"until\": membership.until,\n \"chair\": membership.chair,\n }\n\n if hasattr(membership.group, \"board\"):\n period[\"role\"] = membership.role\n\n if membership.until is None and hasattr(membership.group, \"board\"):\n period[\"until\"] = membership.group.board.until\n\n name = membership.group.name\n if data.get(name):\n data[name][\"periods\"].append(period)\n if data[name][\"earliest\"] > period[\"since\"]:\n data[name][\"earliest\"] = period[\"since\"]\n if period[\"until\"] is None or (\n data[name][\"latest\"] is not None\n and data[name][\"latest\"] < period[\"until\"]\n ):\n data[name][\"latest\"] = period[\"until\"]\n data[name][\"periods\"].sort(key=lambda x: x[\"since\"])\n else:\n data[name] = {\n \"pk\": membership.group.pk,\n \"active\": membership.group.active,\n \"name\": name,\n \"periods\": [period],\n \"url\": settings.BASE_URL + membership.group.get_absolute_url(),\n \"earliest\": period[\"since\"],\n \"latest\": period[\"until\"],\n }\n return data\n\n\ndef member_achievements(member) -> List:\n \"\"\"Derive a list of achievements of a member.\n\n Committee and board memberships + mentorships\n \"\"\"\n achievements = _member_group_memberships(\n member,\n lambda membership: 
(\n hasattr(membership.group, \"board\") or hasattr(membership.group, \"committee\")\n ),\n )\n\n mentor_years = member.mentorship_set.all()\n for mentor_year in mentor_years:\n name = \"Mentor in {}\".format(mentor_year.year)\n # Ensure mentorships appear last but are sorted\n earliest = date.today()\n earliest = earliest.replace(year=earliest.year + mentor_year.year)\n if not achievements.get(name):\n achievements[name] = {\n \"name\": name,\n \"earliest\": earliest,\n }\n return sorted(achievements.values(), key=lambda x: x[\"earliest\"])\n\n\ndef member_societies(member) -> List:\n \"\"\"Derive a list of societies a member was part of.\"\"\"\n societies = _member_group_memberships(\n member, lambda membership: (hasattr(membership.group, \"society\"))\n )\n return sorted(societies.values(), key=lambda x: x[\"earliest\"])\n\n\ndef gen_stats_member_type() -> Dict[str, int]:\n \"\"\"Generate a dictionary where every key is a member type with the value being the number of current members of that type.\"\"\"\n data = {}\n for key, display in Membership.MEMBERSHIP_TYPES:\n data[str(display)] = (\n Membership.objects.filter(since__lte=date.today())\n .filter(Q(until__isnull=True) | Q(until__gt=date.today()))\n .filter(type=key)\n .count()\n )\n return data\n\n\ndef gen_stats_year() -> Dict[str, Dict[str, int]]:\n \"\"\"Generate list with 6 entries, where each entry represents the total amount of Thalia members in a year.\n\n The sixth element contains all the multi-year students.\n \"\"\"\n stats_year = {}\n current_year = datetime_to_lectureyear(date.today())\n\n for i in range(5):\n new = {}\n for key, _ in Membership.MEMBERSHIP_TYPES:\n new[key] = (\n Membership.objects.filter(user__profile__starting_year=current_year - i)\n .filter(since__lte=date.today())\n .filter(Q(until__isnull=True) | Q(until__gt=date.today()))\n .filter(type=key)\n .count()\n )\n stats_year[str(current_year - i)] = new\n\n # Add multi year members\n new = {}\n for key, _ in Membership.MEMBERSHIP_TYPES:\n new[key] = (\n Membership.objects.filter(user__profile__starting_year__lt=current_year - 4)\n .filter(since__lte=date.today())\n .filter(Q(until__isnull=True) | Q(until__gt=date.today()))\n .filter(type=key)\n .count()\n )\n stats_year[str(gettext(\"Older\"))] = new\n\n return stats_year\n\n\ndef verify_email_change(change_request) -> None:\n \"\"\"Mark the email change request as verified.\n\n :param change_request: the email change request\n \"\"\"\n change_request.verified = True\n change_request.save()\n\n process_email_change(change_request)\n\n\ndef confirm_email_change(change_request) -> None:\n \"\"\"Mark the email change request as verified.\n\n :param change_request: the email change request\n \"\"\"\n change_request.confirmed = True\n change_request.save()\n\n process_email_change(change_request)\n\n\ndef process_email_change(change_request) -> None:\n \"\"\"Change the user's email address if the request was completed and send the completion email.\n\n :param change_request: the email change request\n \"\"\"\n if not change_request.completed:\n return\n\n member = change_request.member\n member.email = change_request.email\n member.save()\n\n emails.send_email_change_completion_message(change_request)\n\n\ndef execute_data_minimisation(dry_run=False, members=None) -> List[Member]:\n \"\"\"Clean the profiles of members/users of whom the last membership ended at least 31 days ago.\n\n :param dry_run: does not really remove data if True\n :param members: queryset of members to process, optional\n :return: 
list of processed members\n \"\"\"\n if not members:\n members = Member.objects\n members = (\n members.annotate(membership_count=Count(\"membership\"))\n .exclude(\n (\n Q(membership__until__isnull=True)\n | Q(membership__until__gt=timezone.now().date())\n )\n & Q(membership_count__gt=0)\n )\n .distinct()\n .prefetch_related(\"membership_set\", \"profile\")\n )\n deletion_period = timezone.now().date() - timezone.timedelta(days=31)\n processed_members = []\n for member in members:\n if (\n member.latest_membership is None\n or member.latest_membership.until <= deletion_period\n ):\n processed_members.append(member)\n profile = member.profile\n profile.student_number = None\n profile.phone_number = None\n profile.address_street = \"<removed> 1\"\n profile.address_street2 = None\n profile.address_postal_code = \"<removed>\"\n profile.address_city = \"<removed>\"\n profile.address_country = \"NL\"\n profile.birthday = datetime(1900, 1, 1)\n profile.emergency_contact_phone_number = None\n profile.emergency_contact = None\n member.bank_accounts.all().delete()\n if not dry_run:\n profile.save()\n\n return processed_members\n", "path": "website/members/services.py"}], "after_files": [{"content": "\"\"\"Services defined in the members package.\"\"\"\nfrom datetime import date, datetime\nfrom typing import Callable, List, Dict, Any\n\nfrom django.conf import settings\nfrom django.db.models import Q, Count\nfrom django.utils import timezone\nfrom django.utils.translation import gettext\n\nfrom members import emails\nfrom members.models import Membership, Member\nfrom utils.snippets import datetime_to_lectureyear\n\n\ndef _member_group_memberships(\n member: Member, condition: Callable[[Membership], bool]\n) -> Dict[str, Any]:\n \"\"\"Determine the group membership of a user based on a condition.\n\n :return: Object with group memberships\n \"\"\"\n memberships = member.membergroupmembership_set.all()\n data = {}\n\n for membership in memberships:\n if not condition(membership):\n continue\n period = {\n \"since\": membership.since,\n \"until\": membership.until,\n \"chair\": membership.chair,\n }\n\n if hasattr(membership.group, \"board\"):\n period[\"role\"] = membership.role\n\n if membership.until is None and hasattr(membership.group, \"board\"):\n period[\"until\"] = membership.group.board.until\n\n name = membership.group.name\n if data.get(name):\n data[name][\"periods\"].append(period)\n if data[name][\"earliest\"] > period[\"since\"]:\n data[name][\"earliest\"] = period[\"since\"]\n if period[\"until\"] is None or (\n data[name][\"latest\"] is not None\n and data[name][\"latest\"] < period[\"until\"]\n ):\n data[name][\"latest\"] = period[\"until\"]\n data[name][\"periods\"].sort(key=lambda x: x[\"since\"])\n else:\n data[name] = {\n \"pk\": membership.group.pk,\n \"active\": membership.group.active,\n \"name\": name,\n \"periods\": [period],\n \"url\": settings.BASE_URL + membership.group.get_absolute_url(),\n \"earliest\": period[\"since\"],\n \"latest\": period[\"until\"],\n }\n return data\n\n\ndef member_achievements(member) -> List:\n \"\"\"Derive a list of achievements of a member.\n\n Committee and board memberships + mentorships\n \"\"\"\n achievements = _member_group_memberships(\n member,\n lambda membership: (\n hasattr(membership.group, \"board\") or hasattr(membership.group, \"committee\")\n ),\n )\n\n mentor_years = member.mentorship_set.all()\n for mentor_year in mentor_years:\n name = \"Mentor in {}\".format(mentor_year.year)\n # Ensure mentorships appear last but are 
sorted\n earliest = date.today()\n earliest = earliest.replace(year=earliest.year + mentor_year.year)\n # Making sure it does not crash in leap years\n if earliest.month == 2 and earliest.day == 29:\n earliest = earliest.replace(day=28)\n if not achievements.get(name):\n achievements[name] = {\n \"name\": name,\n \"earliest\": earliest,\n }\n return sorted(achievements.values(), key=lambda x: x[\"earliest\"])\n\n\ndef member_societies(member) -> List:\n \"\"\"Derive a list of societies a member was part of.\"\"\"\n societies = _member_group_memberships(\n member, lambda membership: (hasattr(membership.group, \"society\"))\n )\n return sorted(societies.values(), key=lambda x: x[\"earliest\"])\n\n\ndef gen_stats_member_type() -> Dict[str, int]:\n \"\"\"Generate a dictionary where every key is a member type with the value being the number of current members of that type.\"\"\"\n data = {}\n for key, display in Membership.MEMBERSHIP_TYPES:\n data[str(display)] = (\n Membership.objects.filter(since__lte=date.today())\n .filter(Q(until__isnull=True) | Q(until__gt=date.today()))\n .filter(type=key)\n .count()\n )\n return data\n\n\ndef gen_stats_year() -> Dict[str, Dict[str, int]]:\n \"\"\"Generate list with 6 entries, where each entry represents the total amount of Thalia members in a year.\n\n The sixth element contains all the multi-year students.\n \"\"\"\n stats_year = {}\n current_year = datetime_to_lectureyear(date.today())\n\n for i in range(5):\n new = {}\n for key, _ in Membership.MEMBERSHIP_TYPES:\n new[key] = (\n Membership.objects.filter(user__profile__starting_year=current_year - i)\n .filter(since__lte=date.today())\n .filter(Q(until__isnull=True) | Q(until__gt=date.today()))\n .filter(type=key)\n .count()\n )\n stats_year[str(current_year - i)] = new\n\n # Add multi year members\n new = {}\n for key, _ in Membership.MEMBERSHIP_TYPES:\n new[key] = (\n Membership.objects.filter(user__profile__starting_year__lt=current_year - 4)\n .filter(since__lte=date.today())\n .filter(Q(until__isnull=True) | Q(until__gt=date.today()))\n .filter(type=key)\n .count()\n )\n stats_year[str(gettext(\"Older\"))] = new\n\n return stats_year\n\n\ndef verify_email_change(change_request) -> None:\n \"\"\"Mark the email change request as verified.\n\n :param change_request: the email change request\n \"\"\"\n change_request.verified = True\n change_request.save()\n\n process_email_change(change_request)\n\n\ndef confirm_email_change(change_request) -> None:\n \"\"\"Mark the email change request as verified.\n\n :param change_request: the email change request\n \"\"\"\n change_request.confirmed = True\n change_request.save()\n\n process_email_change(change_request)\n\n\ndef process_email_change(change_request) -> None:\n \"\"\"Change the user's email address if the request was completed and send the completion email.\n\n :param change_request: the email change request\n \"\"\"\n if not change_request.completed:\n return\n\n member = change_request.member\n member.email = change_request.email\n member.save()\n\n emails.send_email_change_completion_message(change_request)\n\n\ndef execute_data_minimisation(dry_run=False, members=None) -> List[Member]:\n \"\"\"Clean the profiles of members/users of whom the last membership ended at least 31 days ago.\n\n :param dry_run: does not really remove data if True\n :param members: queryset of members to process, optional\n :return: list of processed members\n \"\"\"\n if not members:\n members = Member.objects\n members = (\n 
members.annotate(membership_count=Count(\"membership\"))\n .exclude(\n (\n Q(membership__until__isnull=True)\n | Q(membership__until__gt=timezone.now().date())\n )\n & Q(membership_count__gt=0)\n )\n .distinct()\n .prefetch_related(\"membership_set\", \"profile\")\n )\n deletion_period = timezone.now().date() - timezone.timedelta(days=31)\n processed_members = []\n for member in members:\n if (\n member.latest_membership is None\n or member.latest_membership.until <= deletion_period\n ):\n processed_members.append(member)\n profile = member.profile\n profile.student_number = None\n profile.phone_number = None\n profile.address_street = \"<removed> 1\"\n profile.address_street2 = None\n profile.address_postal_code = \"<removed>\"\n profile.address_city = \"<removed>\"\n profile.address_country = \"NL\"\n profile.birthday = datetime(1900, 1, 1)\n profile.emergency_contact_phone_number = None\n profile.emergency_contact = None\n member.bank_accounts.all().delete()\n if not dry_run:\n profile.save()\n\n return processed_members\n", "path": "website/members/services.py"}]} | 2,734 | 139 |
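
The patch captured in the record above guards `date.replace(year=...)` against February 29. As a minimal, self-contained sketch of that defensive pattern (plain `datetime.date` values and helper names of my own choosing, not the project's code):

```python
from datetime import date

def shift_years(d: date, years: int) -> date:
    """Add whole years to a date without raising on Feb 29.

    date.replace(year=...) raises ValueError ("day is out of range for
    month") when the shifted date would be Feb 29 of a non-leap year,
    which is the crash in the traceback above; clamping that single case
    to Feb 28 avoids it.
    """
    if d.month == 2 and d.day == 29:
        d = d.replace(day=28)
    return d.replace(year=d.year + years)

# shift_years(date(2020, 2, 29), 1) -> date(2021, 2, 28) instead of ValueError
```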
gh_patches_debug_7422 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-1091 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update CHANGELOG for 0.7.x
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
Updated CHANGELOG according to the reset changes (about last two weeks) especially deprecated items like `data_loader` or `xxxxx_end`
### Additional context
<!-- Add any other context about the problem here. -->
https://github.com/PyTorchLightning/pytorch-lightning/milestone/4
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_lightning/core/decorators.py`
Content:
```
1 import traceback
2 from functools import wraps
3 import warnings
4
5
6 def data_loader(fn):
7 """Decorator to make any fx with this use the lazy property.
8
9 :param fn:
10 :return:
11 """
12 w = 'data_loader decorator deprecated in 0.7.0. Will remove 0.9.0'
13 warnings.warn(w)
14
15 def inner_fx(self):
16 return fn(self)
17 return inner_fx
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pytorch_lightning/core/decorators.py b/pytorch_lightning/core/decorators.py
--- a/pytorch_lightning/core/decorators.py
+++ b/pytorch_lightning/core/decorators.py
@@ -6,11 +6,10 @@
def data_loader(fn):
"""Decorator to make any fx with this use the lazy property.
- :param fn:
- :return:
+ Warnings:
+ This decorator deprecated in v0.7.0 and it will be removed v0.9.0.
"""
- w = 'data_loader decorator deprecated in 0.7.0. Will remove 0.9.0'
- warnings.warn(w)
+ warnings.warn('`data_loader` decorator deprecated in v0.7.0. Will be removed v0.9.0', DeprecationWarning)
def inner_fx(self):
return fn(self)
| {"golden_diff": "diff --git a/pytorch_lightning/core/decorators.py b/pytorch_lightning/core/decorators.py\n--- a/pytorch_lightning/core/decorators.py\n+++ b/pytorch_lightning/core/decorators.py\n@@ -6,11 +6,10 @@\n def data_loader(fn):\n \"\"\"Decorator to make any fx with this use the lazy property.\n \n- :param fn:\n- :return:\n+ Warnings:\n+ This decorator deprecated in v0.7.0 and it will be removed v0.9.0.\n \"\"\"\n- w = 'data_loader decorator deprecated in 0.7.0. Will remove 0.9.0'\n- warnings.warn(w)\n+ warnings.warn('`data_loader` decorator deprecated in v0.7.0. Will be removed v0.9.0', DeprecationWarning)\n \n def inner_fx(self):\n return fn(self)\n", "issue": "Update CHANGELOG for 0.7.x\n## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nUpdated CHANGELOG according to the reset changes (about last two weeks) especially deprecated items like `data_loader` or `xxxxx_end`\r\n\r\n### Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\r\nhttps://github.com/PyTorchLightning/pytorch-lightning/milestone/4\n", "before_files": [{"content": "import traceback\nfrom functools import wraps\nimport warnings\n\n\ndef data_loader(fn):\n \"\"\"Decorator to make any fx with this use the lazy property.\n\n :param fn:\n :return:\n \"\"\"\n w = 'data_loader decorator deprecated in 0.7.0. Will remove 0.9.0'\n warnings.warn(w)\n\n def inner_fx(self):\n return fn(self)\n return inner_fx\n", "path": "pytorch_lightning/core/decorators.py"}], "after_files": [{"content": "import traceback\nfrom functools import wraps\nimport warnings\n\n\ndef data_loader(fn):\n \"\"\"Decorator to make any fx with this use the lazy property.\n\n Warnings:\n This decorator deprecated in v0.7.0 and it will be removed v0.9.0.\n \"\"\"\n warnings.warn('`data_loader` decorator deprecated in v0.7.0. Will be removed v0.9.0', DeprecationWarning)\n\n def inner_fx(self):\n return fn(self)\n return inner_fx\n", "path": "pytorch_lightning/core/decorators.py"}]} | 473 | 200 |
gh_patches_debug_12714 | rasdani/github-patches | git_diff | pypi__warehouse-12792 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve Alembic story
Fixes #10053.
Adds `alembic.ini`.
Runs `black` and `isort` after generating migrations.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/db.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import functools
14 import logging
15
16 import alembic.config
17 import pyramid_retry
18 import sqlalchemy
19 import venusian
20 import zope.sqlalchemy
21
22 from sqlalchemy import event, inspect
23 from sqlalchemy.dialects.postgresql import UUID
24 from sqlalchemy.exc import IntegrityError, OperationalError
25 from sqlalchemy.ext.declarative import declarative_base # type: ignore
26 from sqlalchemy.orm import sessionmaker
27
28 from warehouse.metrics import IMetricsService
29 from warehouse.utils.attrs import make_repr
30
31 __all__ = ["includeme", "metadata", "ModelBase"]
32
33
34 logger = logging.getLogger(__name__)
35
36
37 DEFAULT_ISOLATION = "READ COMMITTED"
38
39
40 # On the surface this might seem wrong, because retrying a request whose data violates
41 # the constraints of the database doesn't seem like a useful endeavor. However what
42 # happens if you have two requests that are trying to insert a row, and that row
43 # contains a unique, user provided value, you can get into a race condition where both
44 # requests check the database, see nothing with that value exists, then both attempt to
45 # insert it. One of the requests will succeed, the other will fail with an
46 # IntegrityError. Retrying the request that failed will then have it see the object
47 # created by the other request, and will have it do the appropriate action in that case.
48 #
49 # The most common way to run into this, is when submitting a form in the browser, if the
50 # user clicks twice in rapid succession, the browser will send two almost identical
51 # requests at basically the same time.
52 #
53 # One possible issue that this raises, is that it will slow down "legitimate"
54 # IntegrityError because they'll have to fail multiple times before they ultimately
55 # fail. We consider this an acceptable trade off, because deterministic IntegrityError
56 # should be caught with proper validation prior to submitting records to the database
57 # anyways.
58 pyramid_retry.mark_error_retryable(IntegrityError)
59
60
61 # A generic wrapper exception that we'll raise when the database isn't available, we
62 # use this so we can catch it later and turn it into a generic 5xx error.
63 class DatabaseNotAvailableError(Exception):
64 ...
65
66
67 class ModelBase:
68 def __repr__(self):
69 inst = inspect(self)
70 self.__repr__ = make_repr(
71 *[c_attr.key for c_attr in inst.mapper.column_attrs], _self=self
72 )
73 return self.__repr__()
74
75
76 # The Global metadata object.
77 metadata = sqlalchemy.MetaData()
78
79
80 # Base class for models using declarative syntax
81 ModelBase = declarative_base(cls=ModelBase, metadata=metadata) # type: ignore
82
83
84 class Model(ModelBase):
85
86 __abstract__ = True
87
88 id = sqlalchemy.Column(
89 UUID(as_uuid=True),
90 primary_key=True,
91 server_default=sqlalchemy.text("gen_random_uuid()"),
92 )
93
94
95 # Create our session class here, this will stay stateless as we'll bind the
96 # engine to each new state we create instead of binding it to the session
97 # class.
98 Session = sessionmaker()
99
100
101 def listens_for(target, identifier, *args, **kwargs):
102 def deco(wrapped):
103 def callback(scanner, _name, wrapped):
104 wrapped = functools.partial(wrapped, scanner.config)
105 event.listen(target, identifier, wrapped, *args, **kwargs)
106
107 venusian.attach(wrapped, callback, category="warehouse")
108
109 return wrapped
110
111 return deco
112
113
114 def _configure_alembic(config):
115 alembic_cfg = alembic.config.Config()
116 alembic_cfg.set_main_option("script_location", "warehouse:migrations")
117 alembic_cfg.set_main_option("url", config.registry.settings["database.url"])
118 return alembic_cfg
119
120
121 def _create_session(request):
122 metrics = request.find_service(IMetricsService, context=None)
123 metrics.increment("warehouse.db.session.start")
124
125 # Create our connection, most likely pulling it from the pool of
126 # connections
127 try:
128 connection = request.registry["sqlalchemy.engine"].connect()
129 except OperationalError:
130 # When we tried to connection to PostgreSQL, our database was not available for
131 # some reason. We're going to log it here and then raise our error. Most likely
132 # this is a transient error that will go away.
133 logger.warning("Got an error connecting to PostgreSQL", exc_info=True)
134 metrics.increment("warehouse.db.session.error", tags=["error_in:connecting"])
135 raise DatabaseNotAvailableError()
136
137 # Now, create a session from our connection
138 session = Session(bind=connection)
139
140 # Register only this particular session with zope.sqlalchemy
141 zope.sqlalchemy.register(session, transaction_manager=request.tm)
142
143 # Setup a callback that will ensure that everything is cleaned up at the
144 # end of our connection.
145 @request.add_finished_callback
146 def cleanup(request):
147 metrics.increment("warehouse.db.session.finished")
148 session.close()
149 connection.close()
150
151 # Check if we're in read-only mode
152 from warehouse.admin.flags import AdminFlag, AdminFlagValue
153
154 flag = session.query(AdminFlag).get(AdminFlagValue.READ_ONLY.value)
155 if flag and flag.enabled:
156 request.tm.doom()
157
158 # Return our session now that it's created and registered
159 return session
160
161
162 def includeme(config):
163 # Add a directive to get an alembic configuration.
164 config.add_directive("alembic_config", _configure_alembic)
165
166 # Create our SQLAlchemy Engine.
167 config.registry["sqlalchemy.engine"] = sqlalchemy.create_engine(
168 config.registry.settings["database.url"],
169 isolation_level=DEFAULT_ISOLATION,
170 pool_size=35,
171 max_overflow=65,
172 pool_timeout=20,
173 )
174
175 # Register our request.db property
176 config.add_request_method(_create_session, name="db", reify=True)
177
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/db.py b/warehouse/db.py
--- a/warehouse/db.py
+++ b/warehouse/db.py
@@ -115,6 +115,11 @@
alembic_cfg = alembic.config.Config()
alembic_cfg.set_main_option("script_location", "warehouse:migrations")
alembic_cfg.set_main_option("url", config.registry.settings["database.url"])
+ alembic_cfg.set_section_option("post_write_hooks", "hooks", "black, isort")
+ alembic_cfg.set_section_option("post_write_hooks", "black.type", "console_scripts")
+ alembic_cfg.set_section_option("post_write_hooks", "black.entrypoint", "black")
+ alembic_cfg.set_section_option("post_write_hooks", "isort.type", "console_scripts")
+ alembic_cfg.set_section_option("post_write_hooks", "isort.entrypoint", "isort")
return alembic_cfg
| {"golden_diff": "diff --git a/warehouse/db.py b/warehouse/db.py\n--- a/warehouse/db.py\n+++ b/warehouse/db.py\n@@ -115,6 +115,11 @@\n alembic_cfg = alembic.config.Config()\n alembic_cfg.set_main_option(\"script_location\", \"warehouse:migrations\")\n alembic_cfg.set_main_option(\"url\", config.registry.settings[\"database.url\"])\n+ alembic_cfg.set_section_option(\"post_write_hooks\", \"hooks\", \"black, isort\")\n+ alembic_cfg.set_section_option(\"post_write_hooks\", \"black.type\", \"console_scripts\")\n+ alembic_cfg.set_section_option(\"post_write_hooks\", \"black.entrypoint\", \"black\")\n+ alembic_cfg.set_section_option(\"post_write_hooks\", \"isort.type\", \"console_scripts\")\n+ alembic_cfg.set_section_option(\"post_write_hooks\", \"isort.entrypoint\", \"isort\")\n return alembic_cfg\n", "issue": "Improve Alembic story\nFixes #10053.\r\n\r\nAdds `alembic.ini`.\r\nRuns `black` and `isort` after generating migrations.\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport functools\nimport logging\n\nimport alembic.config\nimport pyramid_retry\nimport sqlalchemy\nimport venusian\nimport zope.sqlalchemy\n\nfrom sqlalchemy import event, inspect\nfrom sqlalchemy.dialects.postgresql import UUID\nfrom sqlalchemy.exc import IntegrityError, OperationalError\nfrom sqlalchemy.ext.declarative import declarative_base # type: ignore\nfrom sqlalchemy.orm import sessionmaker\n\nfrom warehouse.metrics import IMetricsService\nfrom warehouse.utils.attrs import make_repr\n\n__all__ = [\"includeme\", \"metadata\", \"ModelBase\"]\n\n\nlogger = logging.getLogger(__name__)\n\n\nDEFAULT_ISOLATION = \"READ COMMITTED\"\n\n\n# On the surface this might seem wrong, because retrying a request whose data violates\n# the constraints of the database doesn't seem like a useful endeavor. However what\n# happens if you have two requests that are trying to insert a row, and that row\n# contains a unique, user provided value, you can get into a race condition where both\n# requests check the database, see nothing with that value exists, then both attempt to\n# insert it. One of the requests will succeed, the other will fail with an\n# IntegrityError. Retrying the request that failed will then have it see the object\n# created by the other request, and will have it do the appropriate action in that case.\n#\n# The most common way to run into this, is when submitting a form in the browser, if the\n# user clicks twice in rapid succession, the browser will send two almost identical\n# requests at basically the same time.\n#\n# One possible issue that this raises, is that it will slow down \"legitimate\"\n# IntegrityError because they'll have to fail multiple times before they ultimately\n# fail. 
We consider this an acceptable trade off, because deterministic IntegrityError\n# should be caught with proper validation prior to submitting records to the database\n# anyways.\npyramid_retry.mark_error_retryable(IntegrityError)\n\n\n# A generic wrapper exception that we'll raise when the database isn't available, we\n# use this so we can catch it later and turn it into a generic 5xx error.\nclass DatabaseNotAvailableError(Exception):\n ...\n\n\nclass ModelBase:\n def __repr__(self):\n inst = inspect(self)\n self.__repr__ = make_repr(\n *[c_attr.key for c_attr in inst.mapper.column_attrs], _self=self\n )\n return self.__repr__()\n\n\n# The Global metadata object.\nmetadata = sqlalchemy.MetaData()\n\n\n# Base class for models using declarative syntax\nModelBase = declarative_base(cls=ModelBase, metadata=metadata) # type: ignore\n\n\nclass Model(ModelBase):\n\n __abstract__ = True\n\n id = sqlalchemy.Column(\n UUID(as_uuid=True),\n primary_key=True,\n server_default=sqlalchemy.text(\"gen_random_uuid()\"),\n )\n\n\n# Create our session class here, this will stay stateless as we'll bind the\n# engine to each new state we create instead of binding it to the session\n# class.\nSession = sessionmaker()\n\n\ndef listens_for(target, identifier, *args, **kwargs):\n def deco(wrapped):\n def callback(scanner, _name, wrapped):\n wrapped = functools.partial(wrapped, scanner.config)\n event.listen(target, identifier, wrapped, *args, **kwargs)\n\n venusian.attach(wrapped, callback, category=\"warehouse\")\n\n return wrapped\n\n return deco\n\n\ndef _configure_alembic(config):\n alembic_cfg = alembic.config.Config()\n alembic_cfg.set_main_option(\"script_location\", \"warehouse:migrations\")\n alembic_cfg.set_main_option(\"url\", config.registry.settings[\"database.url\"])\n return alembic_cfg\n\n\ndef _create_session(request):\n metrics = request.find_service(IMetricsService, context=None)\n metrics.increment(\"warehouse.db.session.start\")\n\n # Create our connection, most likely pulling it from the pool of\n # connections\n try:\n connection = request.registry[\"sqlalchemy.engine\"].connect()\n except OperationalError:\n # When we tried to connection to PostgreSQL, our database was not available for\n # some reason. We're going to log it here and then raise our error. 
Most likely\n # this is a transient error that will go away.\n logger.warning(\"Got an error connecting to PostgreSQL\", exc_info=True)\n metrics.increment(\"warehouse.db.session.error\", tags=[\"error_in:connecting\"])\n raise DatabaseNotAvailableError()\n\n # Now, create a session from our connection\n session = Session(bind=connection)\n\n # Register only this particular session with zope.sqlalchemy\n zope.sqlalchemy.register(session, transaction_manager=request.tm)\n\n # Setup a callback that will ensure that everything is cleaned up at the\n # end of our connection.\n @request.add_finished_callback\n def cleanup(request):\n metrics.increment(\"warehouse.db.session.finished\")\n session.close()\n connection.close()\n\n # Check if we're in read-only mode\n from warehouse.admin.flags import AdminFlag, AdminFlagValue\n\n flag = session.query(AdminFlag).get(AdminFlagValue.READ_ONLY.value)\n if flag and flag.enabled:\n request.tm.doom()\n\n # Return our session now that it's created and registered\n return session\n\n\ndef includeme(config):\n # Add a directive to get an alembic configuration.\n config.add_directive(\"alembic_config\", _configure_alembic)\n\n # Create our SQLAlchemy Engine.\n config.registry[\"sqlalchemy.engine\"] = sqlalchemy.create_engine(\n config.registry.settings[\"database.url\"],\n isolation_level=DEFAULT_ISOLATION,\n pool_size=35,\n max_overflow=65,\n pool_timeout=20,\n )\n\n # Register our request.db property\n config.add_request_method(_create_session, name=\"db\", reify=True)\n", "path": "warehouse/db.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport functools\nimport logging\n\nimport alembic.config\nimport pyramid_retry\nimport sqlalchemy\nimport venusian\nimport zope.sqlalchemy\n\nfrom sqlalchemy import event, inspect\nfrom sqlalchemy.dialects.postgresql import UUID\nfrom sqlalchemy.exc import IntegrityError, OperationalError\nfrom sqlalchemy.ext.declarative import declarative_base # type: ignore\nfrom sqlalchemy.orm import sessionmaker\n\nfrom warehouse.metrics import IMetricsService\nfrom warehouse.utils.attrs import make_repr\n\n__all__ = [\"includeme\", \"metadata\", \"ModelBase\"]\n\n\nlogger = logging.getLogger(__name__)\n\n\nDEFAULT_ISOLATION = \"READ COMMITTED\"\n\n\n# On the surface this might seem wrong, because retrying a request whose data violates\n# the constraints of the database doesn't seem like a useful endeavor. However what\n# happens if you have two requests that are trying to insert a row, and that row\n# contains a unique, user provided value, you can get into a race condition where both\n# requests check the database, see nothing with that value exists, then both attempt to\n# insert it. One of the requests will succeed, the other will fail with an\n# IntegrityError. 
Retrying the request that failed will then have it see the object\n# created by the other request, and will have it do the appropriate action in that case.\n#\n# The most common way to run into this, is when submitting a form in the browser, if the\n# user clicks twice in rapid succession, the browser will send two almost identical\n# requests at basically the same time.\n#\n# One possible issue that this raises, is that it will slow down \"legitimate\"\n# IntegrityError because they'll have to fail multiple times before they ultimately\n# fail. We consider this an acceptable trade off, because deterministic IntegrityError\n# should be caught with proper validation prior to submitting records to the database\n# anyways.\npyramid_retry.mark_error_retryable(IntegrityError)\n\n\n# A generic wrapper exception that we'll raise when the database isn't available, we\n# use this so we can catch it later and turn it into a generic 5xx error.\nclass DatabaseNotAvailableError(Exception):\n ...\n\n\nclass ModelBase:\n def __repr__(self):\n inst = inspect(self)\n self.__repr__ = make_repr(\n *[c_attr.key for c_attr in inst.mapper.column_attrs], _self=self\n )\n return self.__repr__()\n\n\n# The Global metadata object.\nmetadata = sqlalchemy.MetaData()\n\n\n# Base class for models using declarative syntax\nModelBase = declarative_base(cls=ModelBase, metadata=metadata) # type: ignore\n\n\nclass Model(ModelBase):\n\n __abstract__ = True\n\n id = sqlalchemy.Column(\n UUID(as_uuid=True),\n primary_key=True,\n server_default=sqlalchemy.text(\"gen_random_uuid()\"),\n )\n\n\n# Create our session class here, this will stay stateless as we'll bind the\n# engine to each new state we create instead of binding it to the session\n# class.\nSession = sessionmaker()\n\n\ndef listens_for(target, identifier, *args, **kwargs):\n def deco(wrapped):\n def callback(scanner, _name, wrapped):\n wrapped = functools.partial(wrapped, scanner.config)\n event.listen(target, identifier, wrapped, *args, **kwargs)\n\n venusian.attach(wrapped, callback, category=\"warehouse\")\n\n return wrapped\n\n return deco\n\n\ndef _configure_alembic(config):\n alembic_cfg = alembic.config.Config()\n alembic_cfg.set_main_option(\"script_location\", \"warehouse:migrations\")\n alembic_cfg.set_main_option(\"url\", config.registry.settings[\"database.url\"])\n alembic_cfg.set_section_option(\"post_write_hooks\", \"hooks\", \"black, isort\")\n alembic_cfg.set_section_option(\"post_write_hooks\", \"black.type\", \"console_scripts\")\n alembic_cfg.set_section_option(\"post_write_hooks\", \"black.entrypoint\", \"black\")\n alembic_cfg.set_section_option(\"post_write_hooks\", \"isort.type\", \"console_scripts\")\n alembic_cfg.set_section_option(\"post_write_hooks\", \"isort.entrypoint\", \"isort\")\n return alembic_cfg\n\n\ndef _create_session(request):\n metrics = request.find_service(IMetricsService, context=None)\n metrics.increment(\"warehouse.db.session.start\")\n\n # Create our connection, most likely pulling it from the pool of\n # connections\n try:\n connection = request.registry[\"sqlalchemy.engine\"].connect()\n except OperationalError:\n # When we tried to connection to PostgreSQL, our database was not available for\n # some reason. We're going to log it here and then raise our error. 
Most likely\n # this is a transient error that will go away.\n logger.warning(\"Got an error connecting to PostgreSQL\", exc_info=True)\n metrics.increment(\"warehouse.db.session.error\", tags=[\"error_in:connecting\"])\n raise DatabaseNotAvailableError()\n\n # Now, create a session from our connection\n session = Session(bind=connection)\n\n # Register only this particular session with zope.sqlalchemy\n zope.sqlalchemy.register(session, transaction_manager=request.tm)\n\n # Setup a callback that will ensure that everything is cleaned up at the\n # end of our connection.\n @request.add_finished_callback\n def cleanup(request):\n metrics.increment(\"warehouse.db.session.finished\")\n session.close()\n connection.close()\n\n # Check if we're in read-only mode\n from warehouse.admin.flags import AdminFlag, AdminFlagValue\n\n flag = session.query(AdminFlag).get(AdminFlagValue.READ_ONLY.value)\n if flag and flag.enabled:\n request.tm.doom()\n\n # Return our session now that it's created and registered\n return session\n\n\ndef includeme(config):\n # Add a directive to get an alembic configuration.\n config.add_directive(\"alembic_config\", _configure_alembic)\n\n # Create our SQLAlchemy Engine.\n config.registry[\"sqlalchemy.engine\"] = sqlalchemy.create_engine(\n config.registry.settings[\"database.url\"],\n isolation_level=DEFAULT_ISOLATION,\n pool_size=35,\n max_overflow=65,\n pool_timeout=20,\n )\n\n # Register our request.db property\n config.add_request_method(_create_session, name=\"db\", reify=True)\n", "path": "warehouse/db.py"}]} | 2,093 | 211 |
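
For context on the diff above: Alembic's post-write hooks can be configured either in `alembic.ini` or, as here, programmatically on a `Config` object. A condensed sketch of the patched helper, with the Pyramid registry replaced by a plain `database_url` argument purely for illustration:

```python
import alembic.config

def make_alembic_config(database_url: str) -> alembic.config.Config:
    # Equivalent to a [post_write_hooks] section in alembic.ini: newly
    # generated migration files are run through black and isort.
    cfg = alembic.config.Config()
    cfg.set_main_option("script_location", "warehouse:migrations")
    cfg.set_main_option("url", database_url)
    cfg.set_section_option("post_write_hooks", "hooks", "black, isort")
    for tool in ("black", "isort"):
        cfg.set_section_option("post_write_hooks", f"{tool}.type", "console_scripts")
        cfg.set_section_option("post_write_hooks", f"{tool}.entrypoint", tool)
    return cfg
```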
gh_patches_debug_38787 | rasdani/github-patches | git_diff | Kinto__kinto-1284 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
500 when creating a new account with a POST and forgetting to put the ID
```
File "kinto/plugins/accounts/views.py", line 112, in process_record
if new[self.model.id_field] != self.request.selected_userid:
KeyError: 'id'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/plugins/accounts/__init__.py`
Content:
```
1 from kinto.authorization import PERMISSIONS_INHERITANCE_TREE
2 from pyramid.exceptions import ConfigurationError
3
4
5 def includeme(config):
6 config.add_api_capability(
7 'accounts',
8 description='Manage user accounts.',
9 url='https://kinto.readthedocs.io/en/latest/api/1.x/accounts.html')
10
11 config.scan('kinto.plugins.accounts.views')
12
13 PERMISSIONS_INHERITANCE_TREE[''].update({
14 'account:create': {}
15 })
16 PERMISSIONS_INHERITANCE_TREE['account'] = {
17 'write': {'account': ['write']},
18 'read': {'account': ['write', 'read']}
19 }
20
21 # Add some safety to avoid weird behaviour with basicauth default policy.
22 settings = config.get_settings()
23 auth_policies = settings['multiauth.policies']
24 if 'basicauth' in auth_policies and 'account' in auth_policies:
25 if auth_policies.index('basicauth') < auth_policies.index('account'):
26 error_msg = ("'basicauth' should not be mentioned before 'account' "
27 "in 'multiauth.policies' setting.")
28 raise ConfigurationError(error_msg)
29
```
Path: `kinto/plugins/accounts/views.py`
Content:
```
1 import bcrypt
2 import colander
3 from pyramid import httpexceptions
4 from pyramid.decorator import reify
5 from pyramid.security import Authenticated, Everyone
6 from pyramid.settings import aslist
7
8 from kinto.views import NameGenerator
9 from kinto.core import resource
10 from kinto.core.errors import raise_invalid, http_error
11
12
13 def _extract_posted_body_id(request):
14 try:
15 # Anonymous creation with POST.
16 return request.json['data']['id']
17 except (ValueError, KeyError):
18 # Bad POST data.
19 if request.method.lower() == 'post':
20 error_details = {
21 'name': 'data.id',
22 'description': 'data.id in body: Required'
23 }
24 raise_invalid(request, **error_details)
25 # Anonymous GET
26 error_msg = 'Cannot read accounts.'
27 raise http_error(httpexceptions.HTTPUnauthorized(), error=error_msg)
28
29
30 class AccountSchema(resource.ResourceSchema):
31 password = colander.SchemaNode(colander.String())
32
33
34 @resource.register()
35 class Account(resource.ShareableResource):
36
37 schema = AccountSchema
38
39 def __init__(self, request, context):
40 # Store if current user is administrator (before accessing get_parent_id())
41 allowed_from_settings = request.registry.settings.get('account_write_principals', [])
42 context.is_administrator = len(set(aslist(allowed_from_settings)) &
43 set(request.prefixed_principals)) > 0
44 # Shortcut to check if current is anonymous (before get_parent_id()).
45 context.is_anonymous = Authenticated not in request.effective_principals
46
47 super().__init__(request, context)
48
49 # Overwrite the current principal set by ShareableResource.
50 if self.model.current_principal == Everyone or context.is_administrator:
51 # Creation is anonymous, but author with write perm is this:
52 # XXX: only works if policy name is account in settings.
53 self.model.current_principal = 'account:{}'.format(self.model.parent_id)
54
55 @reify
56 def id_generator(self):
57 # This generator is used for ID validation.
58 return NameGenerator()
59
60 def get_parent_id(self, request):
61 # The whole challenge here is that we want to isolate what
62 # authenticated users can list, but give access to everything to
63 # administrators.
64 # Plus when anonymous create accounts, we have to set their parent id
65 # to the same value they would obtain when authenticated.
66 if self.context.is_administrator:
67 if self.context.on_collection:
68 # Accounts created by admin should have userid as parent.
69 if request.method.lower() == 'post':
70 return _extract_posted_body_id(request)
71 else:
72 # Admin see all accounts.
73 return '*'
74 else:
75 # No pattern matching for admin on single record.
76 return request.matchdict['id']
77
78 if not self.context.is_anonymous:
79 # Authenticated users see their own account only.
80 return request.selected_userid
81
82 # Anonymous creation with PUT.
83 if 'id' in request.matchdict:
84 return request.matchdict['id']
85
86 return _extract_posted_body_id(request)
87
88 def collection_post(self):
89 result = super(Account, self).collection_post()
90 if self.context.is_anonymous and self.request.response.status_code == 200:
91 error_details = {
92 'message': 'Account ID %r already exists' % result['data']['id']
93 }
94 raise http_error(httpexceptions.HTTPForbidden(), **error_details)
95 return result
96
97 def process_record(self, new, old=None):
98 new = super(Account, self).process_record(new, old)
99
100 # Store password safely in database as str
101 # (bcrypt.hashpw returns base64 bytes).
102 pwd_str = new["password"].encode(encoding='utf-8')
103 hashed = bcrypt.hashpw(pwd_str, bcrypt.gensalt())
104 new["password"] = hashed.decode(encoding='utf-8')
105
106 # Administrators can reach other accounts and anonymous have no
107 # selected_userid. So do not try to enforce.
108 if self.context.is_administrator or self.context.is_anonymous:
109 return new
110
111 # Otherwise, we force the id to match the authenticated username.
112 if new[self.model.id_field] != self.request.selected_userid:
113 error_details = {
114 'name': 'data.id',
115 'description': 'Username and account ID do not match.',
116 }
117 raise_invalid(self.request, **error_details)
118
119 return new
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kinto/plugins/accounts/__init__.py b/kinto/plugins/accounts/__init__.py
--- a/kinto/plugins/accounts/__init__.py
+++ b/kinto/plugins/accounts/__init__.py
@@ -26,3 +26,24 @@
error_msg = ("'basicauth' should not be mentioned before 'account' "
"in 'multiauth.policies' setting.")
raise ConfigurationError(error_msg)
+
+ # We assume anyone in account_create_principals is to create
+ # accounts for other people.
+ # No one can create accounts for other people unless they are an
+ # "admin", defined as someone matching account_write_principals.
+ # Therefore any account that is in account_create_principals
+ # should be in account_write_principals too.
+ creators = set(settings.get('account_create_principals', '').split())
+ admins = set(settings.get('account_write_principals', '').split())
+ cant_create_anything = creators.difference(admins)
+ # system.Everyone isn't an account.
+ cant_create_anything.discard('system.Everyone')
+ if cant_create_anything:
+ message = ('Configuration has some principals in account_create_principals '
+ 'but not in account_write_principals. These principals will only be '
+ 'able to create their own accounts. This may not be what you want.\n'
+ 'If you want these users to be able to create accounts for other users, '
+ 'add them to account_write_principals.\n'
+ 'Affected users: {}'.format(list(cant_create_anything)))
+
+ raise ConfigurationError(message)
diff --git a/kinto/plugins/accounts/views.py b/kinto/plugins/accounts/views.py
--- a/kinto/plugins/accounts/views.py
+++ b/kinto/plugins/accounts/views.py
@@ -27,6 +27,12 @@
raise http_error(httpexceptions.HTTPUnauthorized(), error=error_msg)
+class AccountIdGenerator(NameGenerator):
+ """Allow @ signs in account IDs."""
+
+ regexp = r'^[a-zA-Z0-9][.@a-zA-Z0-9_-]*$'
+
+
class AccountSchema(resource.ResourceSchema):
password = colander.SchemaNode(colander.String())
@@ -55,7 +61,7 @@
@reify
def id_generator(self):
# This generator is used for ID validation.
- return NameGenerator()
+ return AccountIdGenerator()
def get_parent_id(self, request):
# The whole challenge here is that we want to isolate what
@@ -108,6 +114,14 @@
if self.context.is_administrator or self.context.is_anonymous:
return new
+ # Do not let accounts be created without usernames.
+ if self.model.id_field not in new:
+ error_details = {
+ 'name': 'data.id',
+ 'description': 'Accounts must have an ID.',
+ }
+ raise_invalid(self.request, **error_details)
+
# Otherwise, we force the id to match the authenticated username.
if new[self.model.id_field] != self.request.selected_userid:
error_details = {
| {"golden_diff": "diff --git a/kinto/plugins/accounts/__init__.py b/kinto/plugins/accounts/__init__.py\n--- a/kinto/plugins/accounts/__init__.py\n+++ b/kinto/plugins/accounts/__init__.py\n@@ -26,3 +26,24 @@\n error_msg = (\"'basicauth' should not be mentioned before 'account' \"\n \"in 'multiauth.policies' setting.\")\n raise ConfigurationError(error_msg)\n+\n+ # We assume anyone in account_create_principals is to create\n+ # accounts for other people.\n+ # No one can create accounts for other people unless they are an\n+ # \"admin\", defined as someone matching account_write_principals.\n+ # Therefore any account that is in account_create_principals\n+ # should be in account_write_principals too.\n+ creators = set(settings.get('account_create_principals', '').split())\n+ admins = set(settings.get('account_write_principals', '').split())\n+ cant_create_anything = creators.difference(admins)\n+ # system.Everyone isn't an account.\n+ cant_create_anything.discard('system.Everyone')\n+ if cant_create_anything:\n+ message = ('Configuration has some principals in account_create_principals '\n+ 'but not in account_write_principals. These principals will only be '\n+ 'able to create their own accounts. This may not be what you want.\\n'\n+ 'If you want these users to be able to create accounts for other users, '\n+ 'add them to account_write_principals.\\n'\n+ 'Affected users: {}'.format(list(cant_create_anything)))\n+\n+ raise ConfigurationError(message)\ndiff --git a/kinto/plugins/accounts/views.py b/kinto/plugins/accounts/views.py\n--- a/kinto/plugins/accounts/views.py\n+++ b/kinto/plugins/accounts/views.py\n@@ -27,6 +27,12 @@\n raise http_error(httpexceptions.HTTPUnauthorized(), error=error_msg)\n \n \n+class AccountIdGenerator(NameGenerator):\n+ \"\"\"Allow @ signs in account IDs.\"\"\"\n+\n+ regexp = r'^[a-zA-Z0-9][.@a-zA-Z0-9_-]*$'\n+\n+\n class AccountSchema(resource.ResourceSchema):\n password = colander.SchemaNode(colander.String())\n \n@@ -55,7 +61,7 @@\n @reify\n def id_generator(self):\n # This generator is used for ID validation.\n- return NameGenerator()\n+ return AccountIdGenerator()\n \n def get_parent_id(self, request):\n # The whole challenge here is that we want to isolate what\n@@ -108,6 +114,14 @@\n if self.context.is_administrator or self.context.is_anonymous:\n return new\n \n+ # Do not let accounts be created without usernames.\n+ if self.model.id_field not in new:\n+ error_details = {\n+ 'name': 'data.id',\n+ 'description': 'Accounts must have an ID.',\n+ }\n+ raise_invalid(self.request, **error_details)\n+\n # Otherwise, we force the id to match the authenticated username.\n if new[self.model.id_field] != self.request.selected_userid:\n error_details = {\n", "issue": "500 when creating a new account with a POST and forgetting to put the ID\n```\r\n File \"kinto/plugins/accounts/views.py\", line 112, in process_record\r\n if new[self.model.id_field] != self.request.selected_userid:\r\nKeyError: 'id'\r\n```\n", "before_files": [{"content": "from kinto.authorization import PERMISSIONS_INHERITANCE_TREE\nfrom pyramid.exceptions import ConfigurationError\n\n\ndef includeme(config):\n config.add_api_capability(\n 'accounts',\n description='Manage user accounts.',\n url='https://kinto.readthedocs.io/en/latest/api/1.x/accounts.html')\n\n config.scan('kinto.plugins.accounts.views')\n\n PERMISSIONS_INHERITANCE_TREE[''].update({\n 'account:create': {}\n })\n PERMISSIONS_INHERITANCE_TREE['account'] = {\n 'write': {'account': ['write']},\n 'read': {'account': ['write', 'read']}\n 
}\n\n # Add some safety to avoid weird behaviour with basicauth default policy.\n settings = config.get_settings()\n auth_policies = settings['multiauth.policies']\n if 'basicauth' in auth_policies and 'account' in auth_policies:\n if auth_policies.index('basicauth') < auth_policies.index('account'):\n error_msg = (\"'basicauth' should not be mentioned before 'account' \"\n \"in 'multiauth.policies' setting.\")\n raise ConfigurationError(error_msg)\n", "path": "kinto/plugins/accounts/__init__.py"}, {"content": "import bcrypt\nimport colander\nfrom pyramid import httpexceptions\nfrom pyramid.decorator import reify\nfrom pyramid.security import Authenticated, Everyone\nfrom pyramid.settings import aslist\n\nfrom kinto.views import NameGenerator\nfrom kinto.core import resource\nfrom kinto.core.errors import raise_invalid, http_error\n\n\ndef _extract_posted_body_id(request):\n try:\n # Anonymous creation with POST.\n return request.json['data']['id']\n except (ValueError, KeyError):\n # Bad POST data.\n if request.method.lower() == 'post':\n error_details = {\n 'name': 'data.id',\n 'description': 'data.id in body: Required'\n }\n raise_invalid(request, **error_details)\n # Anonymous GET\n error_msg = 'Cannot read accounts.'\n raise http_error(httpexceptions.HTTPUnauthorized(), error=error_msg)\n\n\nclass AccountSchema(resource.ResourceSchema):\n password = colander.SchemaNode(colander.String())\n\n\[email protected]()\nclass Account(resource.ShareableResource):\n\n schema = AccountSchema\n\n def __init__(self, request, context):\n # Store if current user is administrator (before accessing get_parent_id())\n allowed_from_settings = request.registry.settings.get('account_write_principals', [])\n context.is_administrator = len(set(aslist(allowed_from_settings)) &\n set(request.prefixed_principals)) > 0\n # Shortcut to check if current is anonymous (before get_parent_id()).\n context.is_anonymous = Authenticated not in request.effective_principals\n\n super().__init__(request, context)\n\n # Overwrite the current principal set by ShareableResource.\n if self.model.current_principal == Everyone or context.is_administrator:\n # Creation is anonymous, but author with write perm is this:\n # XXX: only works if policy name is account in settings.\n self.model.current_principal = 'account:{}'.format(self.model.parent_id)\n\n @reify\n def id_generator(self):\n # This generator is used for ID validation.\n return NameGenerator()\n\n def get_parent_id(self, request):\n # The whole challenge here is that we want to isolate what\n # authenticated users can list, but give access to everything to\n # administrators.\n # Plus when anonymous create accounts, we have to set their parent id\n # to the same value they would obtain when authenticated.\n if self.context.is_administrator:\n if self.context.on_collection:\n # Accounts created by admin should have userid as parent.\n if request.method.lower() == 'post':\n return _extract_posted_body_id(request)\n else:\n # Admin see all accounts.\n return '*'\n else:\n # No pattern matching for admin on single record.\n return request.matchdict['id']\n\n if not self.context.is_anonymous:\n # Authenticated users see their own account only.\n return request.selected_userid\n\n # Anonymous creation with PUT.\n if 'id' in request.matchdict:\n return request.matchdict['id']\n\n return _extract_posted_body_id(request)\n\n def collection_post(self):\n result = super(Account, self).collection_post()\n if self.context.is_anonymous and self.request.response.status_code == 
200:\n error_details = {\n 'message': 'Account ID %r already exists' % result['data']['id']\n }\n raise http_error(httpexceptions.HTTPForbidden(), **error_details)\n return result\n\n def process_record(self, new, old=None):\n new = super(Account, self).process_record(new, old)\n\n # Store password safely in database as str\n # (bcrypt.hashpw returns base64 bytes).\n pwd_str = new[\"password\"].encode(encoding='utf-8')\n hashed = bcrypt.hashpw(pwd_str, bcrypt.gensalt())\n new[\"password\"] = hashed.decode(encoding='utf-8')\n\n # Administrators can reach other accounts and anonymous have no\n # selected_userid. So do not try to enforce.\n if self.context.is_administrator or self.context.is_anonymous:\n return new\n\n # Otherwise, we force the id to match the authenticated username.\n if new[self.model.id_field] != self.request.selected_userid:\n error_details = {\n 'name': 'data.id',\n 'description': 'Username and account ID do not match.',\n }\n raise_invalid(self.request, **error_details)\n\n return new\n", "path": "kinto/plugins/accounts/views.py"}], "after_files": [{"content": "from kinto.authorization import PERMISSIONS_INHERITANCE_TREE\nfrom pyramid.exceptions import ConfigurationError\n\n\ndef includeme(config):\n config.add_api_capability(\n 'accounts',\n description='Manage user accounts.',\n url='https://kinto.readthedocs.io/en/latest/api/1.x/accounts.html')\n\n config.scan('kinto.plugins.accounts.views')\n\n PERMISSIONS_INHERITANCE_TREE[''].update({\n 'account:create': {}\n })\n PERMISSIONS_INHERITANCE_TREE['account'] = {\n 'write': {'account': ['write']},\n 'read': {'account': ['write', 'read']}\n }\n\n # Add some safety to avoid weird behaviour with basicauth default policy.\n settings = config.get_settings()\n auth_policies = settings['multiauth.policies']\n if 'basicauth' in auth_policies and 'account' in auth_policies:\n if auth_policies.index('basicauth') < auth_policies.index('account'):\n error_msg = (\"'basicauth' should not be mentioned before 'account' \"\n \"in 'multiauth.policies' setting.\")\n raise ConfigurationError(error_msg)\n\n # We assume anyone in account_create_principals is to create\n # accounts for other people.\n # No one can create accounts for other people unless they are an\n # \"admin\", defined as someone matching account_write_principals.\n # Therefore any account that is in account_create_principals\n # should be in account_write_principals too.\n creators = set(settings.get('account_create_principals', '').split())\n admins = set(settings.get('account_write_principals', '').split())\n cant_create_anything = creators.difference(admins)\n # system.Everyone isn't an account.\n cant_create_anything.discard('system.Everyone')\n if cant_create_anything:\n message = ('Configuration has some principals in account_create_principals '\n 'but not in account_write_principals. These principals will only be '\n 'able to create their own accounts. 
This may not be what you want.\\n'\n 'If you want these users to be able to create accounts for other users, '\n 'add them to account_write_principals.\\n'\n 'Affected users: {}'.format(list(cant_create_anything)))\n\n raise ConfigurationError(message)\n", "path": "kinto/plugins/accounts/__init__.py"}, {"content": "import bcrypt\nimport colander\nfrom pyramid import httpexceptions\nfrom pyramid.decorator import reify\nfrom pyramid.security import Authenticated, Everyone\nfrom pyramid.settings import aslist\n\nfrom kinto.views import NameGenerator\nfrom kinto.core import resource\nfrom kinto.core.errors import raise_invalid, http_error\n\n\ndef _extract_posted_body_id(request):\n try:\n # Anonymous creation with POST.\n return request.json['data']['id']\n except (ValueError, KeyError):\n # Bad POST data.\n if request.method.lower() == 'post':\n error_details = {\n 'name': 'data.id',\n 'description': 'data.id in body: Required'\n }\n raise_invalid(request, **error_details)\n # Anonymous GET\n error_msg = 'Cannot read accounts.'\n raise http_error(httpexceptions.HTTPUnauthorized(), error=error_msg)\n\n\nclass AccountIdGenerator(NameGenerator):\n \"\"\"Allow @ signs in account IDs.\"\"\"\n\n regexp = r'^[a-zA-Z0-9][.@a-zA-Z0-9_-]*$'\n\n\nclass AccountSchema(resource.ResourceSchema):\n password = colander.SchemaNode(colander.String())\n\n\[email protected]()\nclass Account(resource.ShareableResource):\n\n schema = AccountSchema\n\n def __init__(self, request, context):\n # Store if current user is administrator (before accessing get_parent_id())\n allowed_from_settings = request.registry.settings.get('account_write_principals', [])\n context.is_administrator = len(set(aslist(allowed_from_settings)) &\n set(request.prefixed_principals)) > 0\n # Shortcut to check if current is anonymous (before get_parent_id()).\n context.is_anonymous = Authenticated not in request.effective_principals\n\n super().__init__(request, context)\n\n # Overwrite the current principal set by ShareableResource.\n if self.model.current_principal == Everyone or context.is_administrator:\n # Creation is anonymous, but author with write perm is this:\n # XXX: only works if policy name is account in settings.\n self.model.current_principal = 'account:{}'.format(self.model.parent_id)\n\n @reify\n def id_generator(self):\n # This generator is used for ID validation.\n return AccountIdGenerator()\n\n def get_parent_id(self, request):\n # The whole challenge here is that we want to isolate what\n # authenticated users can list, but give access to everything to\n # administrators.\n # Plus when anonymous create accounts, we have to set their parent id\n # to the same value they would obtain when authenticated.\n if self.context.is_administrator:\n if self.context.on_collection:\n # Accounts created by admin should have userid as parent.\n if request.method.lower() == 'post':\n return _extract_posted_body_id(request)\n else:\n # Admin see all accounts.\n return '*'\n else:\n # No pattern matching for admin on single record.\n return request.matchdict['id']\n\n if not self.context.is_anonymous:\n # Authenticated users see their own account only.\n return request.selected_userid\n\n # Anonymous creation with PUT.\n if 'id' in request.matchdict:\n return request.matchdict['id']\n\n return _extract_posted_body_id(request)\n\n def collection_post(self):\n result = super(Account, self).collection_post()\n if self.context.is_anonymous and self.request.response.status_code == 200:\n error_details = {\n 'message': 'Account ID %r already 
exists' % result['data']['id']\n }\n raise http_error(httpexceptions.HTTPForbidden(), **error_details)\n return result\n\n def process_record(self, new, old=None):\n new = super(Account, self).process_record(new, old)\n\n # Store password safely in database as str\n # (bcrypt.hashpw returns base64 bytes).\n pwd_str = new[\"password\"].encode(encoding='utf-8')\n hashed = bcrypt.hashpw(pwd_str, bcrypt.gensalt())\n new[\"password\"] = hashed.decode(encoding='utf-8')\n\n # Administrators can reach other accounts and anonymous have no\n # selected_userid. So do not try to enforce.\n if self.context.is_administrator or self.context.is_anonymous:\n return new\n\n # Do not let accounts be created without usernames.\n if self.model.id_field not in new:\n error_details = {\n 'name': 'data.id',\n 'description': 'Accounts must have an ID.',\n }\n raise_invalid(self.request, **error_details)\n\n # Otherwise, we force the id to match the authenticated username.\n if new[self.model.id_field] != self.request.selected_userid:\n error_details = {\n 'name': 'data.id',\n 'description': 'Username and account ID do not match.',\n }\n raise_invalid(self.request, **error_details)\n\n return new\n", "path": "kinto/plugins/accounts/views.py"}]} | 1,834 | 701 |
gh_patches_debug_30666 | rasdani/github-patches | git_diff | streamlit__streamlit-3106 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing "import urllib" in "streamlit hello" mapping/dataframe demo code
The mapping and dataframe demo (`streamlit hello`, select the mapping option on the left-hand side, with "show code" checked) seems to be missing "import urllib" in the code section below the live demo.
The code uses `except urllib.error.URLError as e:` but urllib is never imported; copying and pasting the code into an app does show the import error.
Tested on streamlit 0.78.0, python 3.8.
EDIT 1: make it clearer
EDIT 2: Just realized the same thing happens for the Dataframe demo, edited.
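For anyone hitting this after copy/paste, here is a minimal self-contained sketch of what the displayed snippet needs — just an illustration of the missing import, not the project's actual patch (the helper name and data file are taken from the demo code):

```python
import streamlit as st
import pandas as pd

# The demo's displayed code omits this import, so its `except` clause would hit a
# NameError at runtime instead of handling the connection error.
from urllib.error import URLError


def from_data_file(filename):
    url = ("https://raw.githubusercontent.com/streamlit/"
           "example-data/master/hello/v1/%s" % filename)
    return pd.read_json(url)


try:
    st.write(from_data_file("bike_rental_stats.json"))
except URLError as e:
    st.error("This demo requires internet access. Connection error: %s" % e.reason)
```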
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/streamlit/hello/demos.py`
Content:
```
1 # Copyright 2018-2021 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import urllib.error
16
17
18 def intro():
19 import streamlit as st
20
21 st.sidebar.success("Select a demo above.")
22
23 st.markdown(
24 """
25 Streamlit is an open-source app framework built specifically for
26 Machine Learning and Data Science projects.
27
28 **👈 Select a demo from the dropdown on the left** to see some examples
29 of what Streamlit can do!
30
31 ### Want to learn more?
32
33 - Check out [streamlit.io](https://streamlit.io)
34 - Jump into our [documentation](https://docs.streamlit.io)
35 - Ask a question in our [community
36 forums](https://discuss.streamlit.io)
37
38 ### See more complex demos
39
40 - Use a neural net to [analyze the Udacity Self-driving Car Image
41 Dataset] (https://github.com/streamlit/demo-self-driving)
42 - Explore a [New York City rideshare dataset]
43 (https://github.com/streamlit/demo-uber-nyc-pickups)
44 """
45 )
46
47
48 # Turn off black formatting for this function to present the user with more
49 # compact code.
50 # fmt: off
51 def mapping_demo():
52 import streamlit as st
53 import pandas as pd
54 import pydeck as pdk
55
56 @st.cache
57 def from_data_file(filename):
58 url = (
59 "https://raw.githubusercontent.com/streamlit/"
60 "example-data/master/hello/v1/%s" % filename)
61 return pd.read_json(url)
62
63 try:
64 ALL_LAYERS = {
65 "Bike Rentals": pdk.Layer(
66 "HexagonLayer",
67 data=from_data_file("bike_rental_stats.json"),
68 get_position=["lon", "lat"],
69 radius=200,
70 elevation_scale=4,
71 elevation_range=[0, 1000],
72 extruded=True,
73 ),
74 "Bart Stop Exits": pdk.Layer(
75 "ScatterplotLayer",
76 data=from_data_file("bart_stop_stats.json"),
77 get_position=["lon", "lat"],
78 get_color=[200, 30, 0, 160],
79 get_radius="[exits]",
80 radius_scale=0.05,
81 ),
82 "Bart Stop Names": pdk.Layer(
83 "TextLayer",
84 data=from_data_file("bart_stop_stats.json"),
85 get_position=["lon", "lat"],
86 get_text="name",
87 get_color=[0, 0, 0, 200],
88 get_size=15,
89 get_alignment_baseline="'bottom'",
90 ),
91 "Outbound Flow": pdk.Layer(
92 "ArcLayer",
93 data=from_data_file("bart_path_stats.json"),
94 get_source_position=["lon", "lat"],
95 get_target_position=["lon2", "lat2"],
96 get_source_color=[200, 30, 0, 160],
97 get_target_color=[200, 30, 0, 160],
98 auto_highlight=True,
99 width_scale=0.0001,
100 get_width="outbound",
101 width_min_pixels=3,
102 width_max_pixels=30,
103 ),
104 }
105 st.sidebar.markdown('### Map Layers')
106 selected_layers = [
107 layer for layer_name, layer in ALL_LAYERS.items()
108 if st.sidebar.checkbox(layer_name, True)]
109 if selected_layers:
110 st.pydeck_chart(pdk.Deck(
111 map_style="mapbox://styles/mapbox/light-v9",
112 initial_view_state={"latitude": 37.76,
113 "longitude": -122.4, "zoom": 11, "pitch": 50},
114 layers=selected_layers,
115 ))
116 else:
117 st.error("Please choose at least one layer above.")
118 except urllib.error.URLError as e:
119 st.error("""
120 **This demo requires internet access.**
121
122 Connection error: %s
123 """ % e.reason)
124 # fmt: on
125
126 # Turn off black formatting for this function to present the user with more
127 # compact code.
128 # fmt: off
129
130
131 def fractal_demo():
132 import streamlit as st
133 import numpy as np
134
135 # Interactive Streamlit elements, like these sliders, return their value.
136 # This gives you an extremely simple interaction model.
137 iterations = st.sidebar.slider("Level of detail", 2, 20, 10, 1)
138 separation = st.sidebar.slider("Separation", 0.7, 2.0, 0.7885)
139
140 # Non-interactive elements return a placeholder to their location
141 # in the app. Here we're storing progress_bar to update it later.
142 progress_bar = st.sidebar.progress(0)
143
144 # These two elements will be filled in later, so we create a placeholder
145 # for them using st.empty()
146 frame_text = st.sidebar.empty()
147 image = st.empty()
148
149 m, n, s = 960, 640, 400
150 x = np.linspace(-m / s, m / s, num=m).reshape((1, m))
151 y = np.linspace(-n / s, n / s, num=n).reshape((n, 1))
152
153 for frame_num, a in enumerate(np.linspace(0.0, 4 * np.pi, 100)):
154 # Here were setting value for these two elements.
155 progress_bar.progress(frame_num)
156 frame_text.text("Frame %i/100" % (frame_num + 1))
157
158 # Performing some fractal wizardry.
159 c = separation * np.exp(1j * a)
160 Z = np.tile(x, (n, 1)) + 1j * np.tile(y, (1, m))
161 C = np.full((n, m), c)
162 M = np.full((n, m), True, dtype=bool)
163 N = np.zeros((n, m))
164
165 for i in range(iterations):
166 Z[M] = Z[M] * Z[M] + C[M]
167 M[np.abs(Z) > 2] = False
168 N[M] = i
169
170 # Update the image placeholder by calling the image() function on it.
171 image.image(1.0 - (N / N.max()), use_column_width=True)
172
173 # We clear elements by calling empty on them.
174 progress_bar.empty()
175 frame_text.empty()
176
177 # Streamlit widgets automatically run the script from top to bottom. Since
178 # this button is not connected to any other logic, it just causes a plain
179 # rerun.
180 st.button("Re-run")
181
182
183 # fmt: on
184
185 # Turn off black formatting for this function to present the user with more
186 # compact code.
187 # fmt: off
188 def plotting_demo():
189 import streamlit as st
190 import time
191 import numpy as np
192
193 progress_bar = st.sidebar.progress(0)
194 status_text = st.sidebar.empty()
195 last_rows = np.random.randn(1, 1)
196 chart = st.line_chart(last_rows)
197
198 for i in range(1, 101):
199 new_rows = last_rows[-1, :] + np.random.randn(5, 1).cumsum(axis=0)
200 status_text.text("%i%% Complete" % i)
201 chart.add_rows(new_rows)
202 progress_bar.progress(i)
203 last_rows = new_rows
204 time.sleep(0.05)
205
206 progress_bar.empty()
207
208 # Streamlit widgets automatically run the script from top to bottom. Since
209 # this button is not connected to any other logic, it just causes a plain
210 # rerun.
211 st.button("Re-run")
212
213
214 # fmt: on
215
216 # Turn off black formatting for this function to present the user with more
217 # compact code.
218 # fmt: off
219 def data_frame_demo():
220 import streamlit as st
221 import pandas as pd
222 import altair as alt
223
224 @st.cache
225 def get_UN_data():
226 AWS_BUCKET_URL = "https://streamlit-demo-data.s3-us-west-2.amazonaws.com"
227 df = pd.read_csv(AWS_BUCKET_URL + "/agri.csv.gz")
228 return df.set_index("Region")
229
230 try:
231 df = get_UN_data()
232 countries = st.multiselect(
233 "Choose countries", list(df.index), ["China", "United States of America"]
234 )
235 if not countries:
236 st.error("Please select at least one country.")
237 else:
238 data = df.loc[countries]
239 data /= 1000000.0
240 st.write("### Gross Agricultural Production ($B)", data.sort_index())
241
242 data = data.T.reset_index()
243 data = pd.melt(data, id_vars=["index"]).rename(
244 columns={"index": "year", "value": "Gross Agricultural Product ($B)"}
245 )
246 chart = (
247 alt.Chart(data)
248 .mark_area(opacity=0.3)
249 .encode(
250 x="year:T",
251 y=alt.Y("Gross Agricultural Product ($B):Q", stack=None),
252 color="Region:N",
253 )
254 )
255 st.altair_chart(chart, use_container_width=True)
256 except urllib.error.URLError as e:
257 st.error(
258 """
259 **This demo requires internet access.**
260
261 Connection error: %s
262 """
263 % e.reason
264 )
265
266
267 # fmt: on
268
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/streamlit/hello/demos.py b/lib/streamlit/hello/demos.py
--- a/lib/streamlit/hello/demos.py
+++ b/lib/streamlit/hello/demos.py
@@ -12,8 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-import urllib.error
-
def intro():
import streamlit as st
@@ -53,6 +51,8 @@
import pandas as pd
import pydeck as pdk
+ from urllib.error import URLError
+
@st.cache
def from_data_file(filename):
url = (
@@ -115,7 +115,7 @@
))
else:
st.error("Please choose at least one layer above.")
- except urllib.error.URLError as e:
+ except URLError as e:
st.error("""
**This demo requires internet access.**
@@ -221,6 +221,8 @@
import pandas as pd
import altair as alt
+ from urllib.error import URLError
+
@st.cache
def get_UN_data():
AWS_BUCKET_URL = "https://streamlit-demo-data.s3-us-west-2.amazonaws.com"
@@ -253,7 +255,7 @@
)
)
st.altair_chart(chart, use_container_width=True)
- except urllib.error.URLError as e:
+ except URLError as e:
st.error(
"""
**This demo requires internet access.**
| {"golden_diff": "diff --git a/lib/streamlit/hello/demos.py b/lib/streamlit/hello/demos.py\n--- a/lib/streamlit/hello/demos.py\n+++ b/lib/streamlit/hello/demos.py\n@@ -12,8 +12,6 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-import urllib.error\n-\n \n def intro():\n import streamlit as st\n@@ -53,6 +51,8 @@\n import pandas as pd\n import pydeck as pdk\n \n+ from urllib.error import URLError\n+\n @st.cache\n def from_data_file(filename):\n url = (\n@@ -115,7 +115,7 @@\n ))\n else:\n st.error(\"Please choose at least one layer above.\")\n- except urllib.error.URLError as e:\n+ except URLError as e:\n st.error(\"\"\"\n **This demo requires internet access.**\n \n@@ -221,6 +221,8 @@\n import pandas as pd\n import altair as alt\n \n+ from urllib.error import URLError\n+\n @st.cache\n def get_UN_data():\n AWS_BUCKET_URL = \"https://streamlit-demo-data.s3-us-west-2.amazonaws.com\"\n@@ -253,7 +255,7 @@\n )\n )\n st.altair_chart(chart, use_container_width=True)\n- except urllib.error.URLError as e:\n+ except URLError as e:\n st.error(\n \"\"\"\n **This demo requires internet access.**\n", "issue": "Missing \"import urllib\" in \"streamlit hello\" mapping/dataframe demo code\nThe mapping and dataframe demo (`streamlit hello`, select mapping option on left hand size, have \"show code\" checked) seems to be missing \"import urllib\" in the code section below the live demo;\r\n\r\nThe code uses `except urllib.error.URLError as e:` but urllib is never imported; copying and pasting the code into an app does show the import error.\r\n\r\nTested on streamlit 0.78.0, python 3.8.\r\n\r\nEDIT 1: make it clearer\r\nEDIT 2: Just realized the same thing happens for the Dataframe demo, edited.\n", "before_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport urllib.error\n\n\ndef intro():\n import streamlit as st\n\n st.sidebar.success(\"Select a demo above.\")\n\n st.markdown(\n \"\"\"\n Streamlit is an open-source app framework built specifically for\n Machine Learning and Data Science projects.\n\n **\ud83d\udc48 Select a demo from the dropdown on the left** to see some examples\n of what Streamlit can do!\n\n ### Want to learn more?\n\n - Check out [streamlit.io](https://streamlit.io)\n - Jump into our [documentation](https://docs.streamlit.io)\n - Ask a question in our [community\n forums](https://discuss.streamlit.io)\n\n ### See more complex demos\n\n - Use a neural net to [analyze the Udacity Self-driving Car Image\n Dataset] (https://github.com/streamlit/demo-self-driving)\n - Explore a [New York City rideshare dataset]\n (https://github.com/streamlit/demo-uber-nyc-pickups)\n \"\"\"\n )\n\n\n# Turn off black formatting for this function to present the user with more\n# compact code.\n# fmt: off\ndef mapping_demo():\n import streamlit as st\n import pandas as pd\n import pydeck as pdk\n\n @st.cache\n def from_data_file(filename):\n url = (\n 
\"https://raw.githubusercontent.com/streamlit/\"\n \"example-data/master/hello/v1/%s\" % filename)\n return pd.read_json(url)\n\n try:\n ALL_LAYERS = {\n \"Bike Rentals\": pdk.Layer(\n \"HexagonLayer\",\n data=from_data_file(\"bike_rental_stats.json\"),\n get_position=[\"lon\", \"lat\"],\n radius=200,\n elevation_scale=4,\n elevation_range=[0, 1000],\n extruded=True,\n ),\n \"Bart Stop Exits\": pdk.Layer(\n \"ScatterplotLayer\",\n data=from_data_file(\"bart_stop_stats.json\"),\n get_position=[\"lon\", \"lat\"],\n get_color=[200, 30, 0, 160],\n get_radius=\"[exits]\",\n radius_scale=0.05,\n ),\n \"Bart Stop Names\": pdk.Layer(\n \"TextLayer\",\n data=from_data_file(\"bart_stop_stats.json\"),\n get_position=[\"lon\", \"lat\"],\n get_text=\"name\",\n get_color=[0, 0, 0, 200],\n get_size=15,\n get_alignment_baseline=\"'bottom'\",\n ),\n \"Outbound Flow\": pdk.Layer(\n \"ArcLayer\",\n data=from_data_file(\"bart_path_stats.json\"),\n get_source_position=[\"lon\", \"lat\"],\n get_target_position=[\"lon2\", \"lat2\"],\n get_source_color=[200, 30, 0, 160],\n get_target_color=[200, 30, 0, 160],\n auto_highlight=True,\n width_scale=0.0001,\n get_width=\"outbound\",\n width_min_pixels=3,\n width_max_pixels=30,\n ),\n }\n st.sidebar.markdown('### Map Layers')\n selected_layers = [\n layer for layer_name, layer in ALL_LAYERS.items()\n if st.sidebar.checkbox(layer_name, True)]\n if selected_layers:\n st.pydeck_chart(pdk.Deck(\n map_style=\"mapbox://styles/mapbox/light-v9\",\n initial_view_state={\"latitude\": 37.76,\n \"longitude\": -122.4, \"zoom\": 11, \"pitch\": 50},\n layers=selected_layers,\n ))\n else:\n st.error(\"Please choose at least one layer above.\")\n except urllib.error.URLError as e:\n st.error(\"\"\"\n **This demo requires internet access.**\n\n Connection error: %s\n \"\"\" % e.reason)\n# fmt: on\n\n# Turn off black formatting for this function to present the user with more\n# compact code.\n# fmt: off\n\n\ndef fractal_demo():\n import streamlit as st\n import numpy as np\n\n # Interactive Streamlit elements, like these sliders, return their value.\n # This gives you an extremely simple interaction model.\n iterations = st.sidebar.slider(\"Level of detail\", 2, 20, 10, 1)\n separation = st.sidebar.slider(\"Separation\", 0.7, 2.0, 0.7885)\n\n # Non-interactive elements return a placeholder to their location\n # in the app. 
Here we're storing progress_bar to update it later.\n progress_bar = st.sidebar.progress(0)\n\n # These two elements will be filled in later, so we create a placeholder\n # for them using st.empty()\n frame_text = st.sidebar.empty()\n image = st.empty()\n\n m, n, s = 960, 640, 400\n x = np.linspace(-m / s, m / s, num=m).reshape((1, m))\n y = np.linspace(-n / s, n / s, num=n).reshape((n, 1))\n\n for frame_num, a in enumerate(np.linspace(0.0, 4 * np.pi, 100)):\n # Here were setting value for these two elements.\n progress_bar.progress(frame_num)\n frame_text.text(\"Frame %i/100\" % (frame_num + 1))\n\n # Performing some fractal wizardry.\n c = separation * np.exp(1j * a)\n Z = np.tile(x, (n, 1)) + 1j * np.tile(y, (1, m))\n C = np.full((n, m), c)\n M = np.full((n, m), True, dtype=bool)\n N = np.zeros((n, m))\n\n for i in range(iterations):\n Z[M] = Z[M] * Z[M] + C[M]\n M[np.abs(Z) > 2] = False\n N[M] = i\n\n # Update the image placeholder by calling the image() function on it.\n image.image(1.0 - (N / N.max()), use_column_width=True)\n\n # We clear elements by calling empty on them.\n progress_bar.empty()\n frame_text.empty()\n\n # Streamlit widgets automatically run the script from top to bottom. Since\n # this button is not connected to any other logic, it just causes a plain\n # rerun.\n st.button(\"Re-run\")\n\n\n# fmt: on\n\n# Turn off black formatting for this function to present the user with more\n# compact code.\n# fmt: off\ndef plotting_demo():\n import streamlit as st\n import time\n import numpy as np\n\n progress_bar = st.sidebar.progress(0)\n status_text = st.sidebar.empty()\n last_rows = np.random.randn(1, 1)\n chart = st.line_chart(last_rows)\n\n for i in range(1, 101):\n new_rows = last_rows[-1, :] + np.random.randn(5, 1).cumsum(axis=0)\n status_text.text(\"%i%% Complete\" % i)\n chart.add_rows(new_rows)\n progress_bar.progress(i)\n last_rows = new_rows\n time.sleep(0.05)\n\n progress_bar.empty()\n\n # Streamlit widgets automatically run the script from top to bottom. 
Since\n # this button is not connected to any other logic, it just causes a plain\n # rerun.\n st.button(\"Re-run\")\n\n\n# fmt: on\n\n# Turn off black formatting for this function to present the user with more\n# compact code.\n# fmt: off\ndef data_frame_demo():\n import streamlit as st\n import pandas as pd\n import altair as alt\n\n @st.cache\n def get_UN_data():\n AWS_BUCKET_URL = \"https://streamlit-demo-data.s3-us-west-2.amazonaws.com\"\n df = pd.read_csv(AWS_BUCKET_URL + \"/agri.csv.gz\")\n return df.set_index(\"Region\")\n\n try:\n df = get_UN_data()\n countries = st.multiselect(\n \"Choose countries\", list(df.index), [\"China\", \"United States of America\"]\n )\n if not countries:\n st.error(\"Please select at least one country.\")\n else:\n data = df.loc[countries]\n data /= 1000000.0\n st.write(\"### Gross Agricultural Production ($B)\", data.sort_index())\n\n data = data.T.reset_index()\n data = pd.melt(data, id_vars=[\"index\"]).rename(\n columns={\"index\": \"year\", \"value\": \"Gross Agricultural Product ($B)\"}\n )\n chart = (\n alt.Chart(data)\n .mark_area(opacity=0.3)\n .encode(\n x=\"year:T\",\n y=alt.Y(\"Gross Agricultural Product ($B):Q\", stack=None),\n color=\"Region:N\",\n )\n )\n st.altair_chart(chart, use_container_width=True)\n except urllib.error.URLError as e:\n st.error(\n \"\"\"\n **This demo requires internet access.**\n\n Connection error: %s\n \"\"\"\n % e.reason\n )\n\n\n# fmt: on\n", "path": "lib/streamlit/hello/demos.py"}], "after_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\ndef intro():\n import streamlit as st\n\n st.sidebar.success(\"Select a demo above.\")\n\n st.markdown(\n \"\"\"\n Streamlit is an open-source app framework built specifically for\n Machine Learning and Data Science projects.\n\n **\ud83d\udc48 Select a demo from the dropdown on the left** to see some examples\n of what Streamlit can do!\n\n ### Want to learn more?\n\n - Check out [streamlit.io](https://streamlit.io)\n - Jump into our [documentation](https://docs.streamlit.io)\n - Ask a question in our [community\n forums](https://discuss.streamlit.io)\n\n ### See more complex demos\n\n - Use a neural net to [analyze the Udacity Self-driving Car Image\n Dataset] (https://github.com/streamlit/demo-self-driving)\n - Explore a [New York City rideshare dataset]\n (https://github.com/streamlit/demo-uber-nyc-pickups)\n \"\"\"\n )\n\n\n# Turn off black formatting for this function to present the user with more\n# compact code.\n# fmt: off\ndef mapping_demo():\n import streamlit as st\n import pandas as pd\n import pydeck as pdk\n\n from urllib.error import URLError\n\n @st.cache\n def from_data_file(filename):\n url = (\n \"https://raw.githubusercontent.com/streamlit/\"\n \"example-data/master/hello/v1/%s\" % filename)\n return pd.read_json(url)\n\n try:\n ALL_LAYERS = {\n \"Bike Rentals\": pdk.Layer(\n \"HexagonLayer\",\n data=from_data_file(\"bike_rental_stats.json\"),\n get_position=[\"lon\", \"lat\"],\n radius=200,\n 
elevation_scale=4,\n elevation_range=[0, 1000],\n extruded=True,\n ),\n \"Bart Stop Exits\": pdk.Layer(\n \"ScatterplotLayer\",\n data=from_data_file(\"bart_stop_stats.json\"),\n get_position=[\"lon\", \"lat\"],\n get_color=[200, 30, 0, 160],\n get_radius=\"[exits]\",\n radius_scale=0.05,\n ),\n \"Bart Stop Names\": pdk.Layer(\n \"TextLayer\",\n data=from_data_file(\"bart_stop_stats.json\"),\n get_position=[\"lon\", \"lat\"],\n get_text=\"name\",\n get_color=[0, 0, 0, 200],\n get_size=15,\n get_alignment_baseline=\"'bottom'\",\n ),\n \"Outbound Flow\": pdk.Layer(\n \"ArcLayer\",\n data=from_data_file(\"bart_path_stats.json\"),\n get_source_position=[\"lon\", \"lat\"],\n get_target_position=[\"lon2\", \"lat2\"],\n get_source_color=[200, 30, 0, 160],\n get_target_color=[200, 30, 0, 160],\n auto_highlight=True,\n width_scale=0.0001,\n get_width=\"outbound\",\n width_min_pixels=3,\n width_max_pixels=30,\n ),\n }\n st.sidebar.markdown('### Map Layers')\n selected_layers = [\n layer for layer_name, layer in ALL_LAYERS.items()\n if st.sidebar.checkbox(layer_name, True)]\n if selected_layers:\n st.pydeck_chart(pdk.Deck(\n map_style=\"mapbox://styles/mapbox/light-v9\",\n initial_view_state={\"latitude\": 37.76,\n \"longitude\": -122.4, \"zoom\": 11, \"pitch\": 50},\n layers=selected_layers,\n ))\n else:\n st.error(\"Please choose at least one layer above.\")\n except URLError as e:\n st.error(\"\"\"\n **This demo requires internet access.**\n\n Connection error: %s\n \"\"\" % e.reason)\n# fmt: on\n\n# Turn off black formatting for this function to present the user with more\n# compact code.\n# fmt: off\n\n\ndef fractal_demo():\n import streamlit as st\n import numpy as np\n\n # Interactive Streamlit elements, like these sliders, return their value.\n # This gives you an extremely simple interaction model.\n iterations = st.sidebar.slider(\"Level of detail\", 2, 20, 10, 1)\n separation = st.sidebar.slider(\"Separation\", 0.7, 2.0, 0.7885)\n\n # Non-interactive elements return a placeholder to their location\n # in the app. Here we're storing progress_bar to update it later.\n progress_bar = st.sidebar.progress(0)\n\n # These two elements will be filled in later, so we create a placeholder\n # for them using st.empty()\n frame_text = st.sidebar.empty()\n image = st.empty()\n\n m, n, s = 960, 640, 400\n x = np.linspace(-m / s, m / s, num=m).reshape((1, m))\n y = np.linspace(-n / s, n / s, num=n).reshape((n, 1))\n\n for frame_num, a in enumerate(np.linspace(0.0, 4 * np.pi, 100)):\n # Here were setting value for these two elements.\n progress_bar.progress(frame_num)\n frame_text.text(\"Frame %i/100\" % (frame_num + 1))\n\n # Performing some fractal wizardry.\n c = separation * np.exp(1j * a)\n Z = np.tile(x, (n, 1)) + 1j * np.tile(y, (1, m))\n C = np.full((n, m), c)\n M = np.full((n, m), True, dtype=bool)\n N = np.zeros((n, m))\n\n for i in range(iterations):\n Z[M] = Z[M] * Z[M] + C[M]\n M[np.abs(Z) > 2] = False\n N[M] = i\n\n # Update the image placeholder by calling the image() function on it.\n image.image(1.0 - (N / N.max()), use_column_width=True)\n\n # We clear elements by calling empty on them.\n progress_bar.empty()\n frame_text.empty()\n\n # Streamlit widgets automatically run the script from top to bottom. 
Since\n # this button is not connected to any other logic, it just causes a plain\n # rerun.\n st.button(\"Re-run\")\n\n\n# fmt: on\n\n# Turn off black formatting for this function to present the user with more\n# compact code.\n# fmt: off\ndef plotting_demo():\n import streamlit as st\n import time\n import numpy as np\n\n progress_bar = st.sidebar.progress(0)\n status_text = st.sidebar.empty()\n last_rows = np.random.randn(1, 1)\n chart = st.line_chart(last_rows)\n\n for i in range(1, 101):\n new_rows = last_rows[-1, :] + np.random.randn(5, 1).cumsum(axis=0)\n status_text.text(\"%i%% Complete\" % i)\n chart.add_rows(new_rows)\n progress_bar.progress(i)\n last_rows = new_rows\n time.sleep(0.05)\n\n progress_bar.empty()\n\n # Streamlit widgets automatically run the script from top to bottom. Since\n # this button is not connected to any other logic, it just causes a plain\n # rerun.\n st.button(\"Re-run\")\n\n\n# fmt: on\n\n# Turn off black formatting for this function to present the user with more\n# compact code.\n# fmt: off\ndef data_frame_demo():\n import streamlit as st\n import pandas as pd\n import altair as alt\n\n from urllib.error import URLError\n\n @st.cache\n def get_UN_data():\n AWS_BUCKET_URL = \"https://streamlit-demo-data.s3-us-west-2.amazonaws.com\"\n df = pd.read_csv(AWS_BUCKET_URL + \"/agri.csv.gz\")\n return df.set_index(\"Region\")\n\n try:\n df = get_UN_data()\n countries = st.multiselect(\n \"Choose countries\", list(df.index), [\"China\", \"United States of America\"]\n )\n if not countries:\n st.error(\"Please select at least one country.\")\n else:\n data = df.loc[countries]\n data /= 1000000.0\n st.write(\"### Gross Agricultural Production ($B)\", data.sort_index())\n\n data = data.T.reset_index()\n data = pd.melt(data, id_vars=[\"index\"]).rename(\n columns={\"index\": \"year\", \"value\": \"Gross Agricultural Product ($B)\"}\n )\n chart = (\n alt.Chart(data)\n .mark_area(opacity=0.3)\n .encode(\n x=\"year:T\",\n y=alt.Y(\"Gross Agricultural Product ($B):Q\", stack=None),\n color=\"Region:N\",\n )\n )\n st.altair_chart(chart, use_container_width=True)\n except URLError as e:\n st.error(\n \"\"\"\n **This demo requires internet access.**\n\n Connection error: %s\n \"\"\"\n % e.reason\n )\n\n\n# fmt: on\n", "path": "lib/streamlit/hello/demos.py"}]} | 3,266 | 338 |
gh_patches_debug_26991 | rasdani/github-patches | git_diff | fedora-infra__bodhi-1434 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bodhi-push doesn't load the config file before starting the database connection
```bodhi-push``` doesn't load the config file before starting the database connection, which causes it to crash:
```
[bowlofeggs@bodhi-backend01 ~][PROD]$ sudo -u apache bodhi-push --releases 'f26,f25,f24,epel-7,EL-6' --username bowlofeggs
Traceback (most recent call last):
File "/bin/bodhi-push", line 9, in <module>
load_entry_point('bodhi-server==2.5.0', 'console_scripts', 'bodhi-push')()
File "/usr/lib/python2.7/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/lib/python2.7/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/lib/python2.7/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/bodhi/server/push.py", line 62, in push
with get_db_factory()() as session:
File "/usr/lib/python2.7/site-packages/bodhi/server/models.py", line 2438, in get_db_factory
engine = engine_from_config(config, 'sqlalchemy.')
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/__init__.py", line 426, in engine_from_config
url = options.pop('url')
KeyError: 'url'
```
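My reading (could be wrong) is that `BodhiConfig` only loads the .ini lazily on `__getitem__`/`get`, so by the time `engine_from_config` iterates the settings and `pop`s `url`, the dict is still empty. A rough caller-side workaround sketch — not a proper fix, just what the traceback suggests:

```python
from sqlalchemy import engine_from_config

from bodhi.server.config import config

# Touch any key first so BodhiConfig.load_config() runs and the settings are populated.
config.get('sqlalchemy.url')

# Hand SQLAlchemy a plain dict copy so its internal pop() calls see the loaded keys.
engine = engine_from_config(dict(config), 'sqlalchemy.')
```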
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bodhi/server/__init__.py`
Content:
```
1 # This program is free software; you can redistribute it and/or
2 # modify it under the terms of the GNU General Public License
3 # as published by the Free Software Foundation; either version 2
4 # of the License, or (at your option) any later version.
5 #
6 # This program is distributed in the hope that it will be useful,
7 # but WITHOUT ANY WARRANTY; without even the implied warranty of
8 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
9 # GNU General Public License for more details.
10 #
11 # You should have received a copy of the GNU General Public License
12 # along with this program; if not, write to the Free Software
13 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
14
15 from collections import defaultdict
16 import logging
17
18 from cornice.validators import DEFAULT_FILTERS
19 from dogpile.cache import make_region
20 from munch import munchify
21 from pyramid.authentication import AuthTktAuthenticationPolicy
22 from pyramid.authorization import ACLAuthorizationPolicy
23 from pyramid.config import Configurator
24 from pyramid.exceptions import HTTPForbidden
25 from pyramid.renderers import JSONP
26 from pyramid.security import unauthenticated_userid
27 from pyramid.settings import asbool
28 from sqlalchemy import engine_from_config, event
29 from sqlalchemy.orm import scoped_session, sessionmaker
30
31 from bodhi.server import bugs, buildsys, ffmarkdown
32
33
34 log = logging.getLogger(__name__)
35
36
37 # TODO -- someday move this externally to "fedora_flavored_markdown"
38 ffmarkdown.inject()
39
40
41 #
42 # Request methods
43 #
44
45 def get_db_session_for_request(request=None):
46 """
47 This function returns a database session that is meant to be used for the given request.
48
49 It handles rolling back or committing the session based on whether an exception occurred or
50 not. To get a database session that's not tied to the request/response cycle, just use the
51 :data:`Session` scoped session in this module.
52
53 Args:
54 request (pyramid.request): The request object to create a session for.
55
56 Returns:
57 sqlalchemy.orm.session.Session: A database session.
58 """
59 session = request.registry.sessionmaker()
60
61 def cleanup(request):
62 """A post-request hook that commits the database changes if no exceptions occurred."""
63 if request.exception is not None:
64 session.rollback()
65 else:
66 session.commit()
67 session.close()
68
69 request.add_finished_callback(cleanup)
70
71 return session
72
73
74 def get_cacheregion(request):
75 region = make_region()
76 region.configure_from_config(request.registry.settings, "dogpile.cache.")
77 return region
78
79
80 def get_user(request):
81 from bodhi.server.models import User
82 userid = unauthenticated_userid(request)
83 if userid is not None:
84 user = request.db.query(User).filter_by(name=unicode(userid)).first()
85 # Why munch? https://github.com/fedora-infra/bodhi/issues/473
86 return munchify(user.__json__(request=request))
87
88
89 def groupfinder(userid, request):
90 from bodhi.server.models import User
91 if request.user:
92 user = User.get(request.user.name, request.db)
93 return ['group:' + group.name for group in user.groups]
94
95
96 def get_koji(request):
97 return buildsys.get_session()
98
99
100 def get_buildinfo(request):
101 """
102 A per-request cache populated by the validators and shared with the views
103 to store frequently used package-specific data, like build tags and ACLs.
104 """
105 return defaultdict(dict)
106
107
108 def get_releases(request):
109 from bodhi.server.models import Release
110 return Release.all_releases(request.db)
111
112
113 #
114 # Cornice filters
115 #
116
117 def exception_filter(response, request):
118 """Log exceptions that get thrown up to cornice"""
119 if isinstance(response, Exception):
120 log.exception('Unhandled exception raised: %r' % response)
121 return response
122
123 DEFAULT_FILTERS.insert(0, exception_filter)
124
125
126 #
127 # Bodhi initialization
128 #
129
130 #: An SQLAlchemy scoped session with an engine configured using the settings in Bodhi's server
131 #: configuration file. Note that you *must* call :func:`initialize_db` before you can use this.
132 Session = scoped_session(sessionmaker())
133
134
135 def initialize_db(config):
136 """
137 Initialize the database using the given configuration.
138
139 This *must* be called before you can use the :data:`Session` object.
140
141 Args:
142 config (dict): The Bodhi server configuration dictionary.
143
144 Returns:
145 sqlalchemy.engine: The database engine created from the configuration.
146 """
147 #: The SQLAlchemy database engine. This is constructed using the value of
148 #: ``DB_URL`` in :data:`config``.
149 engine = engine_from_config(config, 'sqlalchemy.')
150 # When using SQLite we need to make sure foreign keys are enabled:
151 # http://docs.sqlalchemy.org/en/latest/dialects/sqlite.html#foreign-key-support
152 if config['sqlalchemy.url'].startswith('sqlite:'):
153 event.listen(
154 engine,
155 'connect',
156 lambda db_con, con_record: db_con.execute('PRAGMA foreign_keys=ON')
157 )
158 Session.configure(bind=engine)
159 return engine
160
161
162 def main(global_config, testing=None, session=None, **settings):
163 """ This function returns a WSGI application """
164 # Setup our bugtracker and buildsystem
165 bugs.set_bugtracker()
166 buildsys.setup_buildsystem(settings)
167
168 # Sessions & Caching
169 from pyramid.session import SignedCookieSessionFactory
170 session_factory = SignedCookieSessionFactory(settings['session.secret'])
171
172 # Construct a list of all groups we're interested in
173 default = ' '.join([settings.get(key, '') for key in [
174 'important_groups',
175 'admin_packager_groups',
176 'mandatory_packager_groups',
177 'admin_groups',
178 ]])
179 # pyramid_fas_openid looks for this setting
180 settings['openid.groups'] = settings.get('openid.groups', default).split()
181
182 config = Configurator(settings=settings, session_factory=session_factory)
183
184 # Plugins
185 config.include('pyramid_mako')
186 config.include('cornice')
187
188 # Initialize the database scoped session
189 initialize_db(settings)
190 config.registry.sessionmaker = Session.session_factory
191
192 # Lazy-loaded memoized request properties
193 if session:
194 config.add_request_method(lambda _: session, 'db', reify=True)
195 else:
196 config.add_request_method(get_db_session_for_request, 'db', reify=True)
197
198 config.add_request_method(get_user, 'user', reify=True)
199 config.add_request_method(get_koji, 'koji', reify=True)
200 config.add_request_method(get_cacheregion, 'cache', reify=True)
201 config.add_request_method(get_buildinfo, 'buildinfo', reify=True)
202 config.add_request_method(get_releases, 'releases', reify=True)
203
204 # Templating
205 config.add_mako_renderer('.html', settings_prefix='mako.')
206 config.add_static_view('static', 'bodhi:server/static')
207
208 from bodhi.server.renderers import rss, jpeg
209 config.add_renderer('rss', rss)
210 config.add_renderer('jpeg', jpeg)
211 config.add_renderer('jsonp', JSONP(param_name='callback'))
212
213 # i18n
214 config.add_translation_dirs('bodhi:server/locale/')
215
216 # Authentication & Authorization
217 if testing:
218 # use a permissive security policy while running unit tests
219 config.testing_securitypolicy(userid=testing, permissive=True)
220 else:
221 config.set_authentication_policy(AuthTktAuthenticationPolicy(
222 settings['authtkt.secret'], callback=groupfinder,
223 secure=asbool(settings['authtkt.secure']), hashalg='sha512'))
224 config.set_authorization_policy(ACLAuthorizationPolicy())
225
226 # Frontpage
227 config.add_route('home', '/')
228
229 # Views for creating new objects
230 config.add_route('new_update', '/updates/new')
231 config.add_route('new_override', '/overrides/new')
232 config.add_route('new_stack', '/stacks/new')
233
234 # Metrics
235 config.add_route('metrics', '/metrics')
236 config.add_route('masher_status', '/masher/')
237
238 # Auto-completion search
239 config.add_route('search_packages', '/search/packages')
240 config.add_route('latest_candidates', '/latest_candidates')
241 config.add_route('latest_builds', '/latest_builds')
242
243 config.add_route('captcha_image', '/captcha/{cipherkey}/')
244
245 # pyramid.openid
246 config.add_route('login', '/login')
247 config.add_view('bodhi.server.security.login', route_name='login')
248 config.add_view('bodhi.server.security.login', context=HTTPForbidden)
249 config.add_route('logout', '/logout')
250 config.add_view('bodhi.server.security.logout', route_name='logout')
251 config.add_route('verify_openid', pattern='/dologin.html')
252 config.add_view('pyramid_fas_openid.verify_openid', route_name='verify_openid')
253
254 config.add_route('api_version', '/api_version')
255
256 # The only user preference we have.
257 config.add_route('popup_toggle', '/popup_toggle')
258
259 config.scan('bodhi.server.views')
260 config.scan('bodhi.server.services')
261 config.scan('bodhi.server.captcha')
262
263 return config.make_wsgi_app()
264
```
Path: `bodhi/server/config.py`
Content:
```
1 # This program is free software; you can redistribute it and/or
2 # modify it under the terms of the GNU General Public License
3 # as published by the Free Software Foundation; either version 2
4 # of the License, or (at your option) any later version.
5 #
6 # This program is distributed in the hope that it will be useful,
7 # but WITHOUT ANY WARRANTY; without even the implied warranty of
8 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
9 # GNU General Public License for more details.
10 #
11 # You should have received a copy of the GNU General Public License
12 # along with this program; if not, write to the Free Software
13 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
14
15 import os
16 import logging
17
18 from pyramid.paster import get_appsettings
19
20 log = logging.getLogger('bodhi')
21
22
23 def get_configfile():
24 configfile = None
25 setupdir = os.path.join(os.path.dirname(os.path.dirname(__file__)), '..')
26 if configfile:
27 if not os.path.exists(configfile):
28 log.error("Cannot find config: %s" % configfile)
29 return
30 else:
31 for cfg in (os.path.join(setupdir, 'development.ini'),
32 '/etc/bodhi/production.ini'):
33 if os.path.exists(cfg):
34 configfile = cfg
35 break
36 else:
37 log.error("Unable to find configuration to load!")
38 return configfile
39
40
41 class BodhiConfig(dict):
42 loaded = False
43
44 def __getitem__(self, *args, **kw):
45 if not self.loaded:
46 self.load_config()
47 return super(BodhiConfig, self).__getitem__(*args, **kw)
48
49 def get(self, *args, **kw):
50 if not self.loaded:
51 self.load_config()
52 return super(BodhiConfig, self).get(*args, **kw)
53
54 def load_config(self):
55 configfile = get_configfile()
56 self.update(get_appsettings(configfile))
57 self.loaded = True
58
59
60 config = BodhiConfig()
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bodhi/server/__init__.py b/bodhi/server/__init__.py
--- a/bodhi/server/__init__.py
+++ b/bodhi/server/__init__.py
@@ -144,9 +144,10 @@
Returns:
sqlalchemy.engine: The database engine created from the configuration.
"""
- #: The SQLAlchemy database engine. This is constructed using the value of
- #: ``DB_URL`` in :data:`config``.
- engine = engine_from_config(config, 'sqlalchemy.')
+ # The SQLAlchemy database engine. This is constructed using the value of
+ # ``DB_URL`` in :data:`config``. Note: A copy is provided since ``engine_from_config``
+ # uses ``pop``.
+ engine = engine_from_config(config.copy(), 'sqlalchemy.')
# When using SQLite we need to make sure foreign keys are enabled:
# http://docs.sqlalchemy.org/en/latest/dialects/sqlite.html#foreign-key-support
if config['sqlalchemy.url'].startswith('sqlite:'):
diff --git a/bodhi/server/config.py b/bodhi/server/config.py
--- a/bodhi/server/config.py
+++ b/bodhi/server/config.py
@@ -51,6 +51,16 @@
self.load_config()
return super(BodhiConfig, self).get(*args, **kw)
+ def pop(self, *args, **kw):
+ if not self.loaded:
+ self.load_config()
+ return super(BodhiConfig, self).pop(*args, **kw)
+
+ def copy(self, *args, **kw):
+ if not self.loaded:
+ self.load_config()
+ return super(BodhiConfig, self).copy(*args, **kw)
+
def load_config(self):
configfile = get_configfile()
self.update(get_appsettings(configfile))
| {"golden_diff": "diff --git a/bodhi/server/__init__.py b/bodhi/server/__init__.py\n--- a/bodhi/server/__init__.py\n+++ b/bodhi/server/__init__.py\n@@ -144,9 +144,10 @@\n Returns:\n sqlalchemy.engine: The database engine created from the configuration.\n \"\"\"\n- #: The SQLAlchemy database engine. This is constructed using the value of\n- #: ``DB_URL`` in :data:`config``.\n- engine = engine_from_config(config, 'sqlalchemy.')\n+ # The SQLAlchemy database engine. This is constructed using the value of\n+ # ``DB_URL`` in :data:`config``. Note: A copy is provided since ``engine_from_config``\n+ # uses ``pop``.\n+ engine = engine_from_config(config.copy(), 'sqlalchemy.')\n # When using SQLite we need to make sure foreign keys are enabled:\n # http://docs.sqlalchemy.org/en/latest/dialects/sqlite.html#foreign-key-support\n if config['sqlalchemy.url'].startswith('sqlite:'):\ndiff --git a/bodhi/server/config.py b/bodhi/server/config.py\n--- a/bodhi/server/config.py\n+++ b/bodhi/server/config.py\n@@ -51,6 +51,16 @@\n self.load_config()\n return super(BodhiConfig, self).get(*args, **kw)\n \n+ def pop(self, *args, **kw):\n+ if not self.loaded:\n+ self.load_config()\n+ return super(BodhiConfig, self).pop(*args, **kw)\n+\n+ def copy(self, *args, **kw):\n+ if not self.loaded:\n+ self.load_config()\n+ return super(BodhiConfig, self).copy(*args, **kw)\n+\n def load_config(self):\n configfile = get_configfile()\n self.update(get_appsettings(configfile))\n", "issue": "bodhi-push doesn't load the config file before starting the database connection\n```bodhi-push``` doesn't load the config file before starting the database connection, which causes it to crash:\r\n\r\n```\r\n[bowlofeggs@bodhi-backend01 ~][PROD]$ sudo -u apache bodhi-push --releases 'f26,f25,f24,epel-7,EL-6' --username bowlofeggs\r\nTraceback (most recent call last):\r\n File \"/bin/bodhi-push\", line 9, in <module>\r\n load_entry_point('bodhi-server==2.5.0', 'console_scripts', 'bodhi-push')()\r\n File \"/usr/lib/python2.7/site-packages/click/core.py\", line 722, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/lib/python2.7/site-packages/click/core.py\", line 697, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/lib/python2.7/site-packages/click/core.py\", line 895, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/lib/python2.7/site-packages/click/core.py\", line 535, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/usr/lib/python2.7/site-packages/bodhi/server/push.py\", line 62, in push\r\n with get_db_factory()() as session:\r\n File \"/usr/lib/python2.7/site-packages/bodhi/server/models.py\", line 2438, in get_db_factory\r\n engine = engine_from_config(config, 'sqlalchemy.')\r\n File \"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/__init__.py\", line 426, in engine_from_config\r\n url = options.pop('url')\r\nKeyError: 'url'\r\n```\n", "before_files": [{"content": "# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\nfrom collections import defaultdict\nimport logging\n\nfrom cornice.validators import DEFAULT_FILTERS\nfrom dogpile.cache import make_region\nfrom munch import munchify\nfrom pyramid.authentication import AuthTktAuthenticationPolicy\nfrom pyramid.authorization import ACLAuthorizationPolicy\nfrom pyramid.config import Configurator\nfrom pyramid.exceptions import HTTPForbidden\nfrom pyramid.renderers import JSONP\nfrom pyramid.security import unauthenticated_userid\nfrom pyramid.settings import asbool\nfrom sqlalchemy import engine_from_config, event\nfrom sqlalchemy.orm import scoped_session, sessionmaker\n\nfrom bodhi.server import bugs, buildsys, ffmarkdown\n\n\nlog = logging.getLogger(__name__)\n\n\n# TODO -- someday move this externally to \"fedora_flavored_markdown\"\nffmarkdown.inject()\n\n\n#\n# Request methods\n#\n\ndef get_db_session_for_request(request=None):\n \"\"\"\n This function returns a database session that is meant to be used for the given request.\n\n It handles rolling back or committing the session based on whether an exception occurred or\n not. To get a database session that's not tied to the request/response cycle, just use the\n :data:`Session` scoped session in this module.\n\n Args:\n request (pyramid.request): The request object to create a session for.\n\n Returns:\n sqlalchemy.orm.session.Session: A database session.\n \"\"\"\n session = request.registry.sessionmaker()\n\n def cleanup(request):\n \"\"\"A post-request hook that commits the database changes if no exceptions occurred.\"\"\"\n if request.exception is not None:\n session.rollback()\n else:\n session.commit()\n session.close()\n\n request.add_finished_callback(cleanup)\n\n return session\n\n\ndef get_cacheregion(request):\n region = make_region()\n region.configure_from_config(request.registry.settings, \"dogpile.cache.\")\n return region\n\n\ndef get_user(request):\n from bodhi.server.models import User\n userid = unauthenticated_userid(request)\n if userid is not None:\n user = request.db.query(User).filter_by(name=unicode(userid)).first()\n # Why munch? https://github.com/fedora-infra/bodhi/issues/473\n return munchify(user.__json__(request=request))\n\n\ndef groupfinder(userid, request):\n from bodhi.server.models import User\n if request.user:\n user = User.get(request.user.name, request.db)\n return ['group:' + group.name for group in user.groups]\n\n\ndef get_koji(request):\n return buildsys.get_session()\n\n\ndef get_buildinfo(request):\n \"\"\"\n A per-request cache populated by the validators and shared with the views\n to store frequently used package-specific data, like build tags and ACLs.\n \"\"\"\n return defaultdict(dict)\n\n\ndef get_releases(request):\n from bodhi.server.models import Release\n return Release.all_releases(request.db)\n\n\n#\n# Cornice filters\n#\n\ndef exception_filter(response, request):\n \"\"\"Log exceptions that get thrown up to cornice\"\"\"\n if isinstance(response, Exception):\n log.exception('Unhandled exception raised: %r' % response)\n return response\n\nDEFAULT_FILTERS.insert(0, exception_filter)\n\n\n#\n# Bodhi initialization\n#\n\n#: An SQLAlchemy scoped session with an engine configured using the settings in Bodhi's server\n#: configuration file. 
Note that you *must* call :func:`initialize_db` before you can use this.\nSession = scoped_session(sessionmaker())\n\n\ndef initialize_db(config):\n \"\"\"\n Initialize the database using the given configuration.\n\n This *must* be called before you can use the :data:`Session` object.\n\n Args:\n config (dict): The Bodhi server configuration dictionary.\n\n Returns:\n sqlalchemy.engine: The database engine created from the configuration.\n \"\"\"\n #: The SQLAlchemy database engine. This is constructed using the value of\n #: ``DB_URL`` in :data:`config``.\n engine = engine_from_config(config, 'sqlalchemy.')\n # When using SQLite we need to make sure foreign keys are enabled:\n # http://docs.sqlalchemy.org/en/latest/dialects/sqlite.html#foreign-key-support\n if config['sqlalchemy.url'].startswith('sqlite:'):\n event.listen(\n engine,\n 'connect',\n lambda db_con, con_record: db_con.execute('PRAGMA foreign_keys=ON')\n )\n Session.configure(bind=engine)\n return engine\n\n\ndef main(global_config, testing=None, session=None, **settings):\n \"\"\" This function returns a WSGI application \"\"\"\n # Setup our bugtracker and buildsystem\n bugs.set_bugtracker()\n buildsys.setup_buildsystem(settings)\n\n # Sessions & Caching\n from pyramid.session import SignedCookieSessionFactory\n session_factory = SignedCookieSessionFactory(settings['session.secret'])\n\n # Construct a list of all groups we're interested in\n default = ' '.join([settings.get(key, '') for key in [\n 'important_groups',\n 'admin_packager_groups',\n 'mandatory_packager_groups',\n 'admin_groups',\n ]])\n # pyramid_fas_openid looks for this setting\n settings['openid.groups'] = settings.get('openid.groups', default).split()\n\n config = Configurator(settings=settings, session_factory=session_factory)\n\n # Plugins\n config.include('pyramid_mako')\n config.include('cornice')\n\n # Initialize the database scoped session\n initialize_db(settings)\n config.registry.sessionmaker = Session.session_factory\n\n # Lazy-loaded memoized request properties\n if session:\n config.add_request_method(lambda _: session, 'db', reify=True)\n else:\n config.add_request_method(get_db_session_for_request, 'db', reify=True)\n\n config.add_request_method(get_user, 'user', reify=True)\n config.add_request_method(get_koji, 'koji', reify=True)\n config.add_request_method(get_cacheregion, 'cache', reify=True)\n config.add_request_method(get_buildinfo, 'buildinfo', reify=True)\n config.add_request_method(get_releases, 'releases', reify=True)\n\n # Templating\n config.add_mako_renderer('.html', settings_prefix='mako.')\n config.add_static_view('static', 'bodhi:server/static')\n\n from bodhi.server.renderers import rss, jpeg\n config.add_renderer('rss', rss)\n config.add_renderer('jpeg', jpeg)\n config.add_renderer('jsonp', JSONP(param_name='callback'))\n\n # i18n\n config.add_translation_dirs('bodhi:server/locale/')\n\n # Authentication & Authorization\n if testing:\n # use a permissive security policy while running unit tests\n config.testing_securitypolicy(userid=testing, permissive=True)\n else:\n config.set_authentication_policy(AuthTktAuthenticationPolicy(\n settings['authtkt.secret'], callback=groupfinder,\n secure=asbool(settings['authtkt.secure']), hashalg='sha512'))\n config.set_authorization_policy(ACLAuthorizationPolicy())\n\n # Frontpage\n config.add_route('home', '/')\n\n # Views for creating new objects\n config.add_route('new_update', '/updates/new')\n config.add_route('new_override', '/overrides/new')\n config.add_route('new_stack', 
'/stacks/new')\n\n # Metrics\n config.add_route('metrics', '/metrics')\n config.add_route('masher_status', '/masher/')\n\n # Auto-completion search\n config.add_route('search_packages', '/search/packages')\n config.add_route('latest_candidates', '/latest_candidates')\n config.add_route('latest_builds', '/latest_builds')\n\n config.add_route('captcha_image', '/captcha/{cipherkey}/')\n\n # pyramid.openid\n config.add_route('login', '/login')\n config.add_view('bodhi.server.security.login', route_name='login')\n config.add_view('bodhi.server.security.login', context=HTTPForbidden)\n config.add_route('logout', '/logout')\n config.add_view('bodhi.server.security.logout', route_name='logout')\n config.add_route('verify_openid', pattern='/dologin.html')\n config.add_view('pyramid_fas_openid.verify_openid', route_name='verify_openid')\n\n config.add_route('api_version', '/api_version')\n\n # The only user preference we have.\n config.add_route('popup_toggle', '/popup_toggle')\n\n config.scan('bodhi.server.views')\n config.scan('bodhi.server.services')\n config.scan('bodhi.server.captcha')\n\n return config.make_wsgi_app()\n", "path": "bodhi/server/__init__.py"}, {"content": "# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\nimport os\nimport logging\n\nfrom pyramid.paster import get_appsettings\n\nlog = logging.getLogger('bodhi')\n\n\ndef get_configfile():\n configfile = None\n setupdir = os.path.join(os.path.dirname(os.path.dirname(__file__)), '..')\n if configfile:\n if not os.path.exists(configfile):\n log.error(\"Cannot find config: %s\" % configfile)\n return\n else:\n for cfg in (os.path.join(setupdir, 'development.ini'),\n '/etc/bodhi/production.ini'):\n if os.path.exists(cfg):\n configfile = cfg\n break\n else:\n log.error(\"Unable to find configuration to load!\")\n return configfile\n\n\nclass BodhiConfig(dict):\n loaded = False\n\n def __getitem__(self, *args, **kw):\n if not self.loaded:\n self.load_config()\n return super(BodhiConfig, self).__getitem__(*args, **kw)\n\n def get(self, *args, **kw):\n if not self.loaded:\n self.load_config()\n return super(BodhiConfig, self).get(*args, **kw)\n\n def load_config(self):\n configfile = get_configfile()\n self.update(get_appsettings(configfile))\n self.loaded = True\n\n\nconfig = BodhiConfig()\n", "path": "bodhi/server/config.py"}], "after_files": [{"content": "# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\nfrom collections import defaultdict\nimport logging\n\nfrom cornice.validators import DEFAULT_FILTERS\nfrom dogpile.cache import make_region\nfrom munch import munchify\nfrom pyramid.authentication import AuthTktAuthenticationPolicy\nfrom pyramid.authorization import ACLAuthorizationPolicy\nfrom pyramid.config import Configurator\nfrom pyramid.exceptions import HTTPForbidden\nfrom pyramid.renderers import JSONP\nfrom pyramid.security import unauthenticated_userid\nfrom pyramid.settings import asbool\nfrom sqlalchemy import engine_from_config, event\nfrom sqlalchemy.orm import scoped_session, sessionmaker\n\nfrom bodhi.server import bugs, buildsys, ffmarkdown\n\n\nlog = logging.getLogger(__name__)\n\n\n# TODO -- someday move this externally to \"fedora_flavored_markdown\"\nffmarkdown.inject()\n\n\n#\n# Request methods\n#\n\ndef get_db_session_for_request(request=None):\n \"\"\"\n This function returns a database session that is meant to be used for the given request.\n\n It handles rolling back or committing the session based on whether an exception occurred or\n not. To get a database session that's not tied to the request/response cycle, just use the\n :data:`Session` scoped session in this module.\n\n Args:\n request (pyramid.request): The request object to create a session for.\n\n Returns:\n sqlalchemy.orm.session.Session: A database session.\n \"\"\"\n session = request.registry.sessionmaker()\n\n def cleanup(request):\n \"\"\"A post-request hook that commits the database changes if no exceptions occurred.\"\"\"\n if request.exception is not None:\n session.rollback()\n else:\n session.commit()\n session.close()\n\n request.add_finished_callback(cleanup)\n\n return session\n\n\ndef get_cacheregion(request):\n region = make_region()\n region.configure_from_config(request.registry.settings, \"dogpile.cache.\")\n return region\n\n\ndef get_user(request):\n from bodhi.server.models import User\n userid = unauthenticated_userid(request)\n if userid is not None:\n user = request.db.query(User).filter_by(name=unicode(userid)).first()\n # Why munch? https://github.com/fedora-infra/bodhi/issues/473\n return munchify(user.__json__(request=request))\n\n\ndef groupfinder(userid, request):\n from bodhi.server.models import User\n if request.user:\n user = User.get(request.user.name, request.db)\n return ['group:' + group.name for group in user.groups]\n\n\ndef get_koji(request):\n return buildsys.get_session()\n\n\ndef get_buildinfo(request):\n \"\"\"\n A per-request cache populated by the validators and shared with the views\n to store frequently used package-specific data, like build tags and ACLs.\n \"\"\"\n return defaultdict(dict)\n\n\ndef get_releases(request):\n from bodhi.server.models import Release\n return Release.all_releases(request.db)\n\n\n#\n# Cornice filters\n#\n\ndef exception_filter(response, request):\n \"\"\"Log exceptions that get thrown up to cornice\"\"\"\n if isinstance(response, Exception):\n log.exception('Unhandled exception raised: %r' % response)\n return response\n\nDEFAULT_FILTERS.insert(0, exception_filter)\n\n\n#\n# Bodhi initialization\n#\n\n#: An SQLAlchemy scoped session with an engine configured using the settings in Bodhi's server\n#: configuration file. 
Note that you *must* call :func:`initialize_db` before you can use this.\nSession = scoped_session(sessionmaker())\n\n\ndef initialize_db(config):\n \"\"\"\n Initialize the database using the given configuration.\n\n This *must* be called before you can use the :data:`Session` object.\n\n Args:\n config (dict): The Bodhi server configuration dictionary.\n\n Returns:\n sqlalchemy.engine: The database engine created from the configuration.\n \"\"\"\n # The SQLAlchemy database engine. This is constructed using the value of\n # ``DB_URL`` in :data:`config``. Note: A copy is provided since ``engine_from_config``\n # uses ``pop``.\n engine = engine_from_config(config.copy(), 'sqlalchemy.')\n # When using SQLite we need to make sure foreign keys are enabled:\n # http://docs.sqlalchemy.org/en/latest/dialects/sqlite.html#foreign-key-support\n if config['sqlalchemy.url'].startswith('sqlite:'):\n event.listen(\n engine,\n 'connect',\n lambda db_con, con_record: db_con.execute('PRAGMA foreign_keys=ON')\n )\n Session.configure(bind=engine)\n return engine\n\n\ndef main(global_config, testing=None, session=None, **settings):\n \"\"\" This function returns a WSGI application \"\"\"\n # Setup our bugtracker and buildsystem\n bugs.set_bugtracker()\n buildsys.setup_buildsystem(settings)\n\n # Sessions & Caching\n from pyramid.session import SignedCookieSessionFactory\n session_factory = SignedCookieSessionFactory(settings['session.secret'])\n\n # Construct a list of all groups we're interested in\n default = ' '.join([settings.get(key, '') for key in [\n 'important_groups',\n 'admin_packager_groups',\n 'mandatory_packager_groups',\n 'admin_groups',\n ]])\n # pyramid_fas_openid looks for this setting\n settings['openid.groups'] = settings.get('openid.groups', default).split()\n\n config = Configurator(settings=settings, session_factory=session_factory)\n\n # Plugins\n config.include('pyramid_mako')\n config.include('cornice')\n\n # Initialize the database scoped session\n initialize_db(settings)\n config.registry.sessionmaker = Session.session_factory\n\n # Lazy-loaded memoized request properties\n if session:\n config.add_request_method(lambda _: session, 'db', reify=True)\n else:\n config.add_request_method(get_db_session_for_request, 'db', reify=True)\n\n config.add_request_method(get_user, 'user', reify=True)\n config.add_request_method(get_koji, 'koji', reify=True)\n config.add_request_method(get_cacheregion, 'cache', reify=True)\n config.add_request_method(get_buildinfo, 'buildinfo', reify=True)\n config.add_request_method(get_releases, 'releases', reify=True)\n\n # Templating\n config.add_mako_renderer('.html', settings_prefix='mako.')\n config.add_static_view('static', 'bodhi:server/static')\n\n from bodhi.server.renderers import rss, jpeg\n config.add_renderer('rss', rss)\n config.add_renderer('jpeg', jpeg)\n config.add_renderer('jsonp', JSONP(param_name='callback'))\n\n # i18n\n config.add_translation_dirs('bodhi:server/locale/')\n\n # Authentication & Authorization\n if testing:\n # use a permissive security policy while running unit tests\n config.testing_securitypolicy(userid=testing, permissive=True)\n else:\n config.set_authentication_policy(AuthTktAuthenticationPolicy(\n settings['authtkt.secret'], callback=groupfinder,\n secure=asbool(settings['authtkt.secure']), hashalg='sha512'))\n config.set_authorization_policy(ACLAuthorizationPolicy())\n\n # Frontpage\n config.add_route('home', '/')\n\n # Views for creating new objects\n config.add_route('new_update', '/updates/new')\n 
config.add_route('new_override', '/overrides/new')\n config.add_route('new_stack', '/stacks/new')\n\n # Metrics\n config.add_route('metrics', '/metrics')\n config.add_route('masher_status', '/masher/')\n\n # Auto-completion search\n config.add_route('search_packages', '/search/packages')\n config.add_route('latest_candidates', '/latest_candidates')\n config.add_route('latest_builds', '/latest_builds')\n\n config.add_route('captcha_image', '/captcha/{cipherkey}/')\n\n # pyramid.openid\n config.add_route('login', '/login')\n config.add_view('bodhi.server.security.login', route_name='login')\n config.add_view('bodhi.server.security.login', context=HTTPForbidden)\n config.add_route('logout', '/logout')\n config.add_view('bodhi.server.security.logout', route_name='logout')\n config.add_route('verify_openid', pattern='/dologin.html')\n config.add_view('pyramid_fas_openid.verify_openid', route_name='verify_openid')\n\n config.add_route('api_version', '/api_version')\n\n # The only user preference we have.\n config.add_route('popup_toggle', '/popup_toggle')\n\n config.scan('bodhi.server.views')\n config.scan('bodhi.server.services')\n config.scan('bodhi.server.captcha')\n\n return config.make_wsgi_app()\n", "path": "bodhi/server/__init__.py"}, {"content": "# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\nimport os\nimport logging\n\nfrom pyramid.paster import get_appsettings\n\nlog = logging.getLogger('bodhi')\n\n\ndef get_configfile():\n configfile = None\n setupdir = os.path.join(os.path.dirname(os.path.dirname(__file__)), '..')\n if configfile:\n if not os.path.exists(configfile):\n log.error(\"Cannot find config: %s\" % configfile)\n return\n else:\n for cfg in (os.path.join(setupdir, 'development.ini'),\n '/etc/bodhi/production.ini'):\n if os.path.exists(cfg):\n configfile = cfg\n break\n else:\n log.error(\"Unable to find configuration to load!\")\n return configfile\n\n\nclass BodhiConfig(dict):\n loaded = False\n\n def __getitem__(self, *args, **kw):\n if not self.loaded:\n self.load_config()\n return super(BodhiConfig, self).__getitem__(*args, **kw)\n\n def get(self, *args, **kw):\n if not self.loaded:\n self.load_config()\n return super(BodhiConfig, self).get(*args, **kw)\n\n def pop(self, *args, **kw):\n if not self.loaded:\n self.load_config()\n return super(BodhiConfig, self).pop(*args, **kw)\n\n def copy(self, *args, **kw):\n if not self.loaded:\n self.load_config()\n return super(BodhiConfig, self).copy(*args, **kw)\n\n def load_config(self):\n configfile = get_configfile()\n self.update(get_appsettings(configfile))\n self.loaded = True\n\n\nconfig = BodhiConfig()\n", "path": "bodhi/server/config.py"}]} | 3,945 | 415 |
gh_patches_debug_24286 | rasdani/github-patches | git_diff | e-valuation__EvaP-1822 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Response status code of failed redemption is 200
As @niklasmohrin remarked in [#1790](https://github.com/e-valuation/EvaP/pull/1790/files#r962983692), in `evap.rewards.views.redeem_reward_points`, the status code of failed redemptions (e.g. due to `NotEnoughPoints` or `RedemptionEventExpired`) is set as 200 OK, even though no redemption points were saved.
Instead, the status code should be something like 400 Bad Request to underline that something went wrong.
@niklasmohrin added that `assertContains`, used in some tests in `evap.rewards.tests.test_views.TestIndexView`, needs to be adapted, as it asserts that the status code is 200 by default.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/rewards/views.py`
Content:
```
1 from datetime import datetime
2
3 from django.contrib import messages
4 from django.core.exceptions import BadRequest, SuspiciousOperation
5 from django.http import HttpResponse
6 from django.shortcuts import get_object_or_404, redirect, render
7 from django.utils.translation import get_language
8 from django.utils.translation import gettext as _
9 from django.views.decorators.http import require_POST
10
11 from evap.evaluation.auth import manager_required, reward_user_required
12 from evap.evaluation.models import Semester
13 from evap.evaluation.tools import AttachmentResponse, get_object_from_dict_pk_entry_or_logged_40x
14 from evap.rewards.exporters import RewardsExporter
15 from evap.rewards.forms import RewardPointRedemptionEventForm
16 from evap.rewards.models import (
17 NoPointsSelected,
18 NotEnoughPoints,
19 RedemptionEventExpired,
20 RewardPointGranting,
21 RewardPointRedemption,
22 RewardPointRedemptionEvent,
23 SemesterActivation,
24 )
25 from evap.rewards.tools import grant_eligible_reward_points_for_semester, reward_points_of_user, save_redemptions
26 from evap.staff.views import semester_view
27
28
29 @reward_user_required
30 def index(request):
31 if request.method == "POST":
32 redemptions = {}
33 try:
34 for key, value in request.POST.items():
35 if key.startswith("points-"):
36 event_id = int(key.rpartition("-")[2])
37 redemptions[event_id] = int(value)
38 except ValueError as e:
39 raise BadRequest from e
40
41 try:
42 save_redemptions(request, redemptions)
43 messages.success(request, _("You successfully redeemed your points."))
44 except (NoPointsSelected, NotEnoughPoints, RedemptionEventExpired) as error:
45 messages.warning(request, error)
46
47 total_points_available = reward_points_of_user(request.user)
48 reward_point_grantings = RewardPointGranting.objects.filter(user_profile=request.user)
49 reward_point_redemptions = RewardPointRedemption.objects.filter(user_profile=request.user)
50 events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by("date")
51
52 reward_point_actions = []
53 for granting in reward_point_grantings:
54 reward_point_actions.append(
55 (granting.granting_time, _("Reward for") + " " + granting.semester.name, granting.value, "")
56 )
57 for redemption in reward_point_redemptions:
58 reward_point_actions.append((redemption.redemption_time, redemption.event.name, "", redemption.value))
59
60 reward_point_actions.sort(key=lambda action: action[0], reverse=True)
61
62 template_data = dict(
63 reward_point_actions=reward_point_actions,
64 total_points_available=total_points_available,
65 events=events,
66 )
67 return render(request, "rewards_index.html", template_data)
68
69
70 @manager_required
71 def reward_point_redemption_events(request):
72 upcoming_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by("date")
73 past_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__lt=datetime.now()).order_by("-date")
74 template_data = dict(upcoming_events=upcoming_events, past_events=past_events)
75 return render(request, "rewards_reward_point_redemption_events.html", template_data)
76
77
78 @manager_required
79 def reward_point_redemption_event_create(request):
80 event = RewardPointRedemptionEvent()
81 form = RewardPointRedemptionEventForm(request.POST or None, instance=event)
82
83 if form.is_valid():
84 form.save()
85 messages.success(request, _("Successfully created event."))
86 return redirect("rewards:reward_point_redemption_events")
87
88 return render(request, "rewards_reward_point_redemption_event_form.html", dict(form=form))
89
90
91 @manager_required
92 def reward_point_redemption_event_edit(request, event_id):
93 event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)
94 form = RewardPointRedemptionEventForm(request.POST or None, instance=event)
95
96 if form.is_valid():
97 event = form.save()
98
99 messages.success(request, _("Successfully updated event."))
100 return redirect("rewards:reward_point_redemption_events")
101
102 return render(request, "rewards_reward_point_redemption_event_form.html", dict(event=event, form=form))
103
104
105 @require_POST
106 @manager_required
107 def reward_point_redemption_event_delete(request):
108 event = get_object_from_dict_pk_entry_or_logged_40x(RewardPointRedemptionEvent, request.POST, "event_id")
109
110 if not event.can_delete:
111 raise SuspiciousOperation("Deleting redemption event not allowed")
112 event.delete()
113 return HttpResponse() # 200 OK
114
115
116 @manager_required
117 def reward_point_redemption_event_export(request, event_id):
118 event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)
119
120 filename = _("RewardPoints") + f"-{event.date}-{event.name}-{get_language()}.xls"
121 response = AttachmentResponse(filename, content_type="application/vnd.ms-excel")
122
123 RewardsExporter().export(response, event.redemptions_by_user())
124
125 return response
126
127
128 @manager_required
129 def semester_activation(request, semester_id, active):
130 semester = get_object_or_404(Semester, id=semester_id)
131 active = active == "on"
132
133 SemesterActivation.objects.update_or_create(semester=semester, defaults={"is_active": active})
134 if active:
135 grant_eligible_reward_points_for_semester(request, semester)
136
137 return semester_view(request=request, semester_id=semester_id)
138
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/evap/rewards/views.py b/evap/rewards/views.py
--- a/evap/rewards/views.py
+++ b/evap/rewards/views.py
@@ -28,6 +28,8 @@
@reward_user_required
def index(request):
+ # pylint: disable=too-many-locals
+ status = 200
if request.method == "POST":
redemptions = {}
try:
@@ -43,6 +45,7 @@
messages.success(request, _("You successfully redeemed your points."))
except (NoPointsSelected, NotEnoughPoints, RedemptionEventExpired) as error:
messages.warning(request, error)
+ status = 400
total_points_available = reward_points_of_user(request.user)
reward_point_grantings = RewardPointGranting.objects.filter(user_profile=request.user)
@@ -64,7 +67,7 @@
total_points_available=total_points_available,
events=events,
)
- return render(request, "rewards_index.html", template_data)
+ return render(request, "rewards_index.html", template_data, status=status)
@manager_required
| {"golden_diff": "diff --git a/evap/rewards/views.py b/evap/rewards/views.py\n--- a/evap/rewards/views.py\n+++ b/evap/rewards/views.py\n@@ -28,6 +28,8 @@\n \n @reward_user_required\n def index(request):\n+ # pylint: disable=too-many-locals\n+ status = 200\n if request.method == \"POST\":\n redemptions = {}\n try:\n@@ -43,6 +45,7 @@\n messages.success(request, _(\"You successfully redeemed your points.\"))\n except (NoPointsSelected, NotEnoughPoints, RedemptionEventExpired) as error:\n messages.warning(request, error)\n+ status = 400\n \n total_points_available = reward_points_of_user(request.user)\n reward_point_grantings = RewardPointGranting.objects.filter(user_profile=request.user)\n@@ -64,7 +67,7 @@\n total_points_available=total_points_available,\n events=events,\n )\n- return render(request, \"rewards_index.html\", template_data)\n+ return render(request, \"rewards_index.html\", template_data, status=status)\n \n \n @manager_required\n", "issue": "Response status code of failed redemption is 200\nAs @niklasmohrin remarked in [#1790](https://github.com/e-valuation/EvaP/pull/1790/files#r962983692), in `evap.rewards.views.redeem_reward_points`, the status code of failed redemptions (e.g. due to `NotEnoughPoints` or `RedemptionEventExpired`) is set as 200 OK, even though no redemption points were saved. \r\n\r\nInstead, the status code should be something like 400 Bad Request to underline that something went wrong.\r\n@niklasmohrin added, that `assertContains`, used in some tests in `evap.rewards.tests.test_views.TestIndexView`, needs to adopted, as it asserts that the status code is 200 by default.\n", "before_files": [{"content": "from datetime import datetime\n\nfrom django.contrib import messages\nfrom django.core.exceptions import BadRequest, SuspiciousOperation\nfrom django.http import HttpResponse\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils.translation import get_language\nfrom django.utils.translation import gettext as _\nfrom django.views.decorators.http import require_POST\n\nfrom evap.evaluation.auth import manager_required, reward_user_required\nfrom evap.evaluation.models import Semester\nfrom evap.evaluation.tools import AttachmentResponse, get_object_from_dict_pk_entry_or_logged_40x\nfrom evap.rewards.exporters import RewardsExporter\nfrom evap.rewards.forms import RewardPointRedemptionEventForm\nfrom evap.rewards.models import (\n NoPointsSelected,\n NotEnoughPoints,\n RedemptionEventExpired,\n RewardPointGranting,\n RewardPointRedemption,\n RewardPointRedemptionEvent,\n SemesterActivation,\n)\nfrom evap.rewards.tools import grant_eligible_reward_points_for_semester, reward_points_of_user, save_redemptions\nfrom evap.staff.views import semester_view\n\n\n@reward_user_required\ndef index(request):\n if request.method == \"POST\":\n redemptions = {}\n try:\n for key, value in request.POST.items():\n if key.startswith(\"points-\"):\n event_id = int(key.rpartition(\"-\")[2])\n redemptions[event_id] = int(value)\n except ValueError as e:\n raise BadRequest from e\n\n try:\n save_redemptions(request, redemptions)\n messages.success(request, _(\"You successfully redeemed your points.\"))\n except (NoPointsSelected, NotEnoughPoints, RedemptionEventExpired) as error:\n messages.warning(request, error)\n\n total_points_available = reward_points_of_user(request.user)\n reward_point_grantings = RewardPointGranting.objects.filter(user_profile=request.user)\n reward_point_redemptions = 
RewardPointRedemption.objects.filter(user_profile=request.user)\n events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by(\"date\")\n\n reward_point_actions = []\n for granting in reward_point_grantings:\n reward_point_actions.append(\n (granting.granting_time, _(\"Reward for\") + \" \" + granting.semester.name, granting.value, \"\")\n )\n for redemption in reward_point_redemptions:\n reward_point_actions.append((redemption.redemption_time, redemption.event.name, \"\", redemption.value))\n\n reward_point_actions.sort(key=lambda action: action[0], reverse=True)\n\n template_data = dict(\n reward_point_actions=reward_point_actions,\n total_points_available=total_points_available,\n events=events,\n )\n return render(request, \"rewards_index.html\", template_data)\n\n\n@manager_required\ndef reward_point_redemption_events(request):\n upcoming_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by(\"date\")\n past_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__lt=datetime.now()).order_by(\"-date\")\n template_data = dict(upcoming_events=upcoming_events, past_events=past_events)\n return render(request, \"rewards_reward_point_redemption_events.html\", template_data)\n\n\n@manager_required\ndef reward_point_redemption_event_create(request):\n event = RewardPointRedemptionEvent()\n form = RewardPointRedemptionEventForm(request.POST or None, instance=event)\n\n if form.is_valid():\n form.save()\n messages.success(request, _(\"Successfully created event.\"))\n return redirect(\"rewards:reward_point_redemption_events\")\n\n return render(request, \"rewards_reward_point_redemption_event_form.html\", dict(form=form))\n\n\n@manager_required\ndef reward_point_redemption_event_edit(request, event_id):\n event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)\n form = RewardPointRedemptionEventForm(request.POST or None, instance=event)\n\n if form.is_valid():\n event = form.save()\n\n messages.success(request, _(\"Successfully updated event.\"))\n return redirect(\"rewards:reward_point_redemption_events\")\n\n return render(request, \"rewards_reward_point_redemption_event_form.html\", dict(event=event, form=form))\n\n\n@require_POST\n@manager_required\ndef reward_point_redemption_event_delete(request):\n event = get_object_from_dict_pk_entry_or_logged_40x(RewardPointRedemptionEvent, request.POST, \"event_id\")\n\n if not event.can_delete:\n raise SuspiciousOperation(\"Deleting redemption event not allowed\")\n event.delete()\n return HttpResponse() # 200 OK\n\n\n@manager_required\ndef reward_point_redemption_event_export(request, event_id):\n event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)\n\n filename = _(\"RewardPoints\") + f\"-{event.date}-{event.name}-{get_language()}.xls\"\n response = AttachmentResponse(filename, content_type=\"application/vnd.ms-excel\")\n\n RewardsExporter().export(response, event.redemptions_by_user())\n\n return response\n\n\n@manager_required\ndef semester_activation(request, semester_id, active):\n semester = get_object_or_404(Semester, id=semester_id)\n active = active == \"on\"\n\n SemesterActivation.objects.update_or_create(semester=semester, defaults={\"is_active\": active})\n if active:\n grant_eligible_reward_points_for_semester(request, semester)\n\n return semester_view(request=request, semester_id=semester_id)\n", "path": "evap/rewards/views.py"}], "after_files": [{"content": "from datetime import datetime\n\nfrom django.contrib import 
messages\nfrom django.core.exceptions import BadRequest, SuspiciousOperation\nfrom django.http import HttpResponse\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils.translation import get_language\nfrom django.utils.translation import gettext as _\nfrom django.views.decorators.http import require_POST\n\nfrom evap.evaluation.auth import manager_required, reward_user_required\nfrom evap.evaluation.models import Semester\nfrom evap.evaluation.tools import AttachmentResponse, get_object_from_dict_pk_entry_or_logged_40x\nfrom evap.rewards.exporters import RewardsExporter\nfrom evap.rewards.forms import RewardPointRedemptionEventForm\nfrom evap.rewards.models import (\n NoPointsSelected,\n NotEnoughPoints,\n RedemptionEventExpired,\n RewardPointGranting,\n RewardPointRedemption,\n RewardPointRedemptionEvent,\n SemesterActivation,\n)\nfrom evap.rewards.tools import grant_eligible_reward_points_for_semester, reward_points_of_user, save_redemptions\nfrom evap.staff.views import semester_view\n\n\n@reward_user_required\ndef index(request):\n # pylint: disable=too-many-locals\n status = 200\n if request.method == \"POST\":\n redemptions = {}\n try:\n for key, value in request.POST.items():\n if key.startswith(\"points-\"):\n event_id = int(key.rpartition(\"-\")[2])\n redemptions[event_id] = int(value)\n except ValueError as e:\n raise BadRequest from e\n\n try:\n save_redemptions(request, redemptions)\n messages.success(request, _(\"You successfully redeemed your points.\"))\n except (NoPointsSelected, NotEnoughPoints, RedemptionEventExpired) as error:\n messages.warning(request, error)\n status = 400\n\n total_points_available = reward_points_of_user(request.user)\n reward_point_grantings = RewardPointGranting.objects.filter(user_profile=request.user)\n reward_point_redemptions = RewardPointRedemption.objects.filter(user_profile=request.user)\n events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by(\"date\")\n\n reward_point_actions = []\n for granting in reward_point_grantings:\n reward_point_actions.append(\n (granting.granting_time, _(\"Reward for\") + \" \" + granting.semester.name, granting.value, \"\")\n )\n for redemption in reward_point_redemptions:\n reward_point_actions.append((redemption.redemption_time, redemption.event.name, \"\", redemption.value))\n\n reward_point_actions.sort(key=lambda action: action[0], reverse=True)\n\n template_data = dict(\n reward_point_actions=reward_point_actions,\n total_points_available=total_points_available,\n events=events,\n )\n return render(request, \"rewards_index.html\", template_data, status=status)\n\n\n@manager_required\ndef reward_point_redemption_events(request):\n upcoming_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by(\"date\")\n past_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__lt=datetime.now()).order_by(\"-date\")\n template_data = dict(upcoming_events=upcoming_events, past_events=past_events)\n return render(request, \"rewards_reward_point_redemption_events.html\", template_data)\n\n\n@manager_required\ndef reward_point_redemption_event_create(request):\n event = RewardPointRedemptionEvent()\n form = RewardPointRedemptionEventForm(request.POST or None, instance=event)\n\n if form.is_valid():\n form.save()\n messages.success(request, _(\"Successfully created event.\"))\n return redirect(\"rewards:reward_point_redemption_events\")\n\n return render(request, 
\"rewards_reward_point_redemption_event_form.html\", dict(form=form))\n\n\n@manager_required\ndef reward_point_redemption_event_edit(request, event_id):\n event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)\n form = RewardPointRedemptionEventForm(request.POST or None, instance=event)\n\n if form.is_valid():\n event = form.save()\n\n messages.success(request, _(\"Successfully updated event.\"))\n return redirect(\"rewards:reward_point_redemption_events\")\n\n return render(request, \"rewards_reward_point_redemption_event_form.html\", dict(event=event, form=form))\n\n\n@require_POST\n@manager_required\ndef reward_point_redemption_event_delete(request):\n event = get_object_from_dict_pk_entry_or_logged_40x(RewardPointRedemptionEvent, request.POST, \"event_id\")\n\n if not event.can_delete:\n raise SuspiciousOperation(\"Deleting redemption event not allowed\")\n event.delete()\n return HttpResponse() # 200 OK\n\n\n@manager_required\ndef reward_point_redemption_event_export(request, event_id):\n event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)\n\n filename = _(\"RewardPoints\") + f\"-{event.date}-{event.name}-{get_language()}.xls\"\n response = AttachmentResponse(filename, content_type=\"application/vnd.ms-excel\")\n\n RewardsExporter().export(response, event.redemptions_by_user())\n\n return response\n\n\n@manager_required\ndef semester_activation(request, semester_id, active):\n semester = get_object_or_404(Semester, id=semester_id)\n active = active == \"on\"\n\n SemesterActivation.objects.update_or_create(semester=semester, defaults={\"is_active\": active})\n if active:\n grant_eligible_reward_points_for_semester(request, semester)\n\n return semester_view(request=request, semester_id=semester_id)\n", "path": "evap/rewards/views.py"}]} | 1,924 | 255 |
gh_patches_debug_13133 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-701 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bump core agent version to 1.4.0
Please update the Python agent dependency of core agent to core agent v1.4.0.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/scout_apm/core/config.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import logging
5 import os
6 import re
7 import warnings
8
9 from scout_apm.compat import string_type
10 from scout_apm.core import platform_detection
11
12 logger = logging.getLogger(__name__)
13
14 key_regex = re.compile(r"[a-zA-Z0-9]{16}")
15
16
17 class ScoutConfig(object):
18 """
19 Configuration object for the ScoutApm agent.
20
21 Contains a list of configuration "layers". When a configuration key is
22 looked up, each layer is asked in turn if it knows the value. The first one
23 to answer affirmatively returns the value.
24 """
25
26 def __init__(self):
27 self.layers = [
28 Env(),
29 Python(),
30 Derived(self),
31 Defaults(),
32 Null(),
33 ]
34
35 def value(self, key):
36 value = self.locate_layer_for_key(key).value(key)
37 if key in CONVERSIONS:
38 return CONVERSIONS[key](value)
39 return value
40
41 def locate_layer_for_key(self, key):
42 for layer in self.layers:
43 if layer.has_config(key):
44 return layer
45
46 # Should be unreachable because Null returns None for all keys.
47 raise ValueError("key {!r} not found in any layer".format(key))
48
49 def log(self):
50 logger.debug("Configuration Loaded:")
51 for key in self.known_keys:
52 if key in self.secret_keys:
53 continue
54
55 layer = self.locate_layer_for_key(key)
56 logger.debug(
57 "%-9s: %s = %s",
58 layer.__class__.__name__,
59 key,
60 layer.value(key),
61 )
62
63 known_keys = [
64 "app_server",
65 "application_root",
66 "collect_remote_ip",
67 "core_agent_config_file",
68 "core_agent_dir",
69 "core_agent_download",
70 "core_agent_launch",
71 "core_agent_log_file",
72 "core_agent_log_level",
73 "core_agent_permissions",
74 "core_agent_socket_path",
75 "core_agent_version",
76 "disabled_instruments",
77 "download_url",
78 "framework",
79 "framework_version",
80 "hostname",
81 "ignore",
82 "key",
83 "log_level",
84 "monitor",
85 "name",
86 "revision_sha",
87 "scm_subdirectory",
88 "shutdown_message_enabled",
89 "shutdown_timeout_seconds",
90 ]
91
92 secret_keys = {"key"}
93
94 def core_agent_permissions(self):
95 try:
96 return int(str(self.value("core_agent_permissions")), 8)
97 except ValueError:
98 logger.exception(
99 "Invalid core_agent_permissions value, using default of 0o700"
100 )
101 return 0o700
102
103 @classmethod
104 def set(cls, **kwargs):
105 """
106 Sets a configuration value for the Scout agent. Values set here will
107 not override values set in ENV.
108 """
109 for key, value in kwargs.items():
110 SCOUT_PYTHON_VALUES[key] = value
111
112 @classmethod
113 def unset(cls, *keys):
114 """
115 Removes a configuration value for the Scout agent.
116 """
117 for key in keys:
118 SCOUT_PYTHON_VALUES.pop(key, None)
119
120 @classmethod
121 def reset_all(cls):
122 """
123 Remove all configuration settings set via `ScoutConfig.set(...)`.
124
125 This is meant for use in testing.
126 """
127 SCOUT_PYTHON_VALUES.clear()
128
129
130 # Module-level data, the ScoutConfig.set(key="value") adds to this
131 SCOUT_PYTHON_VALUES = {}
132
133
134 class Python(object):
135 """
136 A configuration overlay that lets other parts of python set values.
137 """
138
139 def has_config(self, key):
140 return key in SCOUT_PYTHON_VALUES
141
142 def value(self, key):
143 return SCOUT_PYTHON_VALUES[key]
144
145
146 class Env(object):
147 """
148 Reads configuration from environment by prefixing the key
149 requested with "SCOUT_"
150
151 Example: the `key` config looks for SCOUT_KEY
152 environment variable
153 """
154
155 def has_config(self, key):
156 env_key = self.modify_key(key)
157 return env_key in os.environ
158
159 def value(self, key):
160 env_key = self.modify_key(key)
161 return os.environ[env_key]
162
163 def modify_key(self, key):
164 env_key = ("SCOUT_" + key).upper()
165 return env_key
166
167
168 class Derived(object):
169 """
170 A configuration overlay that calculates from other values.
171 """
172
173 def __init__(self, config):
174 """
175 config argument is the overall ScoutConfig var, so we can lookup the
176 components of the derived info.
177 """
178 self.config = config
179
180 def has_config(self, key):
181 return self.lookup_func(key) is not None
182
183 def value(self, key):
184 return self.lookup_func(key)()
185
186 def lookup_func(self, key):
187 """
188 Returns the derive_#{key} function, or None if it isn't defined
189 """
190 func_name = "derive_" + key
191 return getattr(self, func_name, None)
192
193 def derive_core_agent_full_name(self):
194 triple = self.config.value("core_agent_triple")
195 if not platform_detection.is_valid_triple(triple):
196 warnings.warn("Invalid value for core_agent_triple: {}".format(triple))
197 return "{name}-{version}-{triple}".format(
198 name="scout_apm_core",
199 version=self.config.value("core_agent_version"),
200 triple=triple,
201 )
202
203 def derive_core_agent_triple(self):
204 return platform_detection.get_triple()
205
206
207 class Defaults(object):
208 """
209 Provides default values for important configurations
210 """
211
212 def __init__(self):
213 self.defaults = {
214 "app_server": "",
215 "application_root": os.getcwd(),
216 "collect_remote_ip": True,
217 "core_agent_dir": "/tmp/scout_apm_core",
218 "core_agent_download": True,
219 "core_agent_launch": True,
220 "core_agent_log_level": "info",
221 "core_agent_permissions": 700,
222 "core_agent_socket_path": "tcp://127.0.0.1:6590",
223 "core_agent_version": "v1.3.1", # can be an exact tag name, or 'latest'
224 "disabled_instruments": [],
225 "download_url": "https://s3-us-west-1.amazonaws.com/scout-public-downloads/apm_core_agent/release", # noqa: B950
226 "errors_batch_size": 5,
227 "errors_enabled": True,
228 "errors_ignored_exceptions": (),
229 "errors_host": "https://errors.scoutapm.com",
230 "framework": "",
231 "framework_version": "",
232 "hostname": None,
233 "key": "",
234 "monitor": False,
235 "name": "Python App",
236 "revision_sha": self._git_revision_sha(),
237 "scm_subdirectory": "",
238 "shutdown_message_enabled": True,
239 "shutdown_timeout_seconds": 2.0,
240 "uri_reporting": "filtered_params",
241 }
242
243 def _git_revision_sha(self):
244 # N.B. The environment variable SCOUT_REVISION_SHA may also be used,
245 # but that will be picked up by Env
246 return os.environ.get("HEROKU_SLUG_COMMIT", "")
247
248 def has_config(self, key):
249 return key in self.defaults
250
251 def value(self, key):
252 return self.defaults[key]
253
254
255 class Null(object):
256 """
257 Always answers that a key is present, but the value is None
258
259 Used as the last step of the layered configuration.
260 """
261
262 def has_config(self, key):
263 return True
264
265 def value(self, key):
266 return None
267
268
269 def convert_to_bool(value):
270 if isinstance(value, bool):
271 return value
272 if isinstance(value, string_type):
273 return value.lower() in ("yes", "true", "t", "1")
274 # Unknown type - default to false?
275 return False
276
277
278 def convert_to_float(value):
279 try:
280 return float(value)
281 except ValueError:
282 return 0.0
283
284
285 def convert_to_list(value):
286 if isinstance(value, list):
287 return value
288 if isinstance(value, tuple):
289 return list(value)
290 if isinstance(value, string_type):
291 # Split on commas
292 return [item.strip() for item in value.split(",") if item]
293 # Unknown type - default to empty?
294 return []
295
296
297 CONVERSIONS = {
298 "collect_remote_ip": convert_to_bool,
299 "core_agent_download": convert_to_bool,
300 "core_agent_launch": convert_to_bool,
301 "disabled_instruments": convert_to_list,
302 "ignore": convert_to_list,
303 "monitor": convert_to_bool,
304 "shutdown_message_enabled": convert_to_bool,
305 "shutdown_timeout_seconds": convert_to_float,
306 }
307
308
309 scout_config = ScoutConfig()
310
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/scout_apm/core/config.py b/src/scout_apm/core/config.py
--- a/src/scout_apm/core/config.py
+++ b/src/scout_apm/core/config.py
@@ -220,7 +220,7 @@
"core_agent_log_level": "info",
"core_agent_permissions": 700,
"core_agent_socket_path": "tcp://127.0.0.1:6590",
- "core_agent_version": "v1.3.1", # can be an exact tag name, or 'latest'
+ "core_agent_version": "v1.4.0", # can be an exact tag name, or 'latest'
"disabled_instruments": [],
"download_url": "https://s3-us-west-1.amazonaws.com/scout-public-downloads/apm_core_agent/release", # noqa: B950
"errors_batch_size": 5,
| {"golden_diff": "diff --git a/src/scout_apm/core/config.py b/src/scout_apm/core/config.py\n--- a/src/scout_apm/core/config.py\n+++ b/src/scout_apm/core/config.py\n@@ -220,7 +220,7 @@\n \"core_agent_log_level\": \"info\",\n \"core_agent_permissions\": 700,\n \"core_agent_socket_path\": \"tcp://127.0.0.1:6590\",\n- \"core_agent_version\": \"v1.3.1\", # can be an exact tag name, or 'latest'\n+ \"core_agent_version\": \"v1.4.0\", # can be an exact tag name, or 'latest'\n \"disabled_instruments\": [],\n \"download_url\": \"https://s3-us-west-1.amazonaws.com/scout-public-downloads/apm_core_agent/release\", # noqa: B950\n \"errors_batch_size\": 5,\n", "issue": "Bump core agent version to 1.4.0\nPlease update the Python agent dependency of core agent to core agent v1.4.0.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nimport os\nimport re\nimport warnings\n\nfrom scout_apm.compat import string_type\nfrom scout_apm.core import platform_detection\n\nlogger = logging.getLogger(__name__)\n\nkey_regex = re.compile(r\"[a-zA-Z0-9]{16}\")\n\n\nclass ScoutConfig(object):\n \"\"\"\n Configuration object for the ScoutApm agent.\n\n Contains a list of configuration \"layers\". When a configuration key is\n looked up, each layer is asked in turn if it knows the value. The first one\n to answer affirmatively returns the value.\n \"\"\"\n\n def __init__(self):\n self.layers = [\n Env(),\n Python(),\n Derived(self),\n Defaults(),\n Null(),\n ]\n\n def value(self, key):\n value = self.locate_layer_for_key(key).value(key)\n if key in CONVERSIONS:\n return CONVERSIONS[key](value)\n return value\n\n def locate_layer_for_key(self, key):\n for layer in self.layers:\n if layer.has_config(key):\n return layer\n\n # Should be unreachable because Null returns None for all keys.\n raise ValueError(\"key {!r} not found in any layer\".format(key))\n\n def log(self):\n logger.debug(\"Configuration Loaded:\")\n for key in self.known_keys:\n if key in self.secret_keys:\n continue\n\n layer = self.locate_layer_for_key(key)\n logger.debug(\n \"%-9s: %s = %s\",\n layer.__class__.__name__,\n key,\n layer.value(key),\n )\n\n known_keys = [\n \"app_server\",\n \"application_root\",\n \"collect_remote_ip\",\n \"core_agent_config_file\",\n \"core_agent_dir\",\n \"core_agent_download\",\n \"core_agent_launch\",\n \"core_agent_log_file\",\n \"core_agent_log_level\",\n \"core_agent_permissions\",\n \"core_agent_socket_path\",\n \"core_agent_version\",\n \"disabled_instruments\",\n \"download_url\",\n \"framework\",\n \"framework_version\",\n \"hostname\",\n \"ignore\",\n \"key\",\n \"log_level\",\n \"monitor\",\n \"name\",\n \"revision_sha\",\n \"scm_subdirectory\",\n \"shutdown_message_enabled\",\n \"shutdown_timeout_seconds\",\n ]\n\n secret_keys = {\"key\"}\n\n def core_agent_permissions(self):\n try:\n return int(str(self.value(\"core_agent_permissions\")), 8)\n except ValueError:\n logger.exception(\n \"Invalid core_agent_permissions value, using default of 0o700\"\n )\n return 0o700\n\n @classmethod\n def set(cls, **kwargs):\n \"\"\"\n Sets a configuration value for the Scout agent. 
Values set here will\n not override values set in ENV.\n \"\"\"\n for key, value in kwargs.items():\n SCOUT_PYTHON_VALUES[key] = value\n\n @classmethod\n def unset(cls, *keys):\n \"\"\"\n Removes a configuration value for the Scout agent.\n \"\"\"\n for key in keys:\n SCOUT_PYTHON_VALUES.pop(key, None)\n\n @classmethod\n def reset_all(cls):\n \"\"\"\n Remove all configuration settings set via `ScoutConfig.set(...)`.\n\n This is meant for use in testing.\n \"\"\"\n SCOUT_PYTHON_VALUES.clear()\n\n\n# Module-level data, the ScoutConfig.set(key=\"value\") adds to this\nSCOUT_PYTHON_VALUES = {}\n\n\nclass Python(object):\n \"\"\"\n A configuration overlay that lets other parts of python set values.\n \"\"\"\n\n def has_config(self, key):\n return key in SCOUT_PYTHON_VALUES\n\n def value(self, key):\n return SCOUT_PYTHON_VALUES[key]\n\n\nclass Env(object):\n \"\"\"\n Reads configuration from environment by prefixing the key\n requested with \"SCOUT_\"\n\n Example: the `key` config looks for SCOUT_KEY\n environment variable\n \"\"\"\n\n def has_config(self, key):\n env_key = self.modify_key(key)\n return env_key in os.environ\n\n def value(self, key):\n env_key = self.modify_key(key)\n return os.environ[env_key]\n\n def modify_key(self, key):\n env_key = (\"SCOUT_\" + key).upper()\n return env_key\n\n\nclass Derived(object):\n \"\"\"\n A configuration overlay that calculates from other values.\n \"\"\"\n\n def __init__(self, config):\n \"\"\"\n config argument is the overall ScoutConfig var, so we can lookup the\n components of the derived info.\n \"\"\"\n self.config = config\n\n def has_config(self, key):\n return self.lookup_func(key) is not None\n\n def value(self, key):\n return self.lookup_func(key)()\n\n def lookup_func(self, key):\n \"\"\"\n Returns the derive_#{key} function, or None if it isn't defined\n \"\"\"\n func_name = \"derive_\" + key\n return getattr(self, func_name, None)\n\n def derive_core_agent_full_name(self):\n triple = self.config.value(\"core_agent_triple\")\n if not platform_detection.is_valid_triple(triple):\n warnings.warn(\"Invalid value for core_agent_triple: {}\".format(triple))\n return \"{name}-{version}-{triple}\".format(\n name=\"scout_apm_core\",\n version=self.config.value(\"core_agent_version\"),\n triple=triple,\n )\n\n def derive_core_agent_triple(self):\n return platform_detection.get_triple()\n\n\nclass Defaults(object):\n \"\"\"\n Provides default values for important configurations\n \"\"\"\n\n def __init__(self):\n self.defaults = {\n \"app_server\": \"\",\n \"application_root\": os.getcwd(),\n \"collect_remote_ip\": True,\n \"core_agent_dir\": \"/tmp/scout_apm_core\",\n \"core_agent_download\": True,\n \"core_agent_launch\": True,\n \"core_agent_log_level\": \"info\",\n \"core_agent_permissions\": 700,\n \"core_agent_socket_path\": \"tcp://127.0.0.1:6590\",\n \"core_agent_version\": \"v1.3.1\", # can be an exact tag name, or 'latest'\n \"disabled_instruments\": [],\n \"download_url\": \"https://s3-us-west-1.amazonaws.com/scout-public-downloads/apm_core_agent/release\", # noqa: B950\n \"errors_batch_size\": 5,\n \"errors_enabled\": True,\n \"errors_ignored_exceptions\": (),\n \"errors_host\": \"https://errors.scoutapm.com\",\n \"framework\": \"\",\n \"framework_version\": \"\",\n \"hostname\": None,\n \"key\": \"\",\n \"monitor\": False,\n \"name\": \"Python App\",\n \"revision_sha\": self._git_revision_sha(),\n \"scm_subdirectory\": \"\",\n \"shutdown_message_enabled\": True,\n \"shutdown_timeout_seconds\": 2.0,\n \"uri_reporting\": 
\"filtered_params\",\n }\n\n def _git_revision_sha(self):\n # N.B. The environment variable SCOUT_REVISION_SHA may also be used,\n # but that will be picked up by Env\n return os.environ.get(\"HEROKU_SLUG_COMMIT\", \"\")\n\n def has_config(self, key):\n return key in self.defaults\n\n def value(self, key):\n return self.defaults[key]\n\n\nclass Null(object):\n \"\"\"\n Always answers that a key is present, but the value is None\n\n Used as the last step of the layered configuration.\n \"\"\"\n\n def has_config(self, key):\n return True\n\n def value(self, key):\n return None\n\n\ndef convert_to_bool(value):\n if isinstance(value, bool):\n return value\n if isinstance(value, string_type):\n return value.lower() in (\"yes\", \"true\", \"t\", \"1\")\n # Unknown type - default to false?\n return False\n\n\ndef convert_to_float(value):\n try:\n return float(value)\n except ValueError:\n return 0.0\n\n\ndef convert_to_list(value):\n if isinstance(value, list):\n return value\n if isinstance(value, tuple):\n return list(value)\n if isinstance(value, string_type):\n # Split on commas\n return [item.strip() for item in value.split(\",\") if item]\n # Unknown type - default to empty?\n return []\n\n\nCONVERSIONS = {\n \"collect_remote_ip\": convert_to_bool,\n \"core_agent_download\": convert_to_bool,\n \"core_agent_launch\": convert_to_bool,\n \"disabled_instruments\": convert_to_list,\n \"ignore\": convert_to_list,\n \"monitor\": convert_to_bool,\n \"shutdown_message_enabled\": convert_to_bool,\n \"shutdown_timeout_seconds\": convert_to_float,\n}\n\n\nscout_config = ScoutConfig()\n", "path": "src/scout_apm/core/config.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nimport os\nimport re\nimport warnings\n\nfrom scout_apm.compat import string_type\nfrom scout_apm.core import platform_detection\n\nlogger = logging.getLogger(__name__)\n\nkey_regex = re.compile(r\"[a-zA-Z0-9]{16}\")\n\n\nclass ScoutConfig(object):\n \"\"\"\n Configuration object for the ScoutApm agent.\n\n Contains a list of configuration \"layers\". When a configuration key is\n looked up, each layer is asked in turn if it knows the value. 
The first one\n to answer affirmatively returns the value.\n \"\"\"\n\n def __init__(self):\n self.layers = [\n Env(),\n Python(),\n Derived(self),\n Defaults(),\n Null(),\n ]\n\n def value(self, key):\n value = self.locate_layer_for_key(key).value(key)\n if key in CONVERSIONS:\n return CONVERSIONS[key](value)\n return value\n\n def locate_layer_for_key(self, key):\n for layer in self.layers:\n if layer.has_config(key):\n return layer\n\n # Should be unreachable because Null returns None for all keys.\n raise ValueError(\"key {!r} not found in any layer\".format(key))\n\n def log(self):\n logger.debug(\"Configuration Loaded:\")\n for key in self.known_keys:\n if key in self.secret_keys:\n continue\n\n layer = self.locate_layer_for_key(key)\n logger.debug(\n \"%-9s: %s = %s\",\n layer.__class__.__name__,\n key,\n layer.value(key),\n )\n\n known_keys = [\n \"app_server\",\n \"application_root\",\n \"collect_remote_ip\",\n \"core_agent_config_file\",\n \"core_agent_dir\",\n \"core_agent_download\",\n \"core_agent_launch\",\n \"core_agent_log_file\",\n \"core_agent_log_level\",\n \"core_agent_permissions\",\n \"core_agent_socket_path\",\n \"core_agent_version\",\n \"disabled_instruments\",\n \"download_url\",\n \"framework\",\n \"framework_version\",\n \"hostname\",\n \"ignore\",\n \"key\",\n \"log_level\",\n \"monitor\",\n \"name\",\n \"revision_sha\",\n \"scm_subdirectory\",\n \"shutdown_message_enabled\",\n \"shutdown_timeout_seconds\",\n ]\n\n secret_keys = {\"key\"}\n\n def core_agent_permissions(self):\n try:\n return int(str(self.value(\"core_agent_permissions\")), 8)\n except ValueError:\n logger.exception(\n \"Invalid core_agent_permissions value, using default of 0o700\"\n )\n return 0o700\n\n @classmethod\n def set(cls, **kwargs):\n \"\"\"\n Sets a configuration value for the Scout agent. 
Values set here will\n not override values set in ENV.\n \"\"\"\n for key, value in kwargs.items():\n SCOUT_PYTHON_VALUES[key] = value\n\n @classmethod\n def unset(cls, *keys):\n \"\"\"\n Removes a configuration value for the Scout agent.\n \"\"\"\n for key in keys:\n SCOUT_PYTHON_VALUES.pop(key, None)\n\n @classmethod\n def reset_all(cls):\n \"\"\"\n Remove all configuration settings set via `ScoutConfig.set(...)`.\n\n This is meant for use in testing.\n \"\"\"\n SCOUT_PYTHON_VALUES.clear()\n\n\n# Module-level data, the ScoutConfig.set(key=\"value\") adds to this\nSCOUT_PYTHON_VALUES = {}\n\n\nclass Python(object):\n \"\"\"\n A configuration overlay that lets other parts of python set values.\n \"\"\"\n\n def has_config(self, key):\n return key in SCOUT_PYTHON_VALUES\n\n def value(self, key):\n return SCOUT_PYTHON_VALUES[key]\n\n\nclass Env(object):\n \"\"\"\n Reads configuration from environment by prefixing the key\n requested with \"SCOUT_\"\n\n Example: the `key` config looks for SCOUT_KEY\n environment variable\n \"\"\"\n\n def has_config(self, key):\n env_key = self.modify_key(key)\n return env_key in os.environ\n\n def value(self, key):\n env_key = self.modify_key(key)\n return os.environ[env_key]\n\n def modify_key(self, key):\n env_key = (\"SCOUT_\" + key).upper()\n return env_key\n\n\nclass Derived(object):\n \"\"\"\n A configuration overlay that calculates from other values.\n \"\"\"\n\n def __init__(self, config):\n \"\"\"\n config argument is the overall ScoutConfig var, so we can lookup the\n components of the derived info.\n \"\"\"\n self.config = config\n\n def has_config(self, key):\n return self.lookup_func(key) is not None\n\n def value(self, key):\n return self.lookup_func(key)()\n\n def lookup_func(self, key):\n \"\"\"\n Returns the derive_#{key} function, or None if it isn't defined\n \"\"\"\n func_name = \"derive_\" + key\n return getattr(self, func_name, None)\n\n def derive_core_agent_full_name(self):\n triple = self.config.value(\"core_agent_triple\")\n if not platform_detection.is_valid_triple(triple):\n warnings.warn(\"Invalid value for core_agent_triple: {}\".format(triple))\n return \"{name}-{version}-{triple}\".format(\n name=\"scout_apm_core\",\n version=self.config.value(\"core_agent_version\"),\n triple=triple,\n )\n\n def derive_core_agent_triple(self):\n return platform_detection.get_triple()\n\n\nclass Defaults(object):\n \"\"\"\n Provides default values for important configurations\n \"\"\"\n\n def __init__(self):\n self.defaults = {\n \"app_server\": \"\",\n \"application_root\": os.getcwd(),\n \"collect_remote_ip\": True,\n \"core_agent_dir\": \"/tmp/scout_apm_core\",\n \"core_agent_download\": True,\n \"core_agent_launch\": True,\n \"core_agent_log_level\": \"info\",\n \"core_agent_permissions\": 700,\n \"core_agent_socket_path\": \"tcp://127.0.0.1:6590\",\n \"core_agent_version\": \"v1.4.0\", # can be an exact tag name, or 'latest'\n \"disabled_instruments\": [],\n \"download_url\": \"https://s3-us-west-1.amazonaws.com/scout-public-downloads/apm_core_agent/release\", # noqa: B950\n \"errors_batch_size\": 5,\n \"errors_enabled\": True,\n \"errors_ignored_exceptions\": (),\n \"errors_host\": \"https://errors.scoutapm.com\",\n \"framework\": \"\",\n \"framework_version\": \"\",\n \"hostname\": None,\n \"key\": \"\",\n \"monitor\": False,\n \"name\": \"Python App\",\n \"revision_sha\": self._git_revision_sha(),\n \"scm_subdirectory\": \"\",\n \"shutdown_message_enabled\": True,\n \"shutdown_timeout_seconds\": 2.0,\n \"uri_reporting\": 
\"filtered_params\",\n }\n\n def _git_revision_sha(self):\n # N.B. The environment variable SCOUT_REVISION_SHA may also be used,\n # but that will be picked up by Env\n return os.environ.get(\"HEROKU_SLUG_COMMIT\", \"\")\n\n def has_config(self, key):\n return key in self.defaults\n\n def value(self, key):\n return self.defaults[key]\n\n\nclass Null(object):\n \"\"\"\n Always answers that a key is present, but the value is None\n\n Used as the last step of the layered configuration.\n \"\"\"\n\n def has_config(self, key):\n return True\n\n def value(self, key):\n return None\n\n\ndef convert_to_bool(value):\n if isinstance(value, bool):\n return value\n if isinstance(value, string_type):\n return value.lower() in (\"yes\", \"true\", \"t\", \"1\")\n # Unknown type - default to false?\n return False\n\n\ndef convert_to_float(value):\n try:\n return float(value)\n except ValueError:\n return 0.0\n\n\ndef convert_to_list(value):\n if isinstance(value, list):\n return value\n if isinstance(value, tuple):\n return list(value)\n if isinstance(value, string_type):\n # Split on commas\n return [item.strip() for item in value.split(\",\") if item]\n # Unknown type - default to empty?\n return []\n\n\nCONVERSIONS = {\n \"collect_remote_ip\": convert_to_bool,\n \"core_agent_download\": convert_to_bool,\n \"core_agent_launch\": convert_to_bool,\n \"disabled_instruments\": convert_to_list,\n \"ignore\": convert_to_list,\n \"monitor\": convert_to_bool,\n \"shutdown_message_enabled\": convert_to_bool,\n \"shutdown_timeout_seconds\": convert_to_float,\n}\n\n\nscout_config = ScoutConfig()\n", "path": "src/scout_apm/core/config.py"}]} | 3,031 | 212 |
gh_patches_debug_54565 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-2832 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
set colorama upper bound to <0.4.4
colorama v0.4.4 (released in the last 24 hours) is missing an sdist, which trips up the homebrew packaging step of our [dbt release flow](https://github.com/fishtown-analytics/dbt-release/runs/1249693542). Let's set the [upper bound](https://github.com/fishtown-analytics/dbt/blob/dev/kiyoshi-kuromiya/core/setup.py#L67) to <0.4.4 instead of <0.5 for now.
--- END ISSUE ---
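A quick sanity check of the proposed pin helps confirm the fix direction. The sketch below uses the third-party `packaging` library (an assumption for illustration only; it is not a dependency declared in the file below) to show that `<0.4.4` excludes the broken release while still admitting earlier ones:

```python
from packaging.specifiers import SpecifierSet

old_bound = SpecifierSet(">=0.3.9,<0.5")    # current pin in core/setup.py
new_bound = SpecifierSet(">=0.3.9,<0.4.4")  # proposed pin

print("0.4.4" in old_bound)  # True  -> pip may resolve the sdist-less 0.4.4 release
print("0.4.4" in new_bound)  # False -> the broken release is excluded
print("0.4.3" in new_bound)  # True  -> the last known-good release still satisfies the pin
```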
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 6):
6 print('Error: dbt does not support this version of Python.')
7 print('Please upgrade to Python 3.6 or higher.')
8 sys.exit(1)
9
10
11 from setuptools import setup
12 try:
13 from setuptools import find_namespace_packages
14 except ImportError:
15 # the user has a downlevel version of setuptools.
16 print('Error: dbt requires setuptools v40.1.0 or higher.')
17 print('Please upgrade setuptools with "pip install --upgrade setuptools" '
18 'and try again')
19 sys.exit(1)
20
21
22 def read(fname):
23 return open(os.path.join(os.path.dirname(__file__), fname)).read()
24
25
26 package_name = "dbt-core"
27 package_version = "0.18.1rc1"
28 description = """dbt (data build tool) is a command line tool that helps \
29 analysts and engineers transform data in their warehouse more effectively"""
30
31
32 setup(
33 name=package_name,
34 version=package_version,
35 description=description,
36 long_description=description,
37 author="Fishtown Analytics",
38 author_email="[email protected]",
39 url="https://github.com/fishtown-analytics/dbt",
40 packages=find_namespace_packages(include=['dbt', 'dbt.*']),
41 package_data={
42 'dbt': [
43 'include/index.html',
44 'include/global_project/dbt_project.yml',
45 'include/global_project/docs/*.md',
46 'include/global_project/macros/*.sql',
47 'include/global_project/macros/**/*.sql',
48 'include/global_project/macros/**/**/*.sql',
49 'py.typed',
50 ]
51 },
52 test_suite='test',
53 entry_points={
54 'console_scripts': [
55 'dbt = dbt.main:main',
56 ],
57 },
58 scripts=[
59 'scripts/dbt',
60 ],
61 install_requires=[
62 'Jinja2==2.11.2',
63 'PyYAML>=3.11',
64 'sqlparse>=0.2.3,<0.4',
65 'networkx>=2.3,<3',
66 'minimal-snowplow-tracker==0.0.2',
67 'colorama>=0.3.9,<0.5',
68 'agate>=1.6,<2',
69 'isodate>=0.6,<0.7',
70 'json-rpc>=1.12,<2',
71 'werkzeug>=0.15,<0.17',
72 'dataclasses==0.6;python_version<"3.7"',
73 'hologram==0.0.10',
74 'logbook>=1.5,<1.6',
75 'typing-extensions>=3.7.4,<3.8',
76 # the following are all to match snowflake-connector-python
77 'requests>=2.18.0,<2.24.0',
78 'idna<2.10',
79 'cffi>=1.9,<1.15',
80 ],
81 zip_safe=False,
82 classifiers=[
83 'Development Status :: 5 - Production/Stable',
84
85 'License :: OSI Approved :: Apache Software License',
86
87 'Operating System :: Microsoft :: Windows',
88 'Operating System :: MacOS :: MacOS X',
89 'Operating System :: POSIX :: Linux',
90
91 'Programming Language :: Python :: 3.6',
92 'Programming Language :: Python :: 3.7',
93 'Programming Language :: Python :: 3.8',
94 ],
95 python_requires=">=3.6.3",
96 )
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/setup.py b/core/setup.py
--- a/core/setup.py
+++ b/core/setup.py
@@ -64,7 +64,7 @@
'sqlparse>=0.2.3,<0.4',
'networkx>=2.3,<3',
'minimal-snowplow-tracker==0.0.2',
- 'colorama>=0.3.9,<0.5',
+ 'colorama>=0.3.9,<0.4.4',
'agate>=1.6,<2',
'isodate>=0.6,<0.7',
'json-rpc>=1.12,<2',
| {"golden_diff": "diff --git a/core/setup.py b/core/setup.py\n--- a/core/setup.py\n+++ b/core/setup.py\n@@ -64,7 +64,7 @@\n 'sqlparse>=0.2.3,<0.4',\n 'networkx>=2.3,<3',\n 'minimal-snowplow-tracker==0.0.2',\n- 'colorama>=0.3.9,<0.5',\n+ 'colorama>=0.3.9,<0.4.4',\n 'agate>=1.6,<2',\n 'isodate>=0.6,<0.7',\n 'json-rpc>=1.12,<2',\n", "issue": "set colorama upper bound to <0.4.4\ncolorama v0.4.4 (released in the last 24 hours) is missing an sdist, which trips up the homebrew packaging step of our [dbt release flow](https://github.com/fishtown-analytics/dbt-release/runs/1249693542). Let's set the [upper bound](https://github.com/fishtown-analytics/dbt/blob/dev/kiyoshi-kuromiya/core/setup.py#L67) to <0.4.4 instead of <0.5 for now.\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\n\ndef read(fname):\n return open(os.path.join(os.path.dirname(__file__), fname)).read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"0.18.1rc1\"\ndescription = \"\"\"dbt (data build tool) is a command line tool that helps \\\nanalysts and engineers transform data in their warehouse more effectively\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=description,\n author=\"Fishtown Analytics\",\n author_email=\"[email protected]\",\n url=\"https://github.com/fishtown-analytics/dbt\",\n packages=find_namespace_packages(include=['dbt', 'dbt.*']),\n package_data={\n 'dbt': [\n 'include/index.html',\n 'include/global_project/dbt_project.yml',\n 'include/global_project/docs/*.md',\n 'include/global_project/macros/*.sql',\n 'include/global_project/macros/**/*.sql',\n 'include/global_project/macros/**/**/*.sql',\n 'py.typed',\n ]\n },\n test_suite='test',\n entry_points={\n 'console_scripts': [\n 'dbt = dbt.main:main',\n ],\n },\n scripts=[\n 'scripts/dbt',\n ],\n install_requires=[\n 'Jinja2==2.11.2',\n 'PyYAML>=3.11',\n 'sqlparse>=0.2.3,<0.4',\n 'networkx>=2.3,<3',\n 'minimal-snowplow-tracker==0.0.2',\n 'colorama>=0.3.9,<0.5',\n 'agate>=1.6,<2',\n 'isodate>=0.6,<0.7',\n 'json-rpc>=1.12,<2',\n 'werkzeug>=0.15,<0.17',\n 'dataclasses==0.6;python_version<\"3.7\"',\n 'hologram==0.0.10',\n 'logbook>=1.5,<1.6',\n 'typing-extensions>=3.7.4,<3.8',\n # the following are all to match snowflake-connector-python\n 'requests>=2.18.0,<2.24.0',\n 'idna<2.10',\n 'cffi>=1.9,<1.15',\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n python_requires=\">=3.6.3\",\n)\n", "path": "core/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or 
higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\n\ndef read(fname):\n return open(os.path.join(os.path.dirname(__file__), fname)).read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"0.18.1rc1\"\ndescription = \"\"\"dbt (data build tool) is a command line tool that helps \\\nanalysts and engineers transform data in their warehouse more effectively\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=description,\n author=\"Fishtown Analytics\",\n author_email=\"[email protected]\",\n url=\"https://github.com/fishtown-analytics/dbt\",\n packages=find_namespace_packages(include=['dbt', 'dbt.*']),\n package_data={\n 'dbt': [\n 'include/index.html',\n 'include/global_project/dbt_project.yml',\n 'include/global_project/docs/*.md',\n 'include/global_project/macros/*.sql',\n 'include/global_project/macros/**/*.sql',\n 'include/global_project/macros/**/**/*.sql',\n 'py.typed',\n ]\n },\n test_suite='test',\n entry_points={\n 'console_scripts': [\n 'dbt = dbt.main:main',\n ],\n },\n scripts=[\n 'scripts/dbt',\n ],\n install_requires=[\n 'Jinja2==2.11.2',\n 'PyYAML>=3.11',\n 'sqlparse>=0.2.3,<0.4',\n 'networkx>=2.3,<3',\n 'minimal-snowplow-tracker==0.0.2',\n 'colorama>=0.3.9,<0.4.4',\n 'agate>=1.6,<2',\n 'isodate>=0.6,<0.7',\n 'json-rpc>=1.12,<2',\n 'werkzeug>=0.15,<0.17',\n 'dataclasses==0.6;python_version<\"3.7\"',\n 'hologram==0.0.10',\n 'logbook>=1.5,<1.6',\n 'typing-extensions>=3.7.4,<3.8',\n # the following are all to match snowflake-connector-python\n 'requests>=2.18.0,<2.24.0',\n 'idna<2.10',\n 'cffi>=1.9,<1.15',\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n python_requires=\">=3.6.3\",\n)\n", "path": "core/setup.py"}]} | 1,347 | 148 |
gh_patches_debug_13897 | rasdani/github-patches | git_diff | deis__deis-3272 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Domain name validation: w3.domain.com
I tried to add a domain to my app: `deis domains:add w3.domain.com`
and got the following error:
`{u'domain': [u'Hostname does not look valid.']}`
w3.domain.com is a valid domain.
--- END ISSUE ---
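The per-label check inside `validate_domain()` (the same logic appears in the serializer file below) explains the rejection: the `label[-1].isdigit()` clause fails any label that merely ends in a digit. A minimal, self-contained sketch of just that check:

```python
import re

# Same pattern and per-label condition as controller/api/serializers.py, validate_domain()
allowed = re.compile(r"^(?!-)[a-z0-9-]{1,63}(?<!-)$", re.IGNORECASE)

for label in "w3.domain.com".split("."):
    match = allowed.match(label)
    invalid = not match or "--" in label or label[-1].isdigit() or label.isdigit()
    print(label, "rejected" if invalid else "ok")

# Prints "w3 rejected": label[-1].isdigit() is True for "w3", so a perfectly
# valid hostname such as w3.domain.com (or server1.domain.com) fails validation.
```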
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `controller/api/serializers.py`
Content:
```
1 """
2 Classes to serialize the RESTful representation of Deis API models.
3 """
4
5 from __future__ import unicode_literals
6
7 import json
8 import re
9
10 from django.conf import settings
11 from django.contrib.auth.models import User
12 from django.utils import timezone
13 from rest_framework import serializers
14 from rest_framework.validators import UniqueTogetherValidator
15
16 from api import models
17
18
19 PROCTYPE_MATCH = re.compile(r'^(?P<type>[a-z]+)')
20 MEMLIMIT_MATCH = re.compile(r'^(?P<mem>[0-9]+[BbKkMmGg])$')
21 CPUSHARE_MATCH = re.compile(r'^(?P<cpu>[0-9]+)$')
22 TAGKEY_MATCH = re.compile(r'^[a-z]+$')
23 TAGVAL_MATCH = re.compile(r'^\w+$')
24
25
26 class JSONFieldSerializer(serializers.Field):
27 def to_representation(self, obj):
28 return obj
29
30 def to_internal_value(self, data):
31 try:
32 val = json.loads(data)
33 except TypeError:
34 val = data
35 return val
36
37
38 class ModelSerializer(serializers.ModelSerializer):
39
40 uuid = serializers.ReadOnlyField()
41
42 def get_validators(self):
43 """
44 Hack to remove DRF's UniqueTogetherValidator when it concerns the UUID.
45
46 See https://github.com/deis/deis/pull/2898#discussion_r23105147
47 """
48 validators = super(ModelSerializer, self).get_validators()
49 for v in validators:
50 if isinstance(v, UniqueTogetherValidator) and 'uuid' in v.fields:
51 validators.remove(v)
52 return validators
53
54
55 class UserSerializer(serializers.ModelSerializer):
56 class Meta:
57 model = User
58 fields = ['email', 'username', 'password', 'first_name', 'last_name', 'is_superuser',
59 'is_staff', 'groups', 'user_permissions', 'last_login', 'date_joined',
60 'is_active']
61 read_only_fields = ['is_superuser', 'is_staff', 'groups',
62 'user_permissions', 'last_login', 'date_joined', 'is_active']
63 extra_kwargs = {'password': {'write_only': True}}
64
65 def create(self, validated_data):
66 now = timezone.now()
67 user = User(
68 email=validated_data.get('email'),
69 username=validated_data.get('username'),
70 last_login=now,
71 date_joined=now,
72 is_active=True
73 )
74 if validated_data.get('first_name'):
75 user.first_name = validated_data['first_name']
76 if validated_data.get('last_name'):
77 user.last_name = validated_data['last_name']
78 user.set_password(validated_data['password'])
79 # Make the first signup an admin / superuser
80 if not User.objects.filter(is_superuser=True).exists():
81 user.is_superuser = user.is_staff = True
82 user.save()
83 return user
84
85
86 class AdminUserSerializer(serializers.ModelSerializer):
87 """Serialize admin status for a User model."""
88
89 class Meta:
90 model = User
91 fields = ['username', 'is_superuser']
92 read_only_fields = ['username']
93
94
95 class AppSerializer(ModelSerializer):
96 """Serialize a :class:`~api.models.App` model."""
97
98 owner = serializers.ReadOnlyField(source='owner.username')
99 structure = JSONFieldSerializer(required=False)
100 created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
101 updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
102
103 class Meta:
104 """Metadata options for a :class:`AppSerializer`."""
105 model = models.App
106 fields = ['uuid', 'id', 'owner', 'url', 'structure', 'created', 'updated']
107 read_only_fields = ['uuid']
108
109
110 class BuildSerializer(ModelSerializer):
111 """Serialize a :class:`~api.models.Build` model."""
112
113 app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())
114 owner = serializers.ReadOnlyField(source='owner.username')
115 procfile = JSONFieldSerializer(required=False)
116 created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
117 updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
118
119 class Meta:
120 """Metadata options for a :class:`BuildSerializer`."""
121 model = models.Build
122 fields = ['owner', 'app', 'image', 'sha', 'procfile', 'dockerfile', 'created',
123 'updated', 'uuid']
124 read_only_fields = ['uuid']
125
126
127 class ConfigSerializer(ModelSerializer):
128 """Serialize a :class:`~api.models.Config` model."""
129
130 app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())
131 owner = serializers.ReadOnlyField(source='owner.username')
132 values = JSONFieldSerializer(required=False)
133 memory = JSONFieldSerializer(required=False)
134 cpu = JSONFieldSerializer(required=False)
135 tags = JSONFieldSerializer(required=False)
136 created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
137 updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
138
139 class Meta:
140 """Metadata options for a :class:`ConfigSerializer`."""
141 model = models.Config
142
143 def validate_memory(self, value):
144 for k, v in value.items():
145 if v is None: # use NoneType to unset a value
146 continue
147 if not re.match(PROCTYPE_MATCH, k):
148 raise serializers.ValidationError("Process types can only contain [a-z]")
149 if not re.match(MEMLIMIT_MATCH, str(v)):
150 raise serializers.ValidationError(
151 "Limit format: <number><unit>, where unit = B, K, M or G")
152 return value
153
154 def validate_cpu(self, value):
155 for k, v in value.items():
156 if v is None: # use NoneType to unset a value
157 continue
158 if not re.match(PROCTYPE_MATCH, k):
159 raise serializers.ValidationError("Process types can only contain [a-z]")
160 shares = re.match(CPUSHARE_MATCH, str(v))
161 if not shares:
162 raise serializers.ValidationError("CPU shares must be an integer")
163 for v in shares.groupdict().values():
164 try:
165 i = int(v)
166 except ValueError:
167 raise serializers.ValidationError("CPU shares must be an integer")
168 if i > 1024 or i < 0:
169 raise serializers.ValidationError("CPU shares must be between 0 and 1024")
170 return value
171
172 def validate_tags(self, value):
173 for k, v in value.items():
174 if v is None: # use NoneType to unset a value
175 continue
176 if not re.match(TAGKEY_MATCH, k):
177 raise serializers.ValidationError("Tag keys can only contain [a-z]")
178 if not re.match(TAGVAL_MATCH, str(v)):
179 raise serializers.ValidationError("Invalid tag value")
180 return value
181
182
183 class ReleaseSerializer(ModelSerializer):
184 """Serialize a :class:`~api.models.Release` model."""
185
186 app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())
187 owner = serializers.ReadOnlyField(source='owner.username')
188 created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
189 updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
190
191 class Meta:
192 """Metadata options for a :class:`ReleaseSerializer`."""
193 model = models.Release
194
195
196 class ContainerSerializer(ModelSerializer):
197 """Serialize a :class:`~api.models.Container` model."""
198
199 app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())
200 owner = serializers.ReadOnlyField(source='owner.username')
201 created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
202 updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
203 release = serializers.SerializerMethodField()
204
205 class Meta:
206 """Metadata options for a :class:`ContainerSerializer`."""
207 model = models.Container
208 fields = ['owner', 'app', 'release', 'type', 'num', 'state', 'created', 'updated', 'uuid']
209
210 def get_release(self, obj):
211 return "v{}".format(obj.release.version)
212
213
214 class KeySerializer(ModelSerializer):
215 """Serialize a :class:`~api.models.Key` model."""
216
217 owner = serializers.ReadOnlyField(source='owner.username')
218 created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
219 updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
220
221 class Meta:
222 """Metadata options for a KeySerializer."""
223 model = models.Key
224
225
226 class DomainSerializer(ModelSerializer):
227 """Serialize a :class:`~api.models.Domain` model."""
228
229 app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())
230 owner = serializers.ReadOnlyField(source='owner.username')
231 created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
232 updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
233
234 class Meta:
235 """Metadata options for a :class:`DomainSerializer`."""
236 model = models.Domain
237 fields = ['uuid', 'owner', 'created', 'updated', 'app', 'domain']
238
239 def validate_domain(self, value):
240 """
241 Check that the hostname is valid
242 """
243 if len(value) > 255:
244 raise serializers.ValidationError('Hostname must be 255 characters or less.')
245 if value[-1:] == ".":
246 value = value[:-1] # strip exactly one dot from the right, if present
247 labels = value.split('.')
248 if labels[0] == '*':
249 raise serializers.ValidationError(
250 'Adding a wildcard subdomain is currently not supported.')
251 allowed = re.compile("^(?!-)[a-z0-9-]{1,63}(?<!-)$", re.IGNORECASE)
252 for label in labels:
253 match = allowed.match(label)
254 if not match or '--' in label or label[-1].isdigit() or label.isdigit():
255 raise serializers.ValidationError('Hostname does not look valid.')
256 if models.Domain.objects.filter(domain=value).exists():
257 raise serializers.ValidationError(
258 "The domain {} is already in use by another app".format(value))
259 return value
260
261
262 class PushSerializer(ModelSerializer):
263 """Serialize a :class:`~api.models.Push` model."""
264
265 app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())
266 owner = serializers.ReadOnlyField(source='owner.username')
267 created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
268 updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)
269
270 class Meta:
271 """Metadata options for a :class:`PushSerializer`."""
272 model = models.Push
273 fields = ['uuid', 'owner', 'app', 'sha', 'fingerprint', 'receive_user', 'receive_repo',
274 'ssh_connection', 'ssh_original_command', 'created', 'updated']
275
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/controller/api/serializers.py b/controller/api/serializers.py
--- a/controller/api/serializers.py
+++ b/controller/api/serializers.py
@@ -251,7 +251,8 @@
allowed = re.compile("^(?!-)[a-z0-9-]{1,63}(?<!-)$", re.IGNORECASE)
for label in labels:
match = allowed.match(label)
- if not match or '--' in label or label[-1].isdigit() or label.isdigit():
+ if not match or '--' in label or label.isdigit() or \
+ len(labels) == 1 and any(char.isdigit() for char in label):
raise serializers.ValidationError('Hostname does not look valid.')
if models.Domain.objects.filter(domain=value).exists():
raise serializers.ValidationError(
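As a rough check of the patched condition, the sketch below re-implements only the boolean test from the diff above (not the full serializer): `w3.domain.com` now passes, while an all-numeric label is still refused.

```python
import re

allowed = re.compile(r"^(?!-)[a-z0-9-]{1,63}(?<!-)$", re.IGNORECASE)


def hostname_rejected(value):
    """Apply the patched per-label test from validate_domain()."""
    labels = value.split(".")
    return any(
        not allowed.match(label)
        or "--" in label
        or label.isdigit()
        or (len(labels) == 1 and any(char.isdigit() for char in label))
        for label in labels
    )


print(hostname_rejected("w3.domain.com"))  # False -> accepted after the patch
print(hostname_rejected("123"))            # True  -> a bare numeric label is still rejected
```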
| {"golden_diff": "diff --git a/controller/api/serializers.py b/controller/api/serializers.py\n--- a/controller/api/serializers.py\n+++ b/controller/api/serializers.py\n@@ -251,7 +251,8 @@\n allowed = re.compile(\"^(?!-)[a-z0-9-]{1,63}(?<!-)$\", re.IGNORECASE)\n for label in labels:\n match = allowed.match(label)\n- if not match or '--' in label or label[-1].isdigit() or label.isdigit():\n+ if not match or '--' in label or label.isdigit() or \\\n+ len(labels) == 1 and any(char.isdigit() for char in label):\n raise serializers.ValidationError('Hostname does not look valid.')\n if models.Domain.objects.filter(domain=value).exists():\n raise serializers.ValidationError(\n", "issue": "Domain name validation: w3.domain.com\nI tried to add a domain to my app: `deis domains:add w3.domain.com`\nand got the following error :\n`{u'domain': [u'Hostname does not look valid.']}`\nw3.domain.com is a valid domain.\n\n", "before_files": [{"content": "\"\"\"\nClasses to serialize the RESTful representation of Deis API models.\n\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport json\nimport re\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import User\nfrom django.utils import timezone\nfrom rest_framework import serializers\nfrom rest_framework.validators import UniqueTogetherValidator\n\nfrom api import models\n\n\nPROCTYPE_MATCH = re.compile(r'^(?P<type>[a-z]+)')\nMEMLIMIT_MATCH = re.compile(r'^(?P<mem>[0-9]+[BbKkMmGg])$')\nCPUSHARE_MATCH = re.compile(r'^(?P<cpu>[0-9]+)$')\nTAGKEY_MATCH = re.compile(r'^[a-z]+$')\nTAGVAL_MATCH = re.compile(r'^\\w+$')\n\n\nclass JSONFieldSerializer(serializers.Field):\n def to_representation(self, obj):\n return obj\n\n def to_internal_value(self, data):\n try:\n val = json.loads(data)\n except TypeError:\n val = data\n return val\n\n\nclass ModelSerializer(serializers.ModelSerializer):\n\n uuid = serializers.ReadOnlyField()\n\n def get_validators(self):\n \"\"\"\n Hack to remove DRF's UniqueTogetherValidator when it concerns the UUID.\n\n See https://github.com/deis/deis/pull/2898#discussion_r23105147\n \"\"\"\n validators = super(ModelSerializer, self).get_validators()\n for v in validators:\n if isinstance(v, UniqueTogetherValidator) and 'uuid' in v.fields:\n validators.remove(v)\n return validators\n\n\nclass UserSerializer(serializers.ModelSerializer):\n class Meta:\n model = User\n fields = ['email', 'username', 'password', 'first_name', 'last_name', 'is_superuser',\n 'is_staff', 'groups', 'user_permissions', 'last_login', 'date_joined',\n 'is_active']\n read_only_fields = ['is_superuser', 'is_staff', 'groups',\n 'user_permissions', 'last_login', 'date_joined', 'is_active']\n extra_kwargs = {'password': {'write_only': True}}\n\n def create(self, validated_data):\n now = timezone.now()\n user = User(\n email=validated_data.get('email'),\n username=validated_data.get('username'),\n last_login=now,\n date_joined=now,\n is_active=True\n )\n if validated_data.get('first_name'):\n user.first_name = validated_data['first_name']\n if validated_data.get('last_name'):\n user.last_name = validated_data['last_name']\n user.set_password(validated_data['password'])\n # Make the first signup an admin / superuser\n if not User.objects.filter(is_superuser=True).exists():\n user.is_superuser = user.is_staff = True\n user.save()\n return user\n\n\nclass AdminUserSerializer(serializers.ModelSerializer):\n \"\"\"Serialize admin status for a User model.\"\"\"\n\n class Meta:\n model = User\n fields = ['username', 'is_superuser']\n read_only_fields = 
['username']\n\n\nclass AppSerializer(ModelSerializer):\n \"\"\"Serialize a :class:`~api.models.App` model.\"\"\"\n\n owner = serializers.ReadOnlyField(source='owner.username')\n structure = JSONFieldSerializer(required=False)\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n\n class Meta:\n \"\"\"Metadata options for a :class:`AppSerializer`.\"\"\"\n model = models.App\n fields = ['uuid', 'id', 'owner', 'url', 'structure', 'created', 'updated']\n read_only_fields = ['uuid']\n\n\nclass BuildSerializer(ModelSerializer):\n \"\"\"Serialize a :class:`~api.models.Build` model.\"\"\"\n\n app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())\n owner = serializers.ReadOnlyField(source='owner.username')\n procfile = JSONFieldSerializer(required=False)\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n\n class Meta:\n \"\"\"Metadata options for a :class:`BuildSerializer`.\"\"\"\n model = models.Build\n fields = ['owner', 'app', 'image', 'sha', 'procfile', 'dockerfile', 'created',\n 'updated', 'uuid']\n read_only_fields = ['uuid']\n\n\nclass ConfigSerializer(ModelSerializer):\n \"\"\"Serialize a :class:`~api.models.Config` model.\"\"\"\n\n app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())\n owner = serializers.ReadOnlyField(source='owner.username')\n values = JSONFieldSerializer(required=False)\n memory = JSONFieldSerializer(required=False)\n cpu = JSONFieldSerializer(required=False)\n tags = JSONFieldSerializer(required=False)\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n\n class Meta:\n \"\"\"Metadata options for a :class:`ConfigSerializer`.\"\"\"\n model = models.Config\n\n def validate_memory(self, value):\n for k, v in value.items():\n if v is None: # use NoneType to unset a value\n continue\n if not re.match(PROCTYPE_MATCH, k):\n raise serializers.ValidationError(\"Process types can only contain [a-z]\")\n if not re.match(MEMLIMIT_MATCH, str(v)):\n raise serializers.ValidationError(\n \"Limit format: <number><unit>, where unit = B, K, M or G\")\n return value\n\n def validate_cpu(self, value):\n for k, v in value.items():\n if v is None: # use NoneType to unset a value\n continue\n if not re.match(PROCTYPE_MATCH, k):\n raise serializers.ValidationError(\"Process types can only contain [a-z]\")\n shares = re.match(CPUSHARE_MATCH, str(v))\n if not shares:\n raise serializers.ValidationError(\"CPU shares must be an integer\")\n for v in shares.groupdict().values():\n try:\n i = int(v)\n except ValueError:\n raise serializers.ValidationError(\"CPU shares must be an integer\")\n if i > 1024 or i < 0:\n raise serializers.ValidationError(\"CPU shares must be between 0 and 1024\")\n return value\n\n def validate_tags(self, value):\n for k, v in value.items():\n if v is None: # use NoneType to unset a value\n continue\n if not re.match(TAGKEY_MATCH, k):\n raise serializers.ValidationError(\"Tag keys can only contain [a-z]\")\n if not re.match(TAGVAL_MATCH, str(v)):\n raise serializers.ValidationError(\"Invalid tag value\")\n return value\n\n\nclass ReleaseSerializer(ModelSerializer):\n \"\"\"Serialize a :class:`~api.models.Release` 
model.\"\"\"\n\n app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())\n owner = serializers.ReadOnlyField(source='owner.username')\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n\n class Meta:\n \"\"\"Metadata options for a :class:`ReleaseSerializer`.\"\"\"\n model = models.Release\n\n\nclass ContainerSerializer(ModelSerializer):\n \"\"\"Serialize a :class:`~api.models.Container` model.\"\"\"\n\n app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())\n owner = serializers.ReadOnlyField(source='owner.username')\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n release = serializers.SerializerMethodField()\n\n class Meta:\n \"\"\"Metadata options for a :class:`ContainerSerializer`.\"\"\"\n model = models.Container\n fields = ['owner', 'app', 'release', 'type', 'num', 'state', 'created', 'updated', 'uuid']\n\n def get_release(self, obj):\n return \"v{}\".format(obj.release.version)\n\n\nclass KeySerializer(ModelSerializer):\n \"\"\"Serialize a :class:`~api.models.Key` model.\"\"\"\n\n owner = serializers.ReadOnlyField(source='owner.username')\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n\n class Meta:\n \"\"\"Metadata options for a KeySerializer.\"\"\"\n model = models.Key\n\n\nclass DomainSerializer(ModelSerializer):\n \"\"\"Serialize a :class:`~api.models.Domain` model.\"\"\"\n\n app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())\n owner = serializers.ReadOnlyField(source='owner.username')\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n\n class Meta:\n \"\"\"Metadata options for a :class:`DomainSerializer`.\"\"\"\n model = models.Domain\n fields = ['uuid', 'owner', 'created', 'updated', 'app', 'domain']\n\n def validate_domain(self, value):\n \"\"\"\n Check that the hostname is valid\n \"\"\"\n if len(value) > 255:\n raise serializers.ValidationError('Hostname must be 255 characters or less.')\n if value[-1:] == \".\":\n value = value[:-1] # strip exactly one dot from the right, if present\n labels = value.split('.')\n if labels[0] == '*':\n raise serializers.ValidationError(\n 'Adding a wildcard subdomain is currently not supported.')\n allowed = re.compile(\"^(?!-)[a-z0-9-]{1,63}(?<!-)$\", re.IGNORECASE)\n for label in labels:\n match = allowed.match(label)\n if not match or '--' in label or label[-1].isdigit() or label.isdigit():\n raise serializers.ValidationError('Hostname does not look valid.')\n if models.Domain.objects.filter(domain=value).exists():\n raise serializers.ValidationError(\n \"The domain {} is already in use by another app\".format(value))\n return value\n\n\nclass PushSerializer(ModelSerializer):\n \"\"\"Serialize a :class:`~api.models.Push` model.\"\"\"\n\n app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())\n owner = serializers.ReadOnlyField(source='owner.username')\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = 
serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n\n class Meta:\n \"\"\"Metadata options for a :class:`PushSerializer`.\"\"\"\n model = models.Push\n fields = ['uuid', 'owner', 'app', 'sha', 'fingerprint', 'receive_user', 'receive_repo',\n 'ssh_connection', 'ssh_original_command', 'created', 'updated']\n", "path": "controller/api/serializers.py"}], "after_files": [{"content": "\"\"\"\nClasses to serialize the RESTful representation of Deis API models.\n\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport json\nimport re\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import User\nfrom django.utils import timezone\nfrom rest_framework import serializers\nfrom rest_framework.validators import UniqueTogetherValidator\n\nfrom api import models\n\n\nPROCTYPE_MATCH = re.compile(r'^(?P<type>[a-z]+)')\nMEMLIMIT_MATCH = re.compile(r'^(?P<mem>[0-9]+[BbKkMmGg])$')\nCPUSHARE_MATCH = re.compile(r'^(?P<cpu>[0-9]+)$')\nTAGKEY_MATCH = re.compile(r'^[a-z]+$')\nTAGVAL_MATCH = re.compile(r'^\\w+$')\n\n\nclass JSONFieldSerializer(serializers.Field):\n def to_representation(self, obj):\n return obj\n\n def to_internal_value(self, data):\n try:\n val = json.loads(data)\n except TypeError:\n val = data\n return val\n\n\nclass ModelSerializer(serializers.ModelSerializer):\n\n uuid = serializers.ReadOnlyField()\n\n def get_validators(self):\n \"\"\"\n Hack to remove DRF's UniqueTogetherValidator when it concerns the UUID.\n\n See https://github.com/deis/deis/pull/2898#discussion_r23105147\n \"\"\"\n validators = super(ModelSerializer, self).get_validators()\n for v in validators:\n if isinstance(v, UniqueTogetherValidator) and 'uuid' in v.fields:\n validators.remove(v)\n return validators\n\n\nclass UserSerializer(serializers.ModelSerializer):\n class Meta:\n model = User\n fields = ['email', 'username', 'password', 'first_name', 'last_name', 'is_superuser',\n 'is_staff', 'groups', 'user_permissions', 'last_login', 'date_joined',\n 'is_active']\n read_only_fields = ['is_superuser', 'is_staff', 'groups',\n 'user_permissions', 'last_login', 'date_joined', 'is_active']\n extra_kwargs = {'password': {'write_only': True}}\n\n def create(self, validated_data):\n now = timezone.now()\n user = User(\n email=validated_data.get('email'),\n username=validated_data.get('username'),\n last_login=now,\n date_joined=now,\n is_active=True\n )\n if validated_data.get('first_name'):\n user.first_name = validated_data['first_name']\n if validated_data.get('last_name'):\n user.last_name = validated_data['last_name']\n user.set_password(validated_data['password'])\n # Make the first signup an admin / superuser\n if not User.objects.filter(is_superuser=True).exists():\n user.is_superuser = user.is_staff = True\n user.save()\n return user\n\n\nclass AdminUserSerializer(serializers.ModelSerializer):\n \"\"\"Serialize admin status for a User model.\"\"\"\n\n class Meta:\n model = User\n fields = ['username', 'is_superuser']\n read_only_fields = ['username']\n\n\nclass AppSerializer(ModelSerializer):\n \"\"\"Serialize a :class:`~api.models.App` model.\"\"\"\n\n owner = serializers.ReadOnlyField(source='owner.username')\n structure = JSONFieldSerializer(required=False)\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n\n class Meta:\n \"\"\"Metadata options for a :class:`AppSerializer`.\"\"\"\n model = models.App\n fields = ['uuid', 'id', 'owner', 
'url', 'structure', 'created', 'updated']\n read_only_fields = ['uuid']\n\n\nclass BuildSerializer(ModelSerializer):\n \"\"\"Serialize a :class:`~api.models.Build` model.\"\"\"\n\n app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())\n owner = serializers.ReadOnlyField(source='owner.username')\n procfile = JSONFieldSerializer(required=False)\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n\n class Meta:\n \"\"\"Metadata options for a :class:`BuildSerializer`.\"\"\"\n model = models.Build\n fields = ['owner', 'app', 'image', 'sha', 'procfile', 'dockerfile', 'created',\n 'updated', 'uuid']\n read_only_fields = ['uuid']\n\n\nclass ConfigSerializer(ModelSerializer):\n \"\"\"Serialize a :class:`~api.models.Config` model.\"\"\"\n\n app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())\n owner = serializers.ReadOnlyField(source='owner.username')\n values = JSONFieldSerializer(required=False)\n memory = JSONFieldSerializer(required=False)\n cpu = JSONFieldSerializer(required=False)\n tags = JSONFieldSerializer(required=False)\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n\n class Meta:\n \"\"\"Metadata options for a :class:`ConfigSerializer`.\"\"\"\n model = models.Config\n\n def validate_memory(self, value):\n for k, v in value.items():\n if v is None: # use NoneType to unset a value\n continue\n if not re.match(PROCTYPE_MATCH, k):\n raise serializers.ValidationError(\"Process types can only contain [a-z]\")\n if not re.match(MEMLIMIT_MATCH, str(v)):\n raise serializers.ValidationError(\n \"Limit format: <number><unit>, where unit = B, K, M or G\")\n return value\n\n def validate_cpu(self, value):\n for k, v in value.items():\n if v is None: # use NoneType to unset a value\n continue\n if not re.match(PROCTYPE_MATCH, k):\n raise serializers.ValidationError(\"Process types can only contain [a-z]\")\n shares = re.match(CPUSHARE_MATCH, str(v))\n if not shares:\n raise serializers.ValidationError(\"CPU shares must be an integer\")\n for v in shares.groupdict().values():\n try:\n i = int(v)\n except ValueError:\n raise serializers.ValidationError(\"CPU shares must be an integer\")\n if i > 1024 or i < 0:\n raise serializers.ValidationError(\"CPU shares must be between 0 and 1024\")\n return value\n\n def validate_tags(self, value):\n for k, v in value.items():\n if v is None: # use NoneType to unset a value\n continue\n if not re.match(TAGKEY_MATCH, k):\n raise serializers.ValidationError(\"Tag keys can only contain [a-z]\")\n if not re.match(TAGVAL_MATCH, str(v)):\n raise serializers.ValidationError(\"Invalid tag value\")\n return value\n\n\nclass ReleaseSerializer(ModelSerializer):\n \"\"\"Serialize a :class:`~api.models.Release` model.\"\"\"\n\n app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())\n owner = serializers.ReadOnlyField(source='owner.username')\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n\n class Meta:\n \"\"\"Metadata options for a :class:`ReleaseSerializer`.\"\"\"\n model = models.Release\n\n\nclass ContainerSerializer(ModelSerializer):\n \"\"\"Serialize a 
:class:`~api.models.Container` model.\"\"\"\n\n app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())\n owner = serializers.ReadOnlyField(source='owner.username')\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n release = serializers.SerializerMethodField()\n\n class Meta:\n \"\"\"Metadata options for a :class:`ContainerSerializer`.\"\"\"\n model = models.Container\n fields = ['owner', 'app', 'release', 'type', 'num', 'state', 'created', 'updated', 'uuid']\n\n def get_release(self, obj):\n return \"v{}\".format(obj.release.version)\n\n\nclass KeySerializer(ModelSerializer):\n \"\"\"Serialize a :class:`~api.models.Key` model.\"\"\"\n\n owner = serializers.ReadOnlyField(source='owner.username')\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n\n class Meta:\n \"\"\"Metadata options for a KeySerializer.\"\"\"\n model = models.Key\n\n\nclass DomainSerializer(ModelSerializer):\n \"\"\"Serialize a :class:`~api.models.Domain` model.\"\"\"\n\n app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())\n owner = serializers.ReadOnlyField(source='owner.username')\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n\n class Meta:\n \"\"\"Metadata options for a :class:`DomainSerializer`.\"\"\"\n model = models.Domain\n fields = ['uuid', 'owner', 'created', 'updated', 'app', 'domain']\n\n def validate_domain(self, value):\n \"\"\"\n Check that the hostname is valid\n \"\"\"\n if len(value) > 255:\n raise serializers.ValidationError('Hostname must be 255 characters or less.')\n if value[-1:] == \".\":\n value = value[:-1] # strip exactly one dot from the right, if present\n labels = value.split('.')\n if labels[0] == '*':\n raise serializers.ValidationError(\n 'Adding a wildcard subdomain is currently not supported.')\n allowed = re.compile(\"^(?!-)[a-z0-9-]{1,63}(?<!-)$\", re.IGNORECASE)\n for label in labels:\n match = allowed.match(label)\n if not match or '--' in label or label.isdigit() or \\\n len(labels) == 1 and any(char.isdigit() for char in label):\n raise serializers.ValidationError('Hostname does not look valid.')\n if models.Domain.objects.filter(domain=value).exists():\n raise serializers.ValidationError(\n \"The domain {} is already in use by another app\".format(value))\n return value\n\n\nclass PushSerializer(ModelSerializer):\n \"\"\"Serialize a :class:`~api.models.Push` model.\"\"\"\n\n app = serializers.SlugRelatedField(slug_field='id', queryset=models.App.objects.all())\n owner = serializers.ReadOnlyField(source='owner.username')\n created = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n updated = serializers.DateTimeField(format=settings.DEIS_DATETIME_FORMAT, read_only=True)\n\n class Meta:\n \"\"\"Metadata options for a :class:`PushSerializer`.\"\"\"\n model = models.Push\n fields = ['uuid', 'owner', 'app', 'sha', 'fingerprint', 'receive_user', 'receive_repo',\n 'ssh_connection', 'ssh_original_command', 'created', 'updated']\n", "path": "controller/api/serializers.py"}]} | 3,358 | 176 |
gh_patches_debug_43011 | rasdani/github-patches | git_diff | nautobot__nautobot-1230 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Some GraphQL Queries for Relationships fail on 1.2
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, please start a
discussion instead: https://github.com/nautobot/nautobot/discussions
Please describe the environment in which you are running Nautobot. Be sure
that you are running an unmodified instance of the latest stable release
before submitting a bug report, and that any plugins have been disabled.
-->
### Environment
* Python version: 3.8
* Nautobot version: 1.2.2
<!--
Describe in detail the exact steps that someone else can take to reproduce
this bug using the current stable release of Nautobot. Begin with the
creation of any necessary database objects and call out every operation
being performed explicitly. If reporting a bug in the REST API, be sure to
reconstruct the raw HTTP request(s) being made: Don't rely on a client
library such as pynautobot.
-->
### Steps to Reproduce
1. Have a relationship, e.g., between a cluster and devices (one-to-many)
2. Run a graphql query like:
```
{
device(id:"bb91df1c-61ec-4a85-8138-cdb97c6c890e")
{
name
rel_cluster_to_device{
id
}
}
}
```
<!-- What did you expect to happen? -->
### Expected Behavior
Get the ID of the relationship
<!-- What happened instead? -->
### Observed Behavior
```
{
"errors": [
{
"message": "Cannot call only() after .values() or .values_list()",
"locations": [
{
"line": 5,
"column": 3
}
],
"path": [
"device",
"rel_cluster_to_device"
]
}
],
"data": {
"device": {
"name": "R10-S3",
"rel_cluster_to_device": null
}
}
}
```
--- END ISSUE ---
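One way to work around the failure, consistent with the patch shown further below, is to catch the `TypeError` the optimizer raises on `.values()`/`.values_list()` querysets and fall back to the un-optimized queryset. The helper below is a hypothetical sketch, not an existing Nautobot function:

```python
import logging

import graphene_django_optimizer as gql_optimizer

logger = logging.getLogger("nautobot.graphql.generators")


def query_with_fallback(queryset, info):
    """Try the optimizer first; return the plain queryset if it cannot cope."""
    try:
        return gql_optimizer.query(queryset, info)
    except TypeError:
        logger.debug("graphene_django_optimizer failed; returning un-optimized queryset")
        return queryset
```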
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nautobot/core/graphql/generators.py`
Content:
```
1 """Library of generators for GraphQL."""
2
3 import logging
4
5 import graphene
6 import graphene_django_optimizer as gql_optimizer
7 from graphql import GraphQLError
8 from graphene_django import DjangoObjectType
9
10 from nautobot.core.graphql.utils import str_to_var_name, get_filtering_args_from_filterset
11 from nautobot.extras.choices import RelationshipSideChoices
12 from nautobot.extras.models import RelationshipAssociation
13 from nautobot.utilities.utils import get_filterset_for_model
14
15 logger = logging.getLogger("nautobot.graphql.generators")
16 RESOLVER_PREFIX = "resolve_"
17
18
19 def generate_restricted_queryset():
20 """
21 Generate a function to return a restricted queryset compatible with the internal permissions system.
22
23 Note that for built-in models such as ContentType the queryset has no `restrict` method, so we have to
24 fail gracefully in that case.
25 """
26
27 def get_queryset(queryset, info):
28 if not hasattr(queryset, "restrict"):
29 logger.debug(f"Queryset {queryset} is not restrictable")
30 return queryset
31 return queryset.restrict(info.context.user, "view")
32
33 return get_queryset
34
35
36 def generate_null_choices_resolver(name, resolver_name):
37 """
38 Generate function to resolve appropriate type when a field has `null=False` (default), `blank=True`, and
39 `choices` defined.
40
41 Args:
42 name (str): name of the field to resolve
43 resolver_name (str): name of the resolver as declare in DjangoObjectType
44 """
45
46 def resolve_fields_w_choices(model, info, **kwargs):
47 field_value = getattr(model, name)
48 if field_value:
49 return field_value
50 return None
51
52 resolve_fields_w_choices.__name__ = resolver_name
53 return resolve_fields_w_choices
54
55
56 def generate_filter_resolver(schema_type, resolver_name, field_name):
57 """
58 Generate function to resolve OneToMany filtering.
59
60 Args:
61 schema_type (DjangoObjectType): DjangoObjectType for a given model
62 resolver_name (str): name of the resolver
63 field_name (str): name of OneToMany field to filter
64 """
65 filterset_class = schema_type._meta.filterset_class
66
67 def resolve_filter(self, *args, **kwargs):
68 if not filterset_class:
69 return getattr(self, field_name).all()
70
71 resolved_obj = filterset_class(kwargs, getattr(self, field_name).all())
72
73 # Check result filter for errors.
74 if not resolved_obj.errors:
75 return resolved_obj.qs.all()
76
77 errors = {}
78
79 # Build error message from results
80 # Error messages are collected from each filter object
81 for key in resolved_obj.errors:
82 errors[key] = resolved_obj.errors[key]
83
84 # Raising this exception will send the error message in the response of the GraphQL request
85 raise GraphQLError(errors)
86
87 resolve_filter.__name__ = resolver_name
88 return resolve_filter
89
90
91 def generate_custom_field_resolver(name, resolver_name):
92 """Generate function to resolve each custom field within each DjangoObjectType.
93
94 Args:
95 name (str): name of the custom field to resolve
96 resolver_name (str): name of the resolver as declare in DjangoObjectType
97 """
98
99 def resolve_custom_field(self, info, **kwargs):
100 return self.cf.get(name, None)
101
102 resolve_custom_field.__name__ = resolver_name
103 return resolve_custom_field
104
105
106 def generate_computed_field_resolver(name, resolver_name):
107 """Generate an instance method for resolving an individual computed field within a given DjangoObjectType.
108
109 Args:
110 name (str): name of the computed field to resolve
111 resolver_name (str): name of the resolver as declare in DjangoObjectType
112 """
113
114 def resolve_computed_field(self, info, **kwargs):
115 return self.get_computed_field(slug=name)
116
117 resolve_computed_field.__name__ = resolver_name
118 return resolve_computed_field
119
120
121 def generate_relationship_resolver(name, resolver_name, relationship, side, peer_model):
122 """Generate function to resolve each custom relationship within each DjangoObjectType.
123
124 Args:
125 name (str): name of the custom field to resolve
126 resolver_name (str): name of the resolver as declare in DjangoObjectType
127 relationship (Relationship): Relationship object to generate a resolver for
128 side (str): side of the relationship to use for the resolver
129 peer_model (Model): Django Model of the peer of this relationship
130 """
131
132 def resolve_relationship(self, info, **kwargs):
133 """Return a queryset or an object depending on the type of the relationship."""
134 peer_side = RelationshipSideChoices.OPPOSITE[side]
135 query_params = {"relationship": relationship}
136 if not relationship.symmetric:
137 # Get the objects on the other side of this relationship
138 query_params[f"{side}_id"] = self.pk
139 queryset_ids = gql_optimizer.query(
140 RelationshipAssociation.objects.filter(**query_params).values_list(f"{peer_side}_id", flat=True), info
141 )
142 else:
143 # Get objects that are peers for this relationship, regardless of side
144 queryset_ids = list(
145 gql_optimizer.query(
146 RelationshipAssociation.objects.filter(source_id=self.pk, **query_params).values_list(
147 "destination_id", flat=True
148 ),
149 info,
150 )
151 )
152 queryset_ids += list(
153 gql_optimizer.query(
154 RelationshipAssociation.objects.filter(destination_id=self.pk, **query_params).values_list(
155 "source_id", flat=True
156 ),
157 info,
158 )
159 )
160
161 if relationship.has_many(peer_side):
162 return gql_optimizer.query(peer_model.objects.filter(id__in=queryset_ids), info)
163
164 return gql_optimizer.query(peer_model.objects.filter(id__in=queryset_ids).first(), info)
165
166 resolve_relationship.__name__ = resolver_name
167 return resolve_relationship
168
169
170 def generate_schema_type(app_name: str, model: object) -> DjangoObjectType:
171 """
172 Take a Django model and generate a Graphene Type class definition.
173
174 Args:
175 app_name (str): name of the application or plugin the Model is part of.
176 model (object): Django Model
177
178 Example:
179 For a model with a name of "Device", the following class definition is generated:
180
181 class DeviceType(DjangoObjectType):
182 Meta:
183 model = Device
184 fields = ["__all__"]
185
186 If a FilterSet exists for this model at
187 '<app_name>.filters.<ModelName>FilterSet' the filterset will be stored in
188 filterset_class as follows:
189
190 class DeviceType(DjangoObjectType):
191 Meta:
192 model = Device
193 fields = ["__all__"]
194 filterset_class = DeviceFilterSet
195 """
196
197 main_attrs = {}
198 meta_attrs = {"model": model, "fields": "__all__"}
199
200 # We'll attempt to find a FilterSet corresponding to the model
201 # Not all models have a FilterSet defined so the function return none if it can't find a filterset
202 meta_attrs["filterset_class"] = get_filterset_for_model(model)
203
204 main_attrs["Meta"] = type("Meta", (object,), meta_attrs)
205
206 schema_type = type(f"{model.__name__}Type", (DjangoObjectType,), main_attrs)
207 return schema_type
208
209
210 def generate_list_search_parameters(schema_type):
211 """Generate list of query parameters for the list resolver based on a filterset."""
212
213 search_params = {}
214 if schema_type._meta.filterset_class is not None:
215 search_params = get_filtering_args_from_filterset(
216 schema_type._meta.filterset_class,
217 )
218
219 return search_params
220
221
222 def generate_single_item_resolver(schema_type, resolver_name):
223 """Generate a resolver for a single element of schema_type
224
225 Args:
226 schema_type (DjangoObjectType): DjangoObjectType for a given model
227 resolver_name (str): name of the resolver
228
229 Returns:
230 callable: Resolver function for a single element
231 """
232 model = schema_type._meta.model
233
234 def single_resolver(self, info, **kwargs):
235
236 obj_id = kwargs.get("id", None)
237 if obj_id:
238 return gql_optimizer.query(
239 model.objects.restrict(info.context.user, "view").filter(pk=obj_id), info
240 ).first()
241 return None
242
243 single_resolver.__name__ = resolver_name
244 return single_resolver
245
246
247 def generate_list_resolver(schema_type, resolver_name):
248 """
249 Generate resolver for a list of schema_type.
250
251 If a filterset_class is associated with the schema_type,
252 the resolver will pass all arguments received to the FilterSet
253 If not, it will return a restricted queryset for all objects
254
255 Args:
256 schema_type (DjangoObjectType): DjangoObjectType for a given model
257 resolver_name (str): name of the resolver
258
259 Returns:
260 callable: Resolver function for list of element
261 """
262 model = schema_type._meta.model
263
264 def list_resolver(self, info, **kwargs):
265 filterset_class = schema_type._meta.filterset_class
266 if filterset_class is not None:
267 resolved_obj = filterset_class(kwargs, model.objects.restrict(info.context.user, "view").all())
268
269 # Check result filter for errors.
270 if resolved_obj.errors:
271 errors = {}
272
273 # Build error message from results
274 # Error messages are collected from each filter object
275 for key in resolved_obj.errors:
276 errors[key] = resolved_obj.errors[key]
277
278 # Raising this exception will send the error message in the response of the GraphQL request
279 raise GraphQLError(errors)
280
281 return gql_optimizer.query(resolved_obj.qs.all(), info)
282
283 return gql_optimizer.query(model.objects.restrict(info.context.user, "view").all(), info)
284
285 list_resolver.__name__ = resolver_name
286 return list_resolver
287
288
289 def generate_attrs_for_schema_type(schema_type):
290 """Generate both attributes and resolvers for a given schema_type.
291
292 Args:
293 schema_type (DjangoObjectType): DjangoObjectType for a given model
294
295 Returns:
296 dict: Dict of attributes ready to merge into the QueryMixin class
297 """
298 attrs = {}
299 model = schema_type._meta.model
300
301 single_item_name = str_to_var_name(model._meta.verbose_name)
302 list_name = str_to_var_name(model._meta.verbose_name_plural)
303
304 # Define Attributes for single item and list with their search parameters
305 search_params = generate_list_search_parameters(schema_type)
306 attrs[single_item_name] = graphene.Field(schema_type, id=graphene.ID())
307 attrs[list_name] = graphene.List(schema_type, **search_params)
308
309 # Define Resolvers for both single item and list
310 single_item_resolver_name = f"{RESOLVER_PREFIX}{single_item_name}"
311 list_resolver_name = f"{RESOLVER_PREFIX}{list_name}"
312 attrs[single_item_resolver_name] = generate_single_item_resolver(schema_type, single_item_resolver_name)
313 attrs[list_resolver_name] = generate_list_resolver(schema_type, list_resolver_name)
314
315 return attrs
316
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nautobot/core/graphql/generators.py b/nautobot/core/graphql/generators.py
--- a/nautobot/core/graphql/generators.py
+++ b/nautobot/core/graphql/generators.py
@@ -133,35 +133,69 @@
"""Return a queryset or an object depending on the type of the relationship."""
peer_side = RelationshipSideChoices.OPPOSITE[side]
query_params = {"relationship": relationship}
+ # https://github.com/nautobot/nautobot/issues/1228
+ # If querying for **only** the ID of the related object, for example:
+ # { device(id:"...") { ... rel_my_relationship { id } } }
+ # we will get this exception:
+ # TypeError: Cannot call select_related() after .values() or .values_list()
+ # This appears to be a bug in graphene_django_optimizer but I haven't found a known issue on GitHub.
+ # For now we just work around it by catching the exception and retrying without optimization, below...
if not relationship.symmetric:
# Get the objects on the other side of this relationship
query_params[f"{side}_id"] = self.pk
- queryset_ids = gql_optimizer.query(
- RelationshipAssociation.objects.filter(**query_params).values_list(f"{peer_side}_id", flat=True), info
- )
+
+ try:
+ queryset_ids = gql_optimizer.query(
+ RelationshipAssociation.objects.filter(**query_params).values_list(f"{peer_side}_id", flat=True),
+ info,
+ )
+ except TypeError:
+ logger.debug("Caught TypeError in graphene_django_optimizer, falling back to un-optimized query")
+ queryset_ids = RelationshipAssociation.objects.filter(**query_params).values_list(
+ f"{peer_side}_id", flat=True
+ )
else:
# Get objects that are peers for this relationship, regardless of side
- queryset_ids = list(
- gql_optimizer.query(
+ try:
+ queryset_ids = list(
+ gql_optimizer.query(
+ RelationshipAssociation.objects.filter(source_id=self.pk, **query_params).values_list(
+ "destination_id", flat=True
+ ),
+ info,
+ )
+ )
+ queryset_ids += list(
+ gql_optimizer.query(
+ RelationshipAssociation.objects.filter(destination_id=self.pk, **query_params).values_list(
+ "source_id", flat=True
+ ),
+ info,
+ )
+ )
+ except TypeError:
+ logger.debug("Caught TypeError in graphene_django_optimizer, falling back to un-optimized query")
+ queryset_ids = list(
RelationshipAssociation.objects.filter(source_id=self.pk, **query_params).values_list(
"destination_id", flat=True
),
- info,
)
- )
- queryset_ids += list(
- gql_optimizer.query(
+ queryset_ids += list(
RelationshipAssociation.objects.filter(destination_id=self.pk, **query_params).values_list(
"source_id", flat=True
),
- info,
)
- )
if relationship.has_many(peer_side):
return gql_optimizer.query(peer_model.objects.filter(id__in=queryset_ids), info)
- return gql_optimizer.query(peer_model.objects.filter(id__in=queryset_ids).first(), info)
+ # Also apparently a graphene_django_optimizer bug - in the same query case as described above, here we may see:
+ # AttributeError: object has no attribute "only"
+ try:
+ return gql_optimizer.query(peer_model.objects.filter(id__in=queryset_ids).first(), info)
+ except AttributeError:
+ logger.debug("Caught AttributeError in graphene_django_optimizer, falling back to un-optimized query")
+ return peer_model.objects.filter(id__in=queryset_ids).first()
resolve_relationship.__name__ = resolver_name
return resolve_relationship
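
The essence of the fix above is a single fallback pattern: try the `graphene_django_optimizer` call first, and retry with the plain Django queryset if it raises `TypeError` (or `AttributeError` in the single-object case). A minimal sketch of that pattern follows, assuming the same `gql_optimizer.query(queryset, info)` API already used throughout `generators.py`; the helper name and the `id_field` parameter are illustrative and not part of the Nautobot codebase.

```python
# Sketch of the try/except fallback used in the patch above. Only the
# gql_optimizer.query(queryset, info) call mirrors the real library usage;
# the helper itself is a hypothetical illustration.
import logging

import graphene_django_optimizer as gql_optimizer

logger = logging.getLogger(__name__)


def resolve_ids_with_fallback(queryset, info, id_field="id"):
    """Return flat ids for a queryset, retrying without optimization on TypeError."""
    values = queryset.values_list(id_field, flat=True)
    try:
        return gql_optimizer.query(values, info)
    except TypeError:
        # Same failure the patch guards against:
        # "Cannot call select_related() after .values() or .values_list()"
        logger.debug("Optimizer raised TypeError; falling back to un-optimized query")
        return values
```
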
| {"golden_diff": "diff --git a/nautobot/core/graphql/generators.py b/nautobot/core/graphql/generators.py\n--- a/nautobot/core/graphql/generators.py\n+++ b/nautobot/core/graphql/generators.py\n@@ -133,35 +133,69 @@\n \"\"\"Return a queryset or an object depending on the type of the relationship.\"\"\"\n peer_side = RelationshipSideChoices.OPPOSITE[side]\n query_params = {\"relationship\": relationship}\n+ # https://github.com/nautobot/nautobot/issues/1228\n+ # If querying for **only** the ID of the related object, for example:\n+ # { device(id:\"...\") { ... rel_my_relationship { id } } }\n+ # we will get this exception:\n+ # TypeError: Cannot call select_related() after .values() or .values_list()\n+ # This appears to be a bug in graphene_django_optimizer but I haven't found a known issue on GitHub.\n+ # For now we just work around it by catching the exception and retrying without optimization, below...\n if not relationship.symmetric:\n # Get the objects on the other side of this relationship\n query_params[f\"{side}_id\"] = self.pk\n- queryset_ids = gql_optimizer.query(\n- RelationshipAssociation.objects.filter(**query_params).values_list(f\"{peer_side}_id\", flat=True), info\n- )\n+\n+ try:\n+ queryset_ids = gql_optimizer.query(\n+ RelationshipAssociation.objects.filter(**query_params).values_list(f\"{peer_side}_id\", flat=True),\n+ info,\n+ )\n+ except TypeError:\n+ logger.debug(\"Caught TypeError in graphene_django_optimizer, falling back to un-optimized query\")\n+ queryset_ids = RelationshipAssociation.objects.filter(**query_params).values_list(\n+ f\"{peer_side}_id\", flat=True\n+ )\n else:\n # Get objects that are peers for this relationship, regardless of side\n- queryset_ids = list(\n- gql_optimizer.query(\n+ try:\n+ queryset_ids = list(\n+ gql_optimizer.query(\n+ RelationshipAssociation.objects.filter(source_id=self.pk, **query_params).values_list(\n+ \"destination_id\", flat=True\n+ ),\n+ info,\n+ )\n+ )\n+ queryset_ids += list(\n+ gql_optimizer.query(\n+ RelationshipAssociation.objects.filter(destination_id=self.pk, **query_params).values_list(\n+ \"source_id\", flat=True\n+ ),\n+ info,\n+ )\n+ )\n+ except TypeError:\n+ logger.debug(\"Caught TypeError in graphene_django_optimizer, falling back to un-optimized query\")\n+ queryset_ids = list(\n RelationshipAssociation.objects.filter(source_id=self.pk, **query_params).values_list(\n \"destination_id\", flat=True\n ),\n- info,\n )\n- )\n- queryset_ids += list(\n- gql_optimizer.query(\n+ queryset_ids += list(\n RelationshipAssociation.objects.filter(destination_id=self.pk, **query_params).values_list(\n \"source_id\", flat=True\n ),\n- info,\n )\n- )\n \n if relationship.has_many(peer_side):\n return gql_optimizer.query(peer_model.objects.filter(id__in=queryset_ids), info)\n \n- return gql_optimizer.query(peer_model.objects.filter(id__in=queryset_ids).first(), info)\n+ # Also apparently a graphene_django_optimizer bug - in the same query case as described above, here we may see:\n+ # AttributeError: object has no attribute \"only\"\n+ try:\n+ return gql_optimizer.query(peer_model.objects.filter(id__in=queryset_ids).first(), info)\n+ except AttributeError:\n+ logger.debug(\"Caught AttributeError in graphene_django_optimizer, falling back to un-optimized query\")\n+ return peer_model.objects.filter(id__in=queryset_ids).first()\n \n resolve_relationship.__name__ = resolver_name\n return resolve_relationship\n", "issue": "Some GraphQL Queries for Relationships fail on 1.2\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, 
IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Python version: 3.8\r\n* Nautobot version: 1.2.2\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n-->\r\n### Steps to Reproduce\r\n1. Have a relationship e.g. between cluster and devices (one to many)\r\n2. Run a graphql query like: \r\n```{\r\ndevice(id:\"bb91df1c-61ec-4a85-8138-cdb97c6c890e\")\r\n{\r\n name\r\n rel_cluster_to_device{\r\n id\r\n }\r\n }\r\n}\r\n```\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\nGet the ID of the relationship\r\n\r\n<!-- What happened instead? -->\r\n### Observed Behavior\r\n```\r\n{\r\n \"errors\": [\r\n {\r\n \"message\": \"Cannot call only() after .values() or .values_list()\",\r\n \"locations\": [\r\n {\r\n \"line\": 5,\r\n \"column\": 3\r\n }\r\n ],\r\n \"path\": [\r\n \"device\",\r\n \"rel_cluster_to_device\"\r\n ]\r\n }\r\n ],\r\n \"data\": {\r\n \"device\": {\r\n \"name\": \"R10-S3\",\r\n \"rel_cluster_to_device\": null\r\n }\r\n }\r\n}\r\n```\n", "before_files": [{"content": "\"\"\"Library of generators for GraphQL.\"\"\"\n\nimport logging\n\nimport graphene\nimport graphene_django_optimizer as gql_optimizer\nfrom graphql import GraphQLError\nfrom graphene_django import DjangoObjectType\n\nfrom nautobot.core.graphql.utils import str_to_var_name, get_filtering_args_from_filterset\nfrom nautobot.extras.choices import RelationshipSideChoices\nfrom nautobot.extras.models import RelationshipAssociation\nfrom nautobot.utilities.utils import get_filterset_for_model\n\nlogger = logging.getLogger(\"nautobot.graphql.generators\")\nRESOLVER_PREFIX = \"resolve_\"\n\n\ndef generate_restricted_queryset():\n \"\"\"\n Generate a function to return a restricted queryset compatible with the internal permissions system.\n\n Note that for built-in models such as ContentType the queryset has no `restrict` method, so we have to\n fail gracefully in that case.\n \"\"\"\n\n def get_queryset(queryset, info):\n if not hasattr(queryset, \"restrict\"):\n logger.debug(f\"Queryset {queryset} is not restrictable\")\n return queryset\n return queryset.restrict(info.context.user, \"view\")\n\n return get_queryset\n\n\ndef generate_null_choices_resolver(name, resolver_name):\n \"\"\"\n Generate function to resolve appropriate type when a field has `null=False` (default), `blank=True`, and\n `choices` defined.\n\n Args:\n name (str): name of the field to resolve\n resolver_name (str): name of the resolver as declare in DjangoObjectType\n \"\"\"\n\n def resolve_fields_w_choices(model, info, **kwargs):\n field_value = getattr(model, name)\n if field_value:\n return field_value\n return None\n\n resolve_fields_w_choices.__name__ = resolver_name\n return 
resolve_fields_w_choices\n\n\ndef generate_filter_resolver(schema_type, resolver_name, field_name):\n \"\"\"\n Generate function to resolve OneToMany filtering.\n\n Args:\n schema_type (DjangoObjectType): DjangoObjectType for a given model\n resolver_name (str): name of the resolver\n field_name (str): name of OneToMany field to filter\n \"\"\"\n filterset_class = schema_type._meta.filterset_class\n\n def resolve_filter(self, *args, **kwargs):\n if not filterset_class:\n return getattr(self, field_name).all()\n\n resolved_obj = filterset_class(kwargs, getattr(self, field_name).all())\n\n # Check result filter for errors.\n if not resolved_obj.errors:\n return resolved_obj.qs.all()\n\n errors = {}\n\n # Build error message from results\n # Error messages are collected from each filter object\n for key in resolved_obj.errors:\n errors[key] = resolved_obj.errors[key]\n\n # Raising this exception will send the error message in the response of the GraphQL request\n raise GraphQLError(errors)\n\n resolve_filter.__name__ = resolver_name\n return resolve_filter\n\n\ndef generate_custom_field_resolver(name, resolver_name):\n \"\"\"Generate function to resolve each custom field within each DjangoObjectType.\n\n Args:\n name (str): name of the custom field to resolve\n resolver_name (str): name of the resolver as declare in DjangoObjectType\n \"\"\"\n\n def resolve_custom_field(self, info, **kwargs):\n return self.cf.get(name, None)\n\n resolve_custom_field.__name__ = resolver_name\n return resolve_custom_field\n\n\ndef generate_computed_field_resolver(name, resolver_name):\n \"\"\"Generate an instance method for resolving an individual computed field within a given DjangoObjectType.\n\n Args:\n name (str): name of the computed field to resolve\n resolver_name (str): name of the resolver as declare in DjangoObjectType\n \"\"\"\n\n def resolve_computed_field(self, info, **kwargs):\n return self.get_computed_field(slug=name)\n\n resolve_computed_field.__name__ = resolver_name\n return resolve_computed_field\n\n\ndef generate_relationship_resolver(name, resolver_name, relationship, side, peer_model):\n \"\"\"Generate function to resolve each custom relationship within each DjangoObjectType.\n\n Args:\n name (str): name of the custom field to resolve\n resolver_name (str): name of the resolver as declare in DjangoObjectType\n relationship (Relationship): Relationship object to generate a resolver for\n side (str): side of the relationship to use for the resolver\n peer_model (Model): Django Model of the peer of this relationship\n \"\"\"\n\n def resolve_relationship(self, info, **kwargs):\n \"\"\"Return a queryset or an object depending on the type of the relationship.\"\"\"\n peer_side = RelationshipSideChoices.OPPOSITE[side]\n query_params = {\"relationship\": relationship}\n if not relationship.symmetric:\n # Get the objects on the other side of this relationship\n query_params[f\"{side}_id\"] = self.pk\n queryset_ids = gql_optimizer.query(\n RelationshipAssociation.objects.filter(**query_params).values_list(f\"{peer_side}_id\", flat=True), info\n )\n else:\n # Get objects that are peers for this relationship, regardless of side\n queryset_ids = list(\n gql_optimizer.query(\n RelationshipAssociation.objects.filter(source_id=self.pk, **query_params).values_list(\n \"destination_id\", flat=True\n ),\n info,\n )\n )\n queryset_ids += list(\n gql_optimizer.query(\n RelationshipAssociation.objects.filter(destination_id=self.pk, **query_params).values_list(\n \"source_id\", flat=True\n ),\n info,\n )\n 
)\n\n if relationship.has_many(peer_side):\n return gql_optimizer.query(peer_model.objects.filter(id__in=queryset_ids), info)\n\n return gql_optimizer.query(peer_model.objects.filter(id__in=queryset_ids).first(), info)\n\n resolve_relationship.__name__ = resolver_name\n return resolve_relationship\n\n\ndef generate_schema_type(app_name: str, model: object) -> DjangoObjectType:\n \"\"\"\n Take a Django model and generate a Graphene Type class definition.\n\n Args:\n app_name (str): name of the application or plugin the Model is part of.\n model (object): Django Model\n\n Example:\n For a model with a name of \"Device\", the following class definition is generated:\n\n class DeviceType(DjangoObjectType):\n Meta:\n model = Device\n fields = [\"__all__\"]\n\n If a FilterSet exists for this model at\n '<app_name>.filters.<ModelName>FilterSet' the filterset will be stored in\n filterset_class as follows:\n\n class DeviceType(DjangoObjectType):\n Meta:\n model = Device\n fields = [\"__all__\"]\n filterset_class = DeviceFilterSet\n \"\"\"\n\n main_attrs = {}\n meta_attrs = {\"model\": model, \"fields\": \"__all__\"}\n\n # We'll attempt to find a FilterSet corresponding to the model\n # Not all models have a FilterSet defined so the function return none if it can't find a filterset\n meta_attrs[\"filterset_class\"] = get_filterset_for_model(model)\n\n main_attrs[\"Meta\"] = type(\"Meta\", (object,), meta_attrs)\n\n schema_type = type(f\"{model.__name__}Type\", (DjangoObjectType,), main_attrs)\n return schema_type\n\n\ndef generate_list_search_parameters(schema_type):\n \"\"\"Generate list of query parameters for the list resolver based on a filterset.\"\"\"\n\n search_params = {}\n if schema_type._meta.filterset_class is not None:\n search_params = get_filtering_args_from_filterset(\n schema_type._meta.filterset_class,\n )\n\n return search_params\n\n\ndef generate_single_item_resolver(schema_type, resolver_name):\n \"\"\"Generate a resolver for a single element of schema_type\n\n Args:\n schema_type (DjangoObjectType): DjangoObjectType for a given model\n resolver_name (str): name of the resolver\n\n Returns:\n callable: Resolver function for a single element\n \"\"\"\n model = schema_type._meta.model\n\n def single_resolver(self, info, **kwargs):\n\n obj_id = kwargs.get(\"id\", None)\n if obj_id:\n return gql_optimizer.query(\n model.objects.restrict(info.context.user, \"view\").filter(pk=obj_id), info\n ).first()\n return None\n\n single_resolver.__name__ = resolver_name\n return single_resolver\n\n\ndef generate_list_resolver(schema_type, resolver_name):\n \"\"\"\n Generate resolver for a list of schema_type.\n\n If a filterset_class is associated with the schema_type,\n the resolver will pass all arguments received to the FilterSet\n If not, it will return a restricted queryset for all objects\n\n Args:\n schema_type (DjangoObjectType): DjangoObjectType for a given model\n resolver_name (str): name of the resolver\n\n Returns:\n callable: Resolver function for list of element\n \"\"\"\n model = schema_type._meta.model\n\n def list_resolver(self, info, **kwargs):\n filterset_class = schema_type._meta.filterset_class\n if filterset_class is not None:\n resolved_obj = filterset_class(kwargs, model.objects.restrict(info.context.user, \"view\").all())\n\n # Check result filter for errors.\n if resolved_obj.errors:\n errors = {}\n\n # Build error message from results\n # Error messages are collected from each filter object\n for key in resolved_obj.errors:\n errors[key] = 
resolved_obj.errors[key]\n\n # Raising this exception will send the error message in the response of the GraphQL request\n raise GraphQLError(errors)\n\n return gql_optimizer.query(resolved_obj.qs.all(), info)\n\n return gql_optimizer.query(model.objects.restrict(info.context.user, \"view\").all(), info)\n\n list_resolver.__name__ = resolver_name\n return list_resolver\n\n\ndef generate_attrs_for_schema_type(schema_type):\n \"\"\"Generate both attributes and resolvers for a given schema_type.\n\n Args:\n schema_type (DjangoObjectType): DjangoObjectType for a given model\n\n Returns:\n dict: Dict of attributes ready to merge into the QueryMixin class\n \"\"\"\n attrs = {}\n model = schema_type._meta.model\n\n single_item_name = str_to_var_name(model._meta.verbose_name)\n list_name = str_to_var_name(model._meta.verbose_name_plural)\n\n # Define Attributes for single item and list with their search parameters\n search_params = generate_list_search_parameters(schema_type)\n attrs[single_item_name] = graphene.Field(schema_type, id=graphene.ID())\n attrs[list_name] = graphene.List(schema_type, **search_params)\n\n # Define Resolvers for both single item and list\n single_item_resolver_name = f\"{RESOLVER_PREFIX}{single_item_name}\"\n list_resolver_name = f\"{RESOLVER_PREFIX}{list_name}\"\n attrs[single_item_resolver_name] = generate_single_item_resolver(schema_type, single_item_resolver_name)\n attrs[list_resolver_name] = generate_list_resolver(schema_type, list_resolver_name)\n\n return attrs\n", "path": "nautobot/core/graphql/generators.py"}], "after_files": [{"content": "\"\"\"Library of generators for GraphQL.\"\"\"\n\nimport logging\n\nimport graphene\nimport graphene_django_optimizer as gql_optimizer\nfrom graphql import GraphQLError\nfrom graphene_django import DjangoObjectType\n\nfrom nautobot.core.graphql.utils import str_to_var_name, get_filtering_args_from_filterset\nfrom nautobot.extras.choices import RelationshipSideChoices\nfrom nautobot.extras.models import RelationshipAssociation\nfrom nautobot.utilities.utils import get_filterset_for_model\n\nlogger = logging.getLogger(\"nautobot.graphql.generators\")\nRESOLVER_PREFIX = \"resolve_\"\n\n\ndef generate_restricted_queryset():\n \"\"\"\n Generate a function to return a restricted queryset compatible with the internal permissions system.\n\n Note that for built-in models such as ContentType the queryset has no `restrict` method, so we have to\n fail gracefully in that case.\n \"\"\"\n\n def get_queryset(queryset, info):\n if not hasattr(queryset, \"restrict\"):\n logger.debug(f\"Queryset {queryset} is not restrictable\")\n return queryset\n return queryset.restrict(info.context.user, \"view\")\n\n return get_queryset\n\n\ndef generate_null_choices_resolver(name, resolver_name):\n \"\"\"\n Generate function to resolve appropriate type when a field has `null=False` (default), `blank=True`, and\n `choices` defined.\n\n Args:\n name (str): name of the field to resolve\n resolver_name (str): name of the resolver as declare in DjangoObjectType\n \"\"\"\n\n def resolve_fields_w_choices(model, info, **kwargs):\n field_value = getattr(model, name)\n if field_value:\n return field_value\n return None\n\n resolve_fields_w_choices.__name__ = resolver_name\n return resolve_fields_w_choices\n\n\ndef generate_filter_resolver(schema_type, resolver_name, field_name):\n \"\"\"\n Generate function to resolve OneToMany filtering.\n\n Args:\n schema_type (DjangoObjectType): DjangoObjectType for a given model\n resolver_name (str): name of the resolver\n 
field_name (str): name of OneToMany field to filter\n \"\"\"\n filterset_class = schema_type._meta.filterset_class\n\n def resolve_filter(self, *args, **kwargs):\n if not filterset_class:\n return getattr(self, field_name).all()\n\n resolved_obj = filterset_class(kwargs, getattr(self, field_name).all())\n\n # Check result filter for errors.\n if not resolved_obj.errors:\n return resolved_obj.qs.all()\n\n errors = {}\n\n # Build error message from results\n # Error messages are collected from each filter object\n for key in resolved_obj.errors:\n errors[key] = resolved_obj.errors[key]\n\n # Raising this exception will send the error message in the response of the GraphQL request\n raise GraphQLError(errors)\n\n resolve_filter.__name__ = resolver_name\n return resolve_filter\n\n\ndef generate_custom_field_resolver(name, resolver_name):\n \"\"\"Generate function to resolve each custom field within each DjangoObjectType.\n\n Args:\n name (str): name of the custom field to resolve\n resolver_name (str): name of the resolver as declare in DjangoObjectType\n \"\"\"\n\n def resolve_custom_field(self, info, **kwargs):\n return self.cf.get(name, None)\n\n resolve_custom_field.__name__ = resolver_name\n return resolve_custom_field\n\n\ndef generate_computed_field_resolver(name, resolver_name):\n \"\"\"Generate an instance method for resolving an individual computed field within a given DjangoObjectType.\n\n Args:\n name (str): name of the computed field to resolve\n resolver_name (str): name of the resolver as declare in DjangoObjectType\n \"\"\"\n\n def resolve_computed_field(self, info, **kwargs):\n return self.get_computed_field(slug=name)\n\n resolve_computed_field.__name__ = resolver_name\n return resolve_computed_field\n\n\ndef generate_relationship_resolver(name, resolver_name, relationship, side, peer_model):\n \"\"\"Generate function to resolve each custom relationship within each DjangoObjectType.\n\n Args:\n name (str): name of the custom field to resolve\n resolver_name (str): name of the resolver as declare in DjangoObjectType\n relationship (Relationship): Relationship object to generate a resolver for\n side (str): side of the relationship to use for the resolver\n peer_model (Model): Django Model of the peer of this relationship\n \"\"\"\n\n def resolve_relationship(self, info, **kwargs):\n \"\"\"Return a queryset or an object depending on the type of the relationship.\"\"\"\n peer_side = RelationshipSideChoices.OPPOSITE[side]\n query_params = {\"relationship\": relationship}\n # https://github.com/nautobot/nautobot/issues/1228\n # If querying for **only** the ID of the related object, for example:\n # { device(id:\"...\") { ... 
rel_my_relationship { id } } }\n # we will get this exception:\n # TypeError: Cannot call select_related() after .values() or .values_list()\n # This appears to be a bug in graphene_django_optimizer but I haven't found a known issue on GitHub.\n # For now we just work around it by catching the exception and retrying without optimization, below...\n if not relationship.symmetric:\n # Get the objects on the other side of this relationship\n query_params[f\"{side}_id\"] = self.pk\n\n try:\n queryset_ids = gql_optimizer.query(\n RelationshipAssociation.objects.filter(**query_params).values_list(f\"{peer_side}_id\", flat=True),\n info,\n )\n except TypeError:\n logger.debug(\"Caught TypeError in graphene_django_optimizer, falling back to un-optimized query\")\n queryset_ids = RelationshipAssociation.objects.filter(**query_params).values_list(\n f\"{peer_side}_id\", flat=True\n )\n else:\n # Get objects that are peers for this relationship, regardless of side\n try:\n queryset_ids = list(\n gql_optimizer.query(\n RelationshipAssociation.objects.filter(source_id=self.pk, **query_params).values_list(\n \"destination_id\", flat=True\n ),\n info,\n )\n )\n queryset_ids += list(\n gql_optimizer.query(\n RelationshipAssociation.objects.filter(destination_id=self.pk, **query_params).values_list(\n \"source_id\", flat=True\n ),\n info,\n )\n )\n except TypeError:\n logger.debug(\"Caught TypeError in graphene_django_optimizer, falling back to un-optimized query\")\n queryset_ids = list(\n RelationshipAssociation.objects.filter(source_id=self.pk, **query_params).values_list(\n \"destination_id\", flat=True\n ),\n )\n queryset_ids += list(\n RelationshipAssociation.objects.filter(destination_id=self.pk, **query_params).values_list(\n \"source_id\", flat=True\n ),\n )\n\n if relationship.has_many(peer_side):\n return gql_optimizer.query(peer_model.objects.filter(id__in=queryset_ids), info)\n\n # Also apparently a graphene_django_optimizer bug - in the same query case as described above, here we may see:\n # AttributeError: object has no attribute \"only\"\n try:\n return gql_optimizer.query(peer_model.objects.filter(id__in=queryset_ids).first(), info)\n except AttributeError:\n logger.debug(\"Caught AttributeError in graphene_django_optimizer, falling back to un-optimized query\")\n return peer_model.objects.filter(id__in=queryset_ids).first()\n\n resolve_relationship.__name__ = resolver_name\n return resolve_relationship\n\n\ndef generate_schema_type(app_name: str, model: object) -> DjangoObjectType:\n \"\"\"\n Take a Django model and generate a Graphene Type class definition.\n\n Args:\n app_name (str): name of the application or plugin the Model is part of.\n model (object): Django Model\n\n Example:\n For a model with a name of \"Device\", the following class definition is generated:\n\n class DeviceType(DjangoObjectType):\n Meta:\n model = Device\n fields = [\"__all__\"]\n\n If a FilterSet exists for this model at\n '<app_name>.filters.<ModelName>FilterSet' the filterset will be stored in\n filterset_class as follows:\n\n class DeviceType(DjangoObjectType):\n Meta:\n model = Device\n fields = [\"__all__\"]\n filterset_class = DeviceFilterSet\n \"\"\"\n\n main_attrs = {}\n meta_attrs = {\"model\": model, \"fields\": \"__all__\"}\n\n # We'll attempt to find a FilterSet corresponding to the model\n # Not all models have a FilterSet defined so the function return none if it can't find a filterset\n meta_attrs[\"filterset_class\"] = get_filterset_for_model(model)\n\n main_attrs[\"Meta\"] = type(\"Meta\", 
(object,), meta_attrs)\n\n schema_type = type(f\"{model.__name__}Type\", (DjangoObjectType,), main_attrs)\n return schema_type\n\n\ndef generate_list_search_parameters(schema_type):\n \"\"\"Generate list of query parameters for the list resolver based on a filterset.\"\"\"\n\n search_params = {}\n if schema_type._meta.filterset_class is not None:\n search_params = get_filtering_args_from_filterset(\n schema_type._meta.filterset_class,\n )\n\n return search_params\n\n\ndef generate_single_item_resolver(schema_type, resolver_name):\n \"\"\"Generate a resolver for a single element of schema_type\n\n Args:\n schema_type (DjangoObjectType): DjangoObjectType for a given model\n resolver_name (str): name of the resolver\n\n Returns:\n callable: Resolver function for a single element\n \"\"\"\n model = schema_type._meta.model\n\n def single_resolver(self, info, **kwargs):\n\n obj_id = kwargs.get(\"id\", None)\n if obj_id:\n return gql_optimizer.query(\n model.objects.restrict(info.context.user, \"view\").filter(pk=obj_id), info\n ).first()\n return None\n\n single_resolver.__name__ = resolver_name\n return single_resolver\n\n\ndef generate_list_resolver(schema_type, resolver_name):\n \"\"\"\n Generate resolver for a list of schema_type.\n\n If a filterset_class is associated with the schema_type,\n the resolver will pass all arguments received to the FilterSet\n If not, it will return a restricted queryset for all objects\n\n Args:\n schema_type (DjangoObjectType): DjangoObjectType for a given model\n resolver_name (str): name of the resolver\n\n Returns:\n callable: Resolver function for list of element\n \"\"\"\n model = schema_type._meta.model\n\n def list_resolver(self, info, **kwargs):\n filterset_class = schema_type._meta.filterset_class\n if filterset_class is not None:\n resolved_obj = filterset_class(kwargs, model.objects.restrict(info.context.user, \"view\").all())\n\n # Check result filter for errors.\n if resolved_obj.errors:\n errors = {}\n\n # Build error message from results\n # Error messages are collected from each filter object\n for key in resolved_obj.errors:\n errors[key] = resolved_obj.errors[key]\n\n # Raising this exception will send the error message in the response of the GraphQL request\n raise GraphQLError(errors)\n\n return gql_optimizer.query(resolved_obj.qs.all(), info)\n\n return gql_optimizer.query(model.objects.restrict(info.context.user, \"view\").all(), info)\n\n list_resolver.__name__ = resolver_name\n return list_resolver\n\n\ndef generate_attrs_for_schema_type(schema_type):\n \"\"\"Generate both attributes and resolvers for a given schema_type.\n\n Args:\n schema_type (DjangoObjectType): DjangoObjectType for a given model\n\n Returns:\n dict: Dict of attributes ready to merge into the QueryMixin class\n \"\"\"\n attrs = {}\n model = schema_type._meta.model\n\n single_item_name = str_to_var_name(model._meta.verbose_name)\n list_name = str_to_var_name(model._meta.verbose_name_plural)\n\n # Define Attributes for single item and list with their search parameters\n search_params = generate_list_search_parameters(schema_type)\n attrs[single_item_name] = graphene.Field(schema_type, id=graphene.ID())\n attrs[list_name] = graphene.List(schema_type, **search_params)\n\n # Define Resolvers for both single item and list\n single_item_resolver_name = f\"{RESOLVER_PREFIX}{single_item_name}\"\n list_resolver_name = f\"{RESOLVER_PREFIX}{list_name}\"\n attrs[single_item_resolver_name] = generate_single_item_resolver(schema_type, single_item_resolver_name)\n 
attrs[list_resolver_name] = generate_list_resolver(schema_type, list_resolver_name)\n\n return attrs\n", "path": "nautobot/core/graphql/generators.py"}]} | 3,924 | 838 |
gh_patches_debug_30520 | rasdani/github-patches | git_diff | ray-project__ray-4518 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tune] Add `--output` to the Tune docs
We should add --output to the docs.
_Originally posted by @richardliaw in https://github.com/ray-project/ray/pull/4322#issuecomment-477903993_
cc @andrewztan
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/ray/tune/commands.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 import glob
6 import json
7 import logging
8 import os
9 import sys
10 import subprocess
11 import operator
12 from datetime import datetime
13
14 import pandas as pd
15 from pandas.api.types import is_string_dtype, is_numeric_dtype
16 from ray.tune.util import flatten_dict
17 from ray.tune.result import TRAINING_ITERATION, MEAN_ACCURACY, MEAN_LOSS
18 from ray.tune.trial import Trial
19 try:
20 from tabulate import tabulate
21 except ImportError:
22 tabulate = None
23
24 logger = logging.getLogger(__name__)
25
26 EDITOR = os.getenv("EDITOR", "vim")
27
28 TIMESTAMP_FORMAT = "%Y-%m-%d %H:%M:%S (%A)"
29
30 DEFAULT_EXPERIMENT_INFO_KEYS = (
31 "trainable_name",
32 "experiment_tag",
33 "trial_id",
34 "status",
35 "last_update_time",
36 )
37
38 DEFAULT_RESULT_KEYS = (TRAINING_ITERATION, MEAN_ACCURACY, MEAN_LOSS)
39
40 DEFAULT_PROJECT_INFO_KEYS = (
41 "name",
42 "total_trials",
43 "running_trials",
44 "terminated_trials",
45 "error_trials",
46 "last_updated",
47 )
48
49 try:
50 TERM_HEIGHT, TERM_WIDTH = subprocess.check_output(["stty", "size"]).split()
51 TERM_HEIGHT, TERM_WIDTH = int(TERM_HEIGHT), int(TERM_WIDTH)
52 except subprocess.CalledProcessError:
53 TERM_HEIGHT, TERM_WIDTH = 100, 100
54
55 OPERATORS = {
56 '<': operator.lt,
57 '<=': operator.le,
58 '==': operator.eq,
59 '!=': operator.ne,
60 '>=': operator.ge,
61 '>': operator.gt,
62 }
63
64
65 def _check_tabulate():
66 """Checks whether tabulate is installed."""
67 if tabulate is None:
68 raise ImportError(
69 "Tabulate not installed. Please run `pip install tabulate`.")
70
71
72 def print_format_output(dataframe):
73 """Prints output of given dataframe to fit into terminal.
74
75 Returns:
76 table (pd.DataFrame): Final outputted dataframe.
77 dropped_cols (list): Columns dropped due to terminal size.
78 empty_cols (list): Empty columns (dropped on default).
79 """
80 print_df = pd.DataFrame()
81 dropped_cols = []
82 empty_cols = []
83 # column display priority is based on the info_keys passed in
84 for i, col in enumerate(dataframe):
85 if dataframe[col].isnull().all():
86 # Don't add col to print_df if is fully empty
87 empty_cols += [col]
88 continue
89
90 print_df[col] = dataframe[col]
91 test_table = tabulate(print_df, headers="keys", tablefmt="psql")
92 if str(test_table).index('\n') > TERM_WIDTH:
93 # Drop all columns beyond terminal width
94 print_df.drop(col, axis=1, inplace=True)
95 dropped_cols += list(dataframe.columns)[i:]
96 break
97
98 table = tabulate(
99 print_df, headers="keys", tablefmt="psql", showindex="never")
100
101 print(table)
102 if dropped_cols:
103 print("Dropped columns:", dropped_cols)
104 print("Please increase your terminal size to view remaining columns.")
105 if empty_cols:
106 print("Empty columns:", empty_cols)
107
108 return table, dropped_cols, empty_cols
109
110
111 def _get_experiment_state(experiment_path, exit_on_fail=False):
112 experiment_path = os.path.expanduser(experiment_path)
113 experiment_state_paths = glob.glob(
114 os.path.join(experiment_path, "experiment_state*.json"))
115 if not experiment_state_paths:
116 if exit_on_fail:
117 print("No experiment state found!")
118 sys.exit(0)
119 else:
120 return
121 experiment_filename = max(list(experiment_state_paths))
122
123 with open(experiment_filename) as f:
124 experiment_state = json.load(f)
125 return experiment_state
126
127
128 def list_trials(experiment_path,
129 sort=None,
130 output=None,
131 filter_op=None,
132 info_keys=DEFAULT_EXPERIMENT_INFO_KEYS,
133 result_keys=DEFAULT_RESULT_KEYS):
134 """Lists trials in the directory subtree starting at the given path.
135
136 Args:
137 experiment_path (str): Directory where trials are located.
138 Corresponds to Experiment.local_dir/Experiment.name.
139 sort (str): Key to sort by.
140 output (str): Name of file where output is saved.
141 filter_op (str): Filter operation in the format
142 "<column> <operator> <value>".
143 info_keys (list): Keys that are displayed.
144 result_keys (list): Keys of last result that are displayed.
145 """
146 _check_tabulate()
147 experiment_state = _get_experiment_state(
148 experiment_path, exit_on_fail=True)
149
150 checkpoint_dicts = experiment_state["checkpoints"]
151 checkpoint_dicts = [flatten_dict(g) for g in checkpoint_dicts]
152 checkpoints_df = pd.DataFrame(checkpoint_dicts)
153
154 result_keys = ["last_result:{}".format(k) for k in result_keys]
155 col_keys = [
156 k for k in list(info_keys) + result_keys if k in checkpoints_df
157 ]
158 checkpoints_df = checkpoints_df[col_keys]
159
160 if "last_update_time" in checkpoints_df:
161 with pd.option_context("mode.use_inf_as_null", True):
162 datetime_series = checkpoints_df["last_update_time"].dropna()
163
164 datetime_series = datetime_series.apply(
165 lambda t: datetime.fromtimestamp(t).strftime(TIMESTAMP_FORMAT))
166 checkpoints_df["last_update_time"] = datetime_series
167
168 if "logdir" in checkpoints_df:
169 # logdir often too verbose to view in table, so drop experiment_path
170 checkpoints_df["logdir"] = checkpoints_df["logdir"].str.replace(
171 experiment_path, '')
172
173 if filter_op:
174 col, op, val = filter_op.split(' ')
175 col_type = checkpoints_df[col].dtype
176 if is_numeric_dtype(col_type):
177 val = float(val)
178 elif is_string_dtype(col_type):
179 val = str(val)
180 # TODO(Andrew): add support for datetime and boolean
181 else:
182 raise ValueError("Unsupported dtype for '{}': {}".format(
183 val, col_type))
184 op = OPERATORS[op]
185 filtered_index = op(checkpoints_df[col], val)
186 checkpoints_df = checkpoints_df[filtered_index]
187
188 if sort:
189 if sort not in checkpoints_df:
190 raise KeyError("Sort Index '{}' not in: {}".format(
191 sort, list(checkpoints_df)))
192 checkpoints_df = checkpoints_df.sort_values(by=sort)
193
194 print_format_output(checkpoints_df)
195
196 if output:
197 experiment_path = os.path.expanduser(experiment_path)
198 output_path = os.path.join(experiment_path, output)
199 file_extension = os.path.splitext(output)[1].lower()
200 if file_extension in (".p", ".pkl", ".pickle"):
201 checkpoints_df.to_pickle(output_path)
202 elif file_extension == ".csv":
203 checkpoints_df.to_csv(output_path, index=False)
204 else:
205 raise ValueError("Unsupported filetype: {}".format(output))
206 print("Output saved at:", output_path)
207
208
209 def list_experiments(project_path,
210 sort=None,
211 output=None,
212 filter_op=None,
213 info_keys=DEFAULT_PROJECT_INFO_KEYS):
214 """Lists experiments in the directory subtree.
215
216 Args:
217 project_path (str): Directory where experiments are located.
218 Corresponds to Experiment.local_dir.
219 sort (str): Key to sort by.
220 output (str): Name of file where output is saved.
221 filter_op (str): Filter operation in the format
222 "<column> <operator> <value>".
223 info_keys (list): Keys that are displayed.
224 """
225 _check_tabulate()
226 base, experiment_folders, _ = next(os.walk(project_path))
227
228 experiment_data_collection = []
229
230 for experiment_dir in experiment_folders:
231 experiment_state = _get_experiment_state(
232 os.path.join(base, experiment_dir))
233 if not experiment_state:
234 logger.debug("No experiment state found in %s", experiment_dir)
235 continue
236
237 checkpoints = pd.DataFrame(experiment_state["checkpoints"])
238 runner_data = experiment_state["runner_data"]
239
240 # Format time-based values.
241 time_values = {
242 "start_time": runner_data.get("_start_time"),
243 "last_updated": experiment_state.get("timestamp"),
244 }
245
246 formatted_time_values = {
247 key: datetime.fromtimestamp(val).strftime(TIMESTAMP_FORMAT)
248 if val else None
249 for key, val in time_values.items()
250 }
251
252 experiment_data = {
253 "name": experiment_dir,
254 "total_trials": checkpoints.shape[0],
255 "running_trials": (checkpoints["status"] == Trial.RUNNING).sum(),
256 "terminated_trials": (
257 checkpoints["status"] == Trial.TERMINATED).sum(),
258 "error_trials": (checkpoints["status"] == Trial.ERROR).sum(),
259 }
260 experiment_data.update(formatted_time_values)
261 experiment_data_collection.append(experiment_data)
262
263 if not experiment_data_collection:
264 print("No experiments found!")
265 sys.exit(0)
266
267 info_df = pd.DataFrame(experiment_data_collection)
268 col_keys = [k for k in list(info_keys) if k in info_df]
269 if not col_keys:
270 print("None of keys {} in experiment data!".format(info_keys))
271 sys.exit(0)
272 info_df = info_df[col_keys]
273
274 if filter_op:
275 col, op, val = filter_op.split(' ')
276 col_type = info_df[col].dtype
277 if is_numeric_dtype(col_type):
278 val = float(val)
279 elif is_string_dtype(col_type):
280 val = str(val)
281 # TODO(Andrew): add support for datetime and boolean
282 else:
283 raise ValueError("Unsupported dtype for '{}': {}".format(
284 val, col_type))
285 op = OPERATORS[op]
286 filtered_index = op(info_df[col], val)
287 info_df = info_df[filtered_index]
288
289 if sort:
290 if sort not in info_df:
291 raise KeyError("Sort Index '{}' not in: {}".format(
292 sort, list(info_df)))
293 info_df = info_df.sort_values(by=sort)
294
295 print_format_output(info_df)
296
297 if output:
298 output_path = os.path.join(base, output)
299 file_extension = os.path.splitext(output)[1].lower()
300 if file_extension in (".p", ".pkl", ".pickle"):
301 info_df.to_pickle(output_path)
302 elif file_extension == ".csv":
303 info_df.to_csv(output_path, index=False)
304 else:
305 raise ValueError("Unsupported filetype: {}".format(output))
306 print("Output saved at:", output_path)
307
308
309 def add_note(path, filename="note.txt"):
310 """Opens a txt file at the given path where user can add and save notes.
311
312 Args:
313 path (str): Directory where note will be saved.
314 filename (str): Name of note. Defaults to "note.txt"
315 """
316 path = os.path.expanduser(path)
317 assert os.path.isdir(path), "{} is not a valid directory.".format(path)
318
319 filepath = os.path.join(path, filename)
320 exists = os.path.isfile(filepath)
321
322 try:
323 subprocess.call([EDITOR, filepath])
324 except Exception as exc:
325 logger.error("Editing note failed!")
326 raise exc
327 if exists:
328 print("Note updated at:", filepath)
329 else:
330 print("Note created at:", filepath)
331
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/ray/tune/commands.py b/python/ray/tune/commands.py
--- a/python/ray/tune/commands.py
+++ b/python/ray/tune/commands.py
@@ -194,16 +194,14 @@
print_format_output(checkpoints_df)
if output:
- experiment_path = os.path.expanduser(experiment_path)
- output_path = os.path.join(experiment_path, output)
file_extension = os.path.splitext(output)[1].lower()
if file_extension in (".p", ".pkl", ".pickle"):
- checkpoints_df.to_pickle(output_path)
+ checkpoints_df.to_pickle(output)
elif file_extension == ".csv":
- checkpoints_df.to_csv(output_path, index=False)
+ checkpoints_df.to_csv(output, index=False)
else:
raise ValueError("Unsupported filetype: {}".format(output))
- print("Output saved at:", output_path)
+ print("Output saved at:", output)
def list_experiments(project_path,
@@ -295,15 +293,14 @@
print_format_output(info_df)
if output:
- output_path = os.path.join(base, output)
file_extension = os.path.splitext(output)[1].lower()
if file_extension in (".p", ".pkl", ".pickle"):
- info_df.to_pickle(output_path)
+ info_df.to_pickle(output)
elif file_extension == ".csv":
- info_df.to_csv(output_path, index=False)
+ info_df.to_csv(output, index=False)
else:
raise ValueError("Unsupported filetype: {}".format(output))
- print("Output saved at:", output_path)
+ print("Output saved at:", output)
def add_note(path, filename="note.txt"):
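
The patch above makes `output` an ordinary path argument: the table is written wherever `output` points (relative to the current working directory when not absolute) instead of being joined onto the experiment or project directory. A rough usage sketch, assuming the helpers are imported directly from `ray.tune.commands` and that the optional `tabulate` dependency is installed; the result-directory paths are placeholders.

```python
# Illustrative calls against the patched helpers; "~/ray_results/..." is a
# placeholder results directory, and the output files land relative to the
# current working directory rather than inside the experiment folder.
from ray.tune.commands import list_experiments, list_trials

list_trials("~/ray_results/my_experiment", sort="trial_id", output="trials.csv")
list_experiments("~/ray_results", output="experiments.pkl")
```
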
| {"golden_diff": "diff --git a/python/ray/tune/commands.py b/python/ray/tune/commands.py\n--- a/python/ray/tune/commands.py\n+++ b/python/ray/tune/commands.py\n@@ -194,16 +194,14 @@\n print_format_output(checkpoints_df)\n \n if output:\n- experiment_path = os.path.expanduser(experiment_path)\n- output_path = os.path.join(experiment_path, output)\n file_extension = os.path.splitext(output)[1].lower()\n if file_extension in (\".p\", \".pkl\", \".pickle\"):\n- checkpoints_df.to_pickle(output_path)\n+ checkpoints_df.to_pickle(output)\n elif file_extension == \".csv\":\n- checkpoints_df.to_csv(output_path, index=False)\n+ checkpoints_df.to_csv(output, index=False)\n else:\n raise ValueError(\"Unsupported filetype: {}\".format(output))\n- print(\"Output saved at:\", output_path)\n+ print(\"Output saved at:\", output)\n \n \n def list_experiments(project_path,\n@@ -295,15 +293,14 @@\n print_format_output(info_df)\n \n if output:\n- output_path = os.path.join(base, output)\n file_extension = os.path.splitext(output)[1].lower()\n if file_extension in (\".p\", \".pkl\", \".pickle\"):\n- info_df.to_pickle(output_path)\n+ info_df.to_pickle(output)\n elif file_extension == \".csv\":\n- info_df.to_csv(output_path, index=False)\n+ info_df.to_csv(output, index=False)\n else:\n raise ValueError(\"Unsupported filetype: {}\".format(output))\n- print(\"Output saved at:\", output_path)\n+ print(\"Output saved at:\", output)\n \n \n def add_note(path, filename=\"note.txt\"):\n", "issue": "[tune] Add `--output` to the Tune docs\nWe should add --output to the docs.\r\n\r\n_Originally posted by @richardliaw in https://github.com/ray-project/ray/pull/4322#issuecomment-477903993_\r\n\r\ncc @andrewztan\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport glob\nimport json\nimport logging\nimport os\nimport sys\nimport subprocess\nimport operator\nfrom datetime import datetime\n\nimport pandas as pd\nfrom pandas.api.types import is_string_dtype, is_numeric_dtype\nfrom ray.tune.util import flatten_dict\nfrom ray.tune.result import TRAINING_ITERATION, MEAN_ACCURACY, MEAN_LOSS\nfrom ray.tune.trial import Trial\ntry:\n from tabulate import tabulate\nexcept ImportError:\n tabulate = None\n\nlogger = logging.getLogger(__name__)\n\nEDITOR = os.getenv(\"EDITOR\", \"vim\")\n\nTIMESTAMP_FORMAT = \"%Y-%m-%d %H:%M:%S (%A)\"\n\nDEFAULT_EXPERIMENT_INFO_KEYS = (\n \"trainable_name\",\n \"experiment_tag\",\n \"trial_id\",\n \"status\",\n \"last_update_time\",\n)\n\nDEFAULT_RESULT_KEYS = (TRAINING_ITERATION, MEAN_ACCURACY, MEAN_LOSS)\n\nDEFAULT_PROJECT_INFO_KEYS = (\n \"name\",\n \"total_trials\",\n \"running_trials\",\n \"terminated_trials\",\n \"error_trials\",\n \"last_updated\",\n)\n\ntry:\n TERM_HEIGHT, TERM_WIDTH = subprocess.check_output([\"stty\", \"size\"]).split()\n TERM_HEIGHT, TERM_WIDTH = int(TERM_HEIGHT), int(TERM_WIDTH)\nexcept subprocess.CalledProcessError:\n TERM_HEIGHT, TERM_WIDTH = 100, 100\n\nOPERATORS = {\n '<': operator.lt,\n '<=': operator.le,\n '==': operator.eq,\n '!=': operator.ne,\n '>=': operator.ge,\n '>': operator.gt,\n}\n\n\ndef _check_tabulate():\n \"\"\"Checks whether tabulate is installed.\"\"\"\n if tabulate is None:\n raise ImportError(\n \"Tabulate not installed. 
Please run `pip install tabulate`.\")\n\n\ndef print_format_output(dataframe):\n \"\"\"Prints output of given dataframe to fit into terminal.\n\n Returns:\n table (pd.DataFrame): Final outputted dataframe.\n dropped_cols (list): Columns dropped due to terminal size.\n empty_cols (list): Empty columns (dropped on default).\n \"\"\"\n print_df = pd.DataFrame()\n dropped_cols = []\n empty_cols = []\n # column display priority is based on the info_keys passed in\n for i, col in enumerate(dataframe):\n if dataframe[col].isnull().all():\n # Don't add col to print_df if is fully empty\n empty_cols += [col]\n continue\n\n print_df[col] = dataframe[col]\n test_table = tabulate(print_df, headers=\"keys\", tablefmt=\"psql\")\n if str(test_table).index('\\n') > TERM_WIDTH:\n # Drop all columns beyond terminal width\n print_df.drop(col, axis=1, inplace=True)\n dropped_cols += list(dataframe.columns)[i:]\n break\n\n table = tabulate(\n print_df, headers=\"keys\", tablefmt=\"psql\", showindex=\"never\")\n\n print(table)\n if dropped_cols:\n print(\"Dropped columns:\", dropped_cols)\n print(\"Please increase your terminal size to view remaining columns.\")\n if empty_cols:\n print(\"Empty columns:\", empty_cols)\n\n return table, dropped_cols, empty_cols\n\n\ndef _get_experiment_state(experiment_path, exit_on_fail=False):\n experiment_path = os.path.expanduser(experiment_path)\n experiment_state_paths = glob.glob(\n os.path.join(experiment_path, \"experiment_state*.json\"))\n if not experiment_state_paths:\n if exit_on_fail:\n print(\"No experiment state found!\")\n sys.exit(0)\n else:\n return\n experiment_filename = max(list(experiment_state_paths))\n\n with open(experiment_filename) as f:\n experiment_state = json.load(f)\n return experiment_state\n\n\ndef list_trials(experiment_path,\n sort=None,\n output=None,\n filter_op=None,\n info_keys=DEFAULT_EXPERIMENT_INFO_KEYS,\n result_keys=DEFAULT_RESULT_KEYS):\n \"\"\"Lists trials in the directory subtree starting at the given path.\n\n Args:\n experiment_path (str): Directory where trials are located.\n Corresponds to Experiment.local_dir/Experiment.name.\n sort (str): Key to sort by.\n output (str): Name of file where output is saved.\n filter_op (str): Filter operation in the format\n \"<column> <operator> <value>\".\n info_keys (list): Keys that are displayed.\n result_keys (list): Keys of last result that are displayed.\n \"\"\"\n _check_tabulate()\n experiment_state = _get_experiment_state(\n experiment_path, exit_on_fail=True)\n\n checkpoint_dicts = experiment_state[\"checkpoints\"]\n checkpoint_dicts = [flatten_dict(g) for g in checkpoint_dicts]\n checkpoints_df = pd.DataFrame(checkpoint_dicts)\n\n result_keys = [\"last_result:{}\".format(k) for k in result_keys]\n col_keys = [\n k for k in list(info_keys) + result_keys if k in checkpoints_df\n ]\n checkpoints_df = checkpoints_df[col_keys]\n\n if \"last_update_time\" in checkpoints_df:\n with pd.option_context(\"mode.use_inf_as_null\", True):\n datetime_series = checkpoints_df[\"last_update_time\"].dropna()\n\n datetime_series = datetime_series.apply(\n lambda t: datetime.fromtimestamp(t).strftime(TIMESTAMP_FORMAT))\n checkpoints_df[\"last_update_time\"] = datetime_series\n\n if \"logdir\" in checkpoints_df:\n # logdir often too verbose to view in table, so drop experiment_path\n checkpoints_df[\"logdir\"] = checkpoints_df[\"logdir\"].str.replace(\n experiment_path, '')\n\n if filter_op:\n col, op, val = filter_op.split(' ')\n col_type = checkpoints_df[col].dtype\n if is_numeric_dtype(col_type):\n 
val = float(val)\n elif is_string_dtype(col_type):\n val = str(val)\n # TODO(Andrew): add support for datetime and boolean\n else:\n raise ValueError(\"Unsupported dtype for '{}': {}\".format(\n val, col_type))\n op = OPERATORS[op]\n filtered_index = op(checkpoints_df[col], val)\n checkpoints_df = checkpoints_df[filtered_index]\n\n if sort:\n if sort not in checkpoints_df:\n raise KeyError(\"Sort Index '{}' not in: {}\".format(\n sort, list(checkpoints_df)))\n checkpoints_df = checkpoints_df.sort_values(by=sort)\n\n print_format_output(checkpoints_df)\n\n if output:\n experiment_path = os.path.expanduser(experiment_path)\n output_path = os.path.join(experiment_path, output)\n file_extension = os.path.splitext(output)[1].lower()\n if file_extension in (\".p\", \".pkl\", \".pickle\"):\n checkpoints_df.to_pickle(output_path)\n elif file_extension == \".csv\":\n checkpoints_df.to_csv(output_path, index=False)\n else:\n raise ValueError(\"Unsupported filetype: {}\".format(output))\n print(\"Output saved at:\", output_path)\n\n\ndef list_experiments(project_path,\n sort=None,\n output=None,\n filter_op=None,\n info_keys=DEFAULT_PROJECT_INFO_KEYS):\n \"\"\"Lists experiments in the directory subtree.\n\n Args:\n project_path (str): Directory where experiments are located.\n Corresponds to Experiment.local_dir.\n sort (str): Key to sort by.\n output (str): Name of file where output is saved.\n filter_op (str): Filter operation in the format\n \"<column> <operator> <value>\".\n info_keys (list): Keys that are displayed.\n \"\"\"\n _check_tabulate()\n base, experiment_folders, _ = next(os.walk(project_path))\n\n experiment_data_collection = []\n\n for experiment_dir in experiment_folders:\n experiment_state = _get_experiment_state(\n os.path.join(base, experiment_dir))\n if not experiment_state:\n logger.debug(\"No experiment state found in %s\", experiment_dir)\n continue\n\n checkpoints = pd.DataFrame(experiment_state[\"checkpoints\"])\n runner_data = experiment_state[\"runner_data\"]\n\n # Format time-based values.\n time_values = {\n \"start_time\": runner_data.get(\"_start_time\"),\n \"last_updated\": experiment_state.get(\"timestamp\"),\n }\n\n formatted_time_values = {\n key: datetime.fromtimestamp(val).strftime(TIMESTAMP_FORMAT)\n if val else None\n for key, val in time_values.items()\n }\n\n experiment_data = {\n \"name\": experiment_dir,\n \"total_trials\": checkpoints.shape[0],\n \"running_trials\": (checkpoints[\"status\"] == Trial.RUNNING).sum(),\n \"terminated_trials\": (\n checkpoints[\"status\"] == Trial.TERMINATED).sum(),\n \"error_trials\": (checkpoints[\"status\"] == Trial.ERROR).sum(),\n }\n experiment_data.update(formatted_time_values)\n experiment_data_collection.append(experiment_data)\n\n if not experiment_data_collection:\n print(\"No experiments found!\")\n sys.exit(0)\n\n info_df = pd.DataFrame(experiment_data_collection)\n col_keys = [k for k in list(info_keys) if k in info_df]\n if not col_keys:\n print(\"None of keys {} in experiment data!\".format(info_keys))\n sys.exit(0)\n info_df = info_df[col_keys]\n\n if filter_op:\n col, op, val = filter_op.split(' ')\n col_type = info_df[col].dtype\n if is_numeric_dtype(col_type):\n val = float(val)\n elif is_string_dtype(col_type):\n val = str(val)\n # TODO(Andrew): add support for datetime and boolean\n else:\n raise ValueError(\"Unsupported dtype for '{}': {}\".format(\n val, col_type))\n op = OPERATORS[op]\n filtered_index = op(info_df[col], val)\n info_df = info_df[filtered_index]\n\n if sort:\n if sort not in info_df:\n 
raise KeyError(\"Sort Index '{}' not in: {}\".format(\n sort, list(info_df)))\n info_df = info_df.sort_values(by=sort)\n\n print_format_output(info_df)\n\n if output:\n output_path = os.path.join(base, output)\n file_extension = os.path.splitext(output)[1].lower()\n if file_extension in (\".p\", \".pkl\", \".pickle\"):\n info_df.to_pickle(output_path)\n elif file_extension == \".csv\":\n info_df.to_csv(output_path, index=False)\n else:\n raise ValueError(\"Unsupported filetype: {}\".format(output))\n print(\"Output saved at:\", output_path)\n\n\ndef add_note(path, filename=\"note.txt\"):\n \"\"\"Opens a txt file at the given path where user can add and save notes.\n\n Args:\n path (str): Directory where note will be saved.\n filename (str): Name of note. Defaults to \"note.txt\"\n \"\"\"\n path = os.path.expanduser(path)\n assert os.path.isdir(path), \"{} is not a valid directory.\".format(path)\n\n filepath = os.path.join(path, filename)\n exists = os.path.isfile(filepath)\n\n try:\n subprocess.call([EDITOR, filepath])\n except Exception as exc:\n logger.error(\"Editing note failed!\")\n raise exc\n if exists:\n print(\"Note updated at:\", filepath)\n else:\n print(\"Note created at:\", filepath)\n", "path": "python/ray/tune/commands.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport glob\nimport json\nimport logging\nimport os\nimport sys\nimport subprocess\nimport operator\nfrom datetime import datetime\n\nimport pandas as pd\nfrom pandas.api.types import is_string_dtype, is_numeric_dtype\nfrom ray.tune.util import flatten_dict\nfrom ray.tune.result import TRAINING_ITERATION, MEAN_ACCURACY, MEAN_LOSS\nfrom ray.tune.trial import Trial\ntry:\n from tabulate import tabulate\nexcept ImportError:\n tabulate = None\n\nlogger = logging.getLogger(__name__)\n\nEDITOR = os.getenv(\"EDITOR\", \"vim\")\n\nTIMESTAMP_FORMAT = \"%Y-%m-%d %H:%M:%S (%A)\"\n\nDEFAULT_EXPERIMENT_INFO_KEYS = (\n \"trainable_name\",\n \"experiment_tag\",\n \"trial_id\",\n \"status\",\n \"last_update_time\",\n)\n\nDEFAULT_RESULT_KEYS = (TRAINING_ITERATION, MEAN_ACCURACY, MEAN_LOSS)\n\nDEFAULT_PROJECT_INFO_KEYS = (\n \"name\",\n \"total_trials\",\n \"running_trials\",\n \"terminated_trials\",\n \"error_trials\",\n \"last_updated\",\n)\n\ntry:\n TERM_HEIGHT, TERM_WIDTH = subprocess.check_output([\"stty\", \"size\"]).split()\n TERM_HEIGHT, TERM_WIDTH = int(TERM_HEIGHT), int(TERM_WIDTH)\nexcept subprocess.CalledProcessError:\n TERM_HEIGHT, TERM_WIDTH = 100, 100\n\nOPERATORS = {\n '<': operator.lt,\n '<=': operator.le,\n '==': operator.eq,\n '!=': operator.ne,\n '>=': operator.ge,\n '>': operator.gt,\n}\n\n\ndef _check_tabulate():\n \"\"\"Checks whether tabulate is installed.\"\"\"\n if tabulate is None:\n raise ImportError(\n \"Tabulate not installed. 
Please run `pip install tabulate`.\")\n\n\ndef print_format_output(dataframe):\n \"\"\"Prints output of given dataframe to fit into terminal.\n\n Returns:\n table (pd.DataFrame): Final outputted dataframe.\n dropped_cols (list): Columns dropped due to terminal size.\n empty_cols (list): Empty columns (dropped on default).\n \"\"\"\n print_df = pd.DataFrame()\n dropped_cols = []\n empty_cols = []\n # column display priority is based on the info_keys passed in\n for i, col in enumerate(dataframe):\n if dataframe[col].isnull().all():\n # Don't add col to print_df if is fully empty\n empty_cols += [col]\n continue\n\n print_df[col] = dataframe[col]\n test_table = tabulate(print_df, headers=\"keys\", tablefmt=\"psql\")\n if str(test_table).index('\\n') > TERM_WIDTH:\n # Drop all columns beyond terminal width\n print_df.drop(col, axis=1, inplace=True)\n dropped_cols += list(dataframe.columns)[i:]\n break\n\n table = tabulate(\n print_df, headers=\"keys\", tablefmt=\"psql\", showindex=\"never\")\n\n print(table)\n if dropped_cols:\n print(\"Dropped columns:\", dropped_cols)\n print(\"Please increase your terminal size to view remaining columns.\")\n if empty_cols:\n print(\"Empty columns:\", empty_cols)\n\n return table, dropped_cols, empty_cols\n\n\ndef _get_experiment_state(experiment_path, exit_on_fail=False):\n experiment_path = os.path.expanduser(experiment_path)\n experiment_state_paths = glob.glob(\n os.path.join(experiment_path, \"experiment_state*.json\"))\n if not experiment_state_paths:\n if exit_on_fail:\n print(\"No experiment state found!\")\n sys.exit(0)\n else:\n return\n experiment_filename = max(list(experiment_state_paths))\n\n with open(experiment_filename) as f:\n experiment_state = json.load(f)\n return experiment_state\n\n\ndef list_trials(experiment_path,\n sort=None,\n output=None,\n filter_op=None,\n info_keys=DEFAULT_EXPERIMENT_INFO_KEYS,\n result_keys=DEFAULT_RESULT_KEYS):\n \"\"\"Lists trials in the directory subtree starting at the given path.\n\n Args:\n experiment_path (str): Directory where trials are located.\n Corresponds to Experiment.local_dir/Experiment.name.\n sort (str): Key to sort by.\n output (str): Name of file where output is saved.\n filter_op (str): Filter operation in the format\n \"<column> <operator> <value>\".\n info_keys (list): Keys that are displayed.\n result_keys (list): Keys of last result that are displayed.\n \"\"\"\n _check_tabulate()\n experiment_state = _get_experiment_state(\n experiment_path, exit_on_fail=True)\n\n checkpoint_dicts = experiment_state[\"checkpoints\"]\n checkpoint_dicts = [flatten_dict(g) for g in checkpoint_dicts]\n checkpoints_df = pd.DataFrame(checkpoint_dicts)\n\n result_keys = [\"last_result:{}\".format(k) for k in result_keys]\n col_keys = [\n k for k in list(info_keys) + result_keys if k in checkpoints_df\n ]\n checkpoints_df = checkpoints_df[col_keys]\n\n if \"last_update_time\" in checkpoints_df:\n with pd.option_context(\"mode.use_inf_as_null\", True):\n datetime_series = checkpoints_df[\"last_update_time\"].dropna()\n\n datetime_series = datetime_series.apply(\n lambda t: datetime.fromtimestamp(t).strftime(TIMESTAMP_FORMAT))\n checkpoints_df[\"last_update_time\"] = datetime_series\n\n if \"logdir\" in checkpoints_df:\n # logdir often too verbose to view in table, so drop experiment_path\n checkpoints_df[\"logdir\"] = checkpoints_df[\"logdir\"].str.replace(\n experiment_path, '')\n\n if filter_op:\n col, op, val = filter_op.split(' ')\n col_type = checkpoints_df[col].dtype\n if is_numeric_dtype(col_type):\n 
val = float(val)\n elif is_string_dtype(col_type):\n val = str(val)\n # TODO(Andrew): add support for datetime and boolean\n else:\n raise ValueError(\"Unsupported dtype for '{}': {}\".format(\n val, col_type))\n op = OPERATORS[op]\n filtered_index = op(checkpoints_df[col], val)\n checkpoints_df = checkpoints_df[filtered_index]\n\n if sort:\n if sort not in checkpoints_df:\n raise KeyError(\"Sort Index '{}' not in: {}\".format(\n sort, list(checkpoints_df)))\n checkpoints_df = checkpoints_df.sort_values(by=sort)\n\n print_format_output(checkpoints_df)\n\n if output:\n file_extension = os.path.splitext(output)[1].lower()\n if file_extension in (\".p\", \".pkl\", \".pickle\"):\n checkpoints_df.to_pickle(output)\n elif file_extension == \".csv\":\n checkpoints_df.to_csv(output, index=False)\n else:\n raise ValueError(\"Unsupported filetype: {}\".format(output))\n print(\"Output saved at:\", output)\n\n\ndef list_experiments(project_path,\n sort=None,\n output=None,\n filter_op=None,\n info_keys=DEFAULT_PROJECT_INFO_KEYS):\n \"\"\"Lists experiments in the directory subtree.\n\n Args:\n project_path (str): Directory where experiments are located.\n Corresponds to Experiment.local_dir.\n sort (str): Key to sort by.\n output (str): Name of file where output is saved.\n filter_op (str): Filter operation in the format\n \"<column> <operator> <value>\".\n info_keys (list): Keys that are displayed.\n \"\"\"\n _check_tabulate()\n base, experiment_folders, _ = next(os.walk(project_path))\n\n experiment_data_collection = []\n\n for experiment_dir in experiment_folders:\n experiment_state = _get_experiment_state(\n os.path.join(base, experiment_dir))\n if not experiment_state:\n logger.debug(\"No experiment state found in %s\", experiment_dir)\n continue\n\n checkpoints = pd.DataFrame(experiment_state[\"checkpoints\"])\n runner_data = experiment_state[\"runner_data\"]\n\n # Format time-based values.\n time_values = {\n \"start_time\": runner_data.get(\"_start_time\"),\n \"last_updated\": experiment_state.get(\"timestamp\"),\n }\n\n formatted_time_values = {\n key: datetime.fromtimestamp(val).strftime(TIMESTAMP_FORMAT)\n if val else None\n for key, val in time_values.items()\n }\n\n experiment_data = {\n \"name\": experiment_dir,\n \"total_trials\": checkpoints.shape[0],\n \"running_trials\": (checkpoints[\"status\"] == Trial.RUNNING).sum(),\n \"terminated_trials\": (\n checkpoints[\"status\"] == Trial.TERMINATED).sum(),\n \"error_trials\": (checkpoints[\"status\"] == Trial.ERROR).sum(),\n }\n experiment_data.update(formatted_time_values)\n experiment_data_collection.append(experiment_data)\n\n if not experiment_data_collection:\n print(\"No experiments found!\")\n sys.exit(0)\n\n info_df = pd.DataFrame(experiment_data_collection)\n col_keys = [k for k in list(info_keys) if k in info_df]\n if not col_keys:\n print(\"None of keys {} in experiment data!\".format(info_keys))\n sys.exit(0)\n info_df = info_df[col_keys]\n\n if filter_op:\n col, op, val = filter_op.split(' ')\n col_type = info_df[col].dtype\n if is_numeric_dtype(col_type):\n val = float(val)\n elif is_string_dtype(col_type):\n val = str(val)\n # TODO(Andrew): add support for datetime and boolean\n else:\n raise ValueError(\"Unsupported dtype for '{}': {}\".format(\n val, col_type))\n op = OPERATORS[op]\n filtered_index = op(info_df[col], val)\n info_df = info_df[filtered_index]\n\n if sort:\n if sort not in info_df:\n raise KeyError(\"Sort Index '{}' not in: {}\".format(\n sort, list(info_df)))\n info_df = info_df.sort_values(by=sort)\n\n 
print_format_output(info_df)\n\n if output:\n file_extension = os.path.splitext(output)[1].lower()\n if file_extension in (\".p\", \".pkl\", \".pickle\"):\n info_df.to_pickle(output)\n elif file_extension == \".csv\":\n info_df.to_csv(output, index=False)\n else:\n raise ValueError(\"Unsupported filetype: {}\".format(output))\n print(\"Output saved at:\", output)\n\n\ndef add_note(path, filename=\"note.txt\"):\n \"\"\"Opens a txt file at the given path where user can add and save notes.\n\n Args:\n path (str): Directory where note will be saved.\n filename (str): Name of note. Defaults to \"note.txt\"\n \"\"\"\n path = os.path.expanduser(path)\n assert os.path.isdir(path), \"{} is not a valid directory.\".format(path)\n\n filepath = os.path.join(path, filename)\n exists = os.path.isfile(filepath)\n\n try:\n subprocess.call([EDITOR, filepath])\n except Exception as exc:\n logger.error(\"Editing note failed!\")\n raise exc\n if exists:\n print(\"Note updated at:\", filepath)\n else:\n print(\"Note created at:\", filepath)\n", "path": "python/ray/tune/commands.py"}]} | 3,646 | 383 |
gh_patches_debug_27270 | rasdani/github-patches | git_diff | PaddlePaddle__PaddleNLP-2090 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ann_utils raises an error when output_emb_size = 0
Thank you for reporting a PaddleNLP usage problem, and thank you for your contribution to PaddleNLP!
When filing your question, please also provide the following information:
- Version and environment information
1) PaddleNLP and PaddlePaddle versions: please give your PaddleNLP and PaddlePaddle version numbers, e.g. PaddleNLP 2.0.4, PaddlePaddle 2.1.1
2) System environment: please describe the system type (e.g. Linux/Windows/MacOS) and the Python version
- Reproduction information: if this is an error report, please give the environment and the steps needed to reproduce it
--- END ISSUE ---
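For context, the failure described above can be sketched as follows. This is an illustrative example only; the 768 fallback width and the exact hnswlib behaviour are assumptions based on the scripts below, not part of the original report:
```python
# Hypothetical sketch: with output_emb_size = 0 the scripts below would build an
# hnswlib index with dim=0, while the encoder still emits 768-wide vectors, so
# inserting the embeddings fails. Falling back to the encoder width avoids that.
import numpy as np
import hnswlib

output_emb_size = 0                                     # value that triggers the report
dim = output_emb_size if output_emb_size > 0 else 768   # assumed fallback to encoder width
index = hnswlib.Index(space="ip", dim=dim)
index.init_index(max_elements=1000, ef_construction=100, M=16)
index.add_items(np.random.rand(10, 768).astype("float32"))  # widths now match
```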
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `applications/neural_search/recall/in_batch_negative/ann_util.py`
Content:
```
1 # Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # coding=UTF-8
16
17 import numpy as np
18 import hnswlib
19 from paddlenlp.utils.log import logger
20
21
22 def build_index(args, data_loader, model):
23
24 index = hnswlib.Index(space='ip', dim=args.output_emb_size)
25
26 # Initializing index
27 # max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded
28 # during insertion of an element.
29 # The capacity can be increased by saving/loading the index, see below.
30 #
31 # ef_construction - controls index search speed/build speed tradeoff
32 #
33 # M - is tightly connected with internal dimensionality of the data. Strongly affects memory consumption (~M)
34 # Higher M leads to higher accuracy/run_time at fixed ef/efConstruction
35 index.init_index(
36 max_elements=args.hnsw_max_elements,
37 ef_construction=args.hnsw_ef,
38 M=args.hnsw_m)
39
40 # Controlling the recall by setting ef:
41 # higher ef leads to better accuracy, but slower search
42 index.set_ef(args.hnsw_ef)
43
44 # Set number of threads used during batch search/construction
45 # By default using all available cores
46 index.set_num_threads(16)
47
48 logger.info("start build index..........")
49
50 all_embeddings = []
51
52 for text_embeddings in model.get_semantic_embedding(data_loader):
53 all_embeddings.append(text_embeddings.numpy())
54
55 all_embeddings = np.concatenate(all_embeddings, axis=0)
56 index.add_items(all_embeddings)
57
58 logger.info("Total index number:{}".format(index.get_current_count()))
59
60 return index
61
```
Path: `applications/question_answering/faq_system/ann_util.py`
Content:
```
1 # Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import numpy as np
16 import hnswlib
17 from paddlenlp.utils.log import logger
18
19
20 def build_index(args, data_loader, model):
21
22 index = hnswlib.Index(space='ip', dim=args.output_emb_size)
23
24 # Initializing index
25 # max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded
26 # during insertion of an element.
27 # The capacity can be increased by saving/loading the index, see below.
28 #
29 # ef_construction - controls index search speed/build speed tradeoff
30 #
31 # M - is tightly connected with internal dimensionality of the data. Strongly affects memory consumption (~M)
32 # Higher M leads to higher accuracy/run_time at fixed ef/efConstruction
33 index.init_index(
34 max_elements=args.hnsw_max_elements,
35 ef_construction=args.hnsw_ef,
36 M=args.hnsw_m)
37
38 # Controlling the recall by setting ef:
39 # higher ef leads to better accuracy, but slower search
40 index.set_ef(args.hnsw_ef)
41
42 # Set number of threads used during batch search/construction
43 # By default using all available cores
44 index.set_num_threads(16)
45
46 logger.info("start build index..........")
47
48 all_embeddings = []
49
50 for text_embeddings in model.get_semantic_embedding(data_loader):
51 all_embeddings.append(text_embeddings.numpy())
52
53 all_embeddings = np.concatenate(all_embeddings, axis=0)
54 index.add_items(all_embeddings)
55
56 logger.info("Total index number:{}".format(index.get_current_count()))
57
58 return index
59
```
Path: `applications/question_answering/faq_finance/ann_util.py`
Content:
```
1 # Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import numpy as np
16 import hnswlib
17 from paddlenlp.utils.log import logger
18
19
20 def build_index(args, data_loader, model):
21
22 index = hnswlib.Index(space='ip', dim=args.output_emb_size)
23
24 # Initializing index
25 # max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded
26 # during insertion of an element.
27 # The capacity can be increased by saving/loading the index, see below.
28 #
29 # ef_construction - controls index search speed/build speed tradeoff
30 #
31 # M - is tightly connected with internal dimensionality of the data. Strongly affects memory consumption (~M)
32 # Higher M leads to higher accuracy/run_time at fixed ef/efConstruction
33 index.init_index(
34 max_elements=args.hnsw_max_elements,
35 ef_construction=args.hnsw_ef,
36 M=args.hnsw_m)
37
38 # Controlling the recall by setting ef:
39 # higher ef leads to better accuracy, but slower search
40 index.set_ef(args.hnsw_ef)
41
42 # Set number of threads used during batch search/construction
43 # By default using all available cores
44 index.set_num_threads(16)
45
46 logger.info("start build index..........")
47
48 all_embeddings = []
49
50 for text_embeddings in model.get_semantic_embedding(data_loader):
51 all_embeddings.append(text_embeddings.numpy())
52
53 all_embeddings = np.concatenate(all_embeddings, axis=0)
54 index.add_items(all_embeddings)
55
56 logger.info("Total index number:{}".format(index.get_current_count()))
57
58 return index
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/applications/neural_search/recall/in_batch_negative/ann_util.py b/applications/neural_search/recall/in_batch_negative/ann_util.py
--- a/applications/neural_search/recall/in_batch_negative/ann_util.py
+++ b/applications/neural_search/recall/in_batch_negative/ann_util.py
@@ -21,7 +21,9 @@
def build_index(args, data_loader, model):
- index = hnswlib.Index(space='ip', dim=args.output_emb_size)
+ index = hnswlib.Index(
+ space='ip',
+ dim=args.output_emb_size if args.output_emb_size > 0 else 768)
# Initializing index
# max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded
diff --git a/applications/question_answering/faq_finance/ann_util.py b/applications/question_answering/faq_finance/ann_util.py
--- a/applications/question_answering/faq_finance/ann_util.py
+++ b/applications/question_answering/faq_finance/ann_util.py
@@ -19,7 +19,9 @@
def build_index(args, data_loader, model):
- index = hnswlib.Index(space='ip', dim=args.output_emb_size)
+ index = hnswlib.Index(
+ space='ip',
+ dim=args.output_emb_size if args.output_emb_size > 0 else 768)
# Initializing index
# max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded
diff --git a/applications/question_answering/faq_system/ann_util.py b/applications/question_answering/faq_system/ann_util.py
--- a/applications/question_answering/faq_system/ann_util.py
+++ b/applications/question_answering/faq_system/ann_util.py
@@ -19,7 +19,9 @@
def build_index(args, data_loader, model):
- index = hnswlib.Index(space='ip', dim=args.output_emb_size)
+ index = hnswlib.Index(
+ space='ip',
+ dim=args.output_emb_size if args.output_emb_size > 0 else 768)
# Initializing index
# max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded
| {"golden_diff": "diff --git a/applications/neural_search/recall/in_batch_negative/ann_util.py b/applications/neural_search/recall/in_batch_negative/ann_util.py\n--- a/applications/neural_search/recall/in_batch_negative/ann_util.py\n+++ b/applications/neural_search/recall/in_batch_negative/ann_util.py\n@@ -21,7 +21,9 @@\n \n def build_index(args, data_loader, model):\n \n- index = hnswlib.Index(space='ip', dim=args.output_emb_size)\n+ index = hnswlib.Index(\n+ space='ip',\n+ dim=args.output_emb_size if args.output_emb_size > 0 else 768)\n \n # Initializing index\n # max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded\ndiff --git a/applications/question_answering/faq_finance/ann_util.py b/applications/question_answering/faq_finance/ann_util.py\n--- a/applications/question_answering/faq_finance/ann_util.py\n+++ b/applications/question_answering/faq_finance/ann_util.py\n@@ -19,7 +19,9 @@\n \n def build_index(args, data_loader, model):\n \n- index = hnswlib.Index(space='ip', dim=args.output_emb_size)\n+ index = hnswlib.Index(\n+ space='ip',\n+ dim=args.output_emb_size if args.output_emb_size > 0 else 768)\n \n # Initializing index\n # max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded\ndiff --git a/applications/question_answering/faq_system/ann_util.py b/applications/question_answering/faq_system/ann_util.py\n--- a/applications/question_answering/faq_system/ann_util.py\n+++ b/applications/question_answering/faq_system/ann_util.py\n@@ -19,7 +19,9 @@\n \n def build_index(args, data_loader, model):\n \n- index = hnswlib.Index(space='ip', dim=args.output_emb_size)\n+ index = hnswlib.Index(\n+ space='ip',\n+ dim=args.output_emb_size if args.output_emb_size > 0 else 768)\n \n # Initializing index\n # max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded\n", "issue": "ann_utils \u5f53 output_emb_size = 0 \u65f6\u62a5\u9519\n\u6b22\u8fce\u60a8\u53cd\u9988PaddleNLP\u4f7f\u7528\u95ee\u9898\uff0c\u975e\u5e38\u611f\u8c22\u60a8\u5bf9PaddleNLP\u7684\u8d21\u732e\uff01\r\n\u5728\u7559\u4e0b\u60a8\u7684\u95ee\u9898\u65f6\uff0c\u8f9b\u82e6\u60a8\u540c\u6b65\u63d0\u4f9b\u5982\u4e0b\u4fe1\u606f\uff1a\r\n- \u7248\u672c\u3001\u73af\u5883\u4fe1\u606f\r\n1\uff09PaddleNLP\u548cPaddlePaddle\u7248\u672c\uff1a\u8bf7\u63d0\u4f9b\u60a8\u7684PaddleNLP\u548cPaddlePaddle\u7248\u672c\u53f7\uff0c\u4f8b\u5982PaddleNLP 2.0.4\uff0cPaddlePaddle2.1.1\r\n2\uff09\u7cfb\u7edf\u73af\u5883\uff1a\u8bf7\u60a8\u63cf\u8ff0\u7cfb\u7edf\u7c7b\u578b\uff0c\u4f8b\u5982Linux/Windows/MacOS/\uff0cpython\u7248\u672c\r\n- \u590d\u73b0\u4fe1\u606f\uff1a\u5982\u4e3a\u62a5\u9519\uff0c\u8bf7\u7ed9\u51fa\u590d\u73b0\u73af\u5883\u3001\u590d\u73b0\u6b65\u9aa4\r\n\n", "before_files": [{"content": "# Copyright (c) 2021 PaddlePaddle Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# coding=UTF-8\n\nimport numpy as np\nimport hnswlib\nfrom paddlenlp.utils.log import logger\n\n\ndef build_index(args, data_loader, model):\n\n index = hnswlib.Index(space='ip', dim=args.output_emb_size)\n\n # Initializing index\n # max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded\n # during insertion of an element.\n # The capacity can be increased by saving/loading the index, see below.\n #\n # ef_construction - controls index search speed/build speed tradeoff\n #\n # M - is tightly connected with internal dimensionality of the data. Strongly affects memory consumption (~M)\n # Higher M leads to higher accuracy/run_time at fixed ef/efConstruction\n index.init_index(\n max_elements=args.hnsw_max_elements,\n ef_construction=args.hnsw_ef,\n M=args.hnsw_m)\n\n # Controlling the recall by setting ef:\n # higher ef leads to better accuracy, but slower search\n index.set_ef(args.hnsw_ef)\n\n # Set number of threads used during batch search/construction\n # By default using all available cores\n index.set_num_threads(16)\n\n logger.info(\"start build index..........\")\n\n all_embeddings = []\n\n for text_embeddings in model.get_semantic_embedding(data_loader):\n all_embeddings.append(text_embeddings.numpy())\n\n all_embeddings = np.concatenate(all_embeddings, axis=0)\n index.add_items(all_embeddings)\n\n logger.info(\"Total index number:{}\".format(index.get_current_count()))\n\n return index\n", "path": "applications/neural_search/recall/in_batch_negative/ann_util.py"}, {"content": "# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\nimport hnswlib\nfrom paddlenlp.utils.log import logger\n\n\ndef build_index(args, data_loader, model):\n\n index = hnswlib.Index(space='ip', dim=args.output_emb_size)\n\n # Initializing index\n # max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded\n # during insertion of an element.\n # The capacity can be increased by saving/loading the index, see below.\n #\n # ef_construction - controls index search speed/build speed tradeoff\n #\n # M - is tightly connected with internal dimensionality of the data. 
Strongly affects memory consumption (~M)\n # Higher M leads to higher accuracy/run_time at fixed ef/efConstruction\n index.init_index(\n max_elements=args.hnsw_max_elements,\n ef_construction=args.hnsw_ef,\n M=args.hnsw_m)\n\n # Controlling the recall by setting ef:\n # higher ef leads to better accuracy, but slower search\n index.set_ef(args.hnsw_ef)\n\n # Set number of threads used during batch search/construction\n # By default using all available cores\n index.set_num_threads(16)\n\n logger.info(\"start build index..........\")\n\n all_embeddings = []\n\n for text_embeddings in model.get_semantic_embedding(data_loader):\n all_embeddings.append(text_embeddings.numpy())\n\n all_embeddings = np.concatenate(all_embeddings, axis=0)\n index.add_items(all_embeddings)\n\n logger.info(\"Total index number:{}\".format(index.get_current_count()))\n\n return index\n", "path": "applications/question_answering/faq_system/ann_util.py"}, {"content": "# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\nimport hnswlib\nfrom paddlenlp.utils.log import logger\n\n\ndef build_index(args, data_loader, model):\n\n index = hnswlib.Index(space='ip', dim=args.output_emb_size)\n\n # Initializing index\n # max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded\n # during insertion of an element.\n # The capacity can be increased by saving/loading the index, see below.\n #\n # ef_construction - controls index search speed/build speed tradeoff\n #\n # M - is tightly connected with internal dimensionality of the data. Strongly affects memory consumption (~M)\n # Higher M leads to higher accuracy/run_time at fixed ef/efConstruction\n index.init_index(\n max_elements=args.hnsw_max_elements,\n ef_construction=args.hnsw_ef,\n M=args.hnsw_m)\n\n # Controlling the recall by setting ef:\n # higher ef leads to better accuracy, but slower search\n index.set_ef(args.hnsw_ef)\n\n # Set number of threads used during batch search/construction\n # By default using all available cores\n index.set_num_threads(16)\n\n logger.info(\"start build index..........\")\n\n all_embeddings = []\n\n for text_embeddings in model.get_semantic_embedding(data_loader):\n all_embeddings.append(text_embeddings.numpy())\n\n all_embeddings = np.concatenate(all_embeddings, axis=0)\n index.add_items(all_embeddings)\n\n logger.info(\"Total index number:{}\".format(index.get_current_count()))\n\n return index\n", "path": "applications/question_answering/faq_finance/ann_util.py"}], "after_files": [{"content": "# Copyright (c) 2021 PaddlePaddle Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# coding=UTF-8\n\nimport numpy as np\nimport hnswlib\nfrom paddlenlp.utils.log import logger\n\n\ndef build_index(args, data_loader, model):\n\n index = hnswlib.Index(\n space='ip',\n dim=args.output_emb_size if args.output_emb_size > 0 else 768)\n\n # Initializing index\n # max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded\n # during insertion of an element.\n # The capacity can be increased by saving/loading the index, see below.\n #\n # ef_construction - controls index search speed/build speed tradeoff\n #\n # M - is tightly connected with internal dimensionality of the data. Strongly affects memory consumption (~M)\n # Higher M leads to higher accuracy/run_time at fixed ef/efConstruction\n index.init_index(\n max_elements=args.hnsw_max_elements,\n ef_construction=args.hnsw_ef,\n M=args.hnsw_m)\n\n # Controlling the recall by setting ef:\n # higher ef leads to better accuracy, but slower search\n index.set_ef(args.hnsw_ef)\n\n # Set number of threads used during batch search/construction\n # By default using all available cores\n index.set_num_threads(16)\n\n logger.info(\"start build index..........\")\n\n all_embeddings = []\n\n for text_embeddings in model.get_semantic_embedding(data_loader):\n all_embeddings.append(text_embeddings.numpy())\n\n all_embeddings = np.concatenate(all_embeddings, axis=0)\n index.add_items(all_embeddings)\n\n logger.info(\"Total index number:{}\".format(index.get_current_count()))\n\n return index\n", "path": "applications/neural_search/recall/in_batch_negative/ann_util.py"}, {"content": "# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\nimport hnswlib\nfrom paddlenlp.utils.log import logger\n\n\ndef build_index(args, data_loader, model):\n\n index = hnswlib.Index(\n space='ip',\n dim=args.output_emb_size if args.output_emb_size > 0 else 768)\n\n # Initializing index\n # max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded\n # during insertion of an element.\n # The capacity can be increased by saving/loading the index, see below.\n #\n # ef_construction - controls index search speed/build speed tradeoff\n #\n # M - is tightly connected with internal dimensionality of the data. 
Strongly affects memory consumption (~M)\n # Higher M leads to higher accuracy/run_time at fixed ef/efConstruction\n index.init_index(\n max_elements=args.hnsw_max_elements,\n ef_construction=args.hnsw_ef,\n M=args.hnsw_m)\n\n # Controlling the recall by setting ef:\n # higher ef leads to better accuracy, but slower search\n index.set_ef(args.hnsw_ef)\n\n # Set number of threads used during batch search/construction\n # By default using all available cores\n index.set_num_threads(16)\n\n logger.info(\"start build index..........\")\n\n all_embeddings = []\n\n for text_embeddings in model.get_semantic_embedding(data_loader):\n all_embeddings.append(text_embeddings.numpy())\n\n all_embeddings = np.concatenate(all_embeddings, axis=0)\n index.add_items(all_embeddings)\n\n logger.info(\"Total index number:{}\".format(index.get_current_count()))\n\n return index\n", "path": "applications/question_answering/faq_system/ann_util.py"}, {"content": "# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\nimport hnswlib\nfrom paddlenlp.utils.log import logger\n\n\ndef build_index(args, data_loader, model):\n\n index = hnswlib.Index(\n space='ip',\n dim=args.output_emb_size if args.output_emb_size > 0 else 768)\n\n # Initializing index\n # max_elements - the maximum number of elements (capacity). Will throw an exception if exceeded\n # during insertion of an element.\n # The capacity can be increased by saving/loading the index, see below.\n #\n # ef_construction - controls index search speed/build speed tradeoff\n #\n # M - is tightly connected with internal dimensionality of the data. Strongly affects memory consumption (~M)\n # Higher M leads to higher accuracy/run_time at fixed ef/efConstruction\n index.init_index(\n max_elements=args.hnsw_max_elements,\n ef_construction=args.hnsw_ef,\n M=args.hnsw_m)\n\n # Controlling the recall by setting ef:\n # higher ef leads to better accuracy, but slower search\n index.set_ef(args.hnsw_ef)\n\n # Set number of threads used during batch search/construction\n # By default using all available cores\n index.set_num_threads(16)\n\n logger.info(\"start build index..........\")\n\n all_embeddings = []\n\n for text_embeddings in model.get_semantic_embedding(data_loader):\n all_embeddings.append(text_embeddings.numpy())\n\n all_embeddings = np.concatenate(all_embeddings, axis=0)\n index.add_items(all_embeddings)\n\n logger.info(\"Total index number:{}\".format(index.get_current_count()))\n\n return index\n", "path": "applications/question_answering/faq_finance/ann_util.py"}]} | 2,235 | 512 |
gh_patches_debug_1577 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1690 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
KeyError: 'db' for DroppedSpan when running scan query on Elasticsearch
**Describe the bug**: ...
Elastic APM fails with a `KeyError` because the `db` key is missing from the span context.
The application where this happens is a Django project that stores/reads data from Elasticsearch. I have APM enabled (the APM server and the Elasticsearch cluster are both running on Elastic Cloud). The library fails with the aforementioned error (shown in the snippet and screenshots below) while running a scan query on Elasticsearch. It looks like it's dropping some spans, which ends up in this code path:
```
hits = self._get_hits(result_data)
if hits:
span.context["db"]["rows_affected"] = hits
return result_data
```
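The assignment above assumes a `db` entry was already placed on the span's context earlier in the request; if it was not (for example because the related span was dropped), indexing `span.context["db"]` raises the `KeyError`. A minimal, self-contained sketch of that failure mode and a defensive alternative (illustrative only, not the agent's actual code or fix):
```python
# Standalone illustration of the failure (hypothetical, not agent code).
span_context = {}   # no "db" entry was ever created for this span
hits = 42

try:
    span_context["db"]["rows_affected"] = hits            # raises KeyError: 'db'
except KeyError:
    # defensive variant: create the nested dict on demand
    span_context.setdefault("db", {})["rows_affected"] = hits

print(span_context)  # {'db': {'rows_affected': 42}}
```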
Here's a screenshot of what I see on the APM Error page:

A few variables from the context:

**To Reproduce**
Unfortunately, I don't have a reproducible snippet.
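Since no snippet is available, a hypothetical way to provoke dropped spans during a scan is sketched below. This is an editor's sketch under stated assumptions (a local Elasticsearch at `http://localhost:9200`, an existing index, and a deliberately tiny `transaction_max_spans`); it may or may not hit this exact code path:
```python
# Hypothetical reproduction sketch (not from the original reporter).
import elasticapm
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

client = elasticapm.Client(service_name="repro", transaction_max_spans=1)
elasticapm.instrument()                      # patch the elasticsearch client

es = Elasticsearch("http://localhost:9200")  # assumed local cluster
client.begin_transaction("script")
for _ in scan(es, index="some-index", query={"query": {"match_all": {}}}):
    pass                                      # every page adds more spans
client.end_transaction("scan-repro", "success")
```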
**Environment (please complete the following information)**
- OS: Linux (containerized)
- Python version: 3.9.15
- Framework and version [e.g. Django 2.1]:
- APM Server version: 7.17.4
- Agent version:
```
$ pip freeze | grep elastic
django-elasticsearch-dsl==7.2.2
elastic-apm==6.13.1
elasticsearch==7.17.4
elasticsearch-dsl==7.4.0
```
**Additional context**
Add any other context about the problem here.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/instrumentation/packages/elasticsearch.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from __future__ import absolute_import
32
33 import re
34 from typing import Optional
35 from urllib.parse import parse_qs, urlparse
36
37 import elasticapm
38 from elasticapm.instrumentation.packages.base import AbstractInstrumentedModule
39 from elasticapm.traces import DroppedSpan, execution_context
40 from elasticapm.utils.logging import get_logger
41
42 logger = get_logger("elasticapm.instrument")
43
44 should_capture_body_re = re.compile("/(_search|_msearch|_count|_async_search|_sql|_eql)(/|$)")
45
46
47 class ElasticsearchConnectionInstrumentation(AbstractInstrumentedModule):
48 name = "elasticsearch_connection"
49
50 def get_instrument_list(self):
51 try:
52 import elastic_transport # noqa: F401
53
54 return [
55 ("elastic_transport._node._http_urllib3", "Urllib3HttpNode.perform_request"),
56 ("elastic_transport._node._http_requests", "RequestsHttpNode.perform_request"),
57 ]
58 except ImportError:
59 return [
60 ("elasticsearch.connection.http_urllib3", "Urllib3HttpConnection.perform_request"),
61 ("elasticsearch.connection.http_requests", "RequestsHttpConnection.perform_request"),
62 ]
63
64 def call(self, module, method, wrapped, instance, args, kwargs):
65 span = execution_context.get_span()
66 if not span or isinstance(span, DroppedSpan):
67 return wrapped(*args, **kwargs)
68
69 self._update_context_by_request_data(span.context, instance, args, kwargs)
70
71 result = wrapped(*args, **kwargs)
72 if hasattr(result, "meta"): # elasticsearch-py 8.x+
73 status_code = result.meta.status
74 cluster = result.meta.headers.get("x-found-handling-cluster")
75 else:
76 status_code = result[0]
77 cluster = result[1].get("x-found-handling-cluster")
78 span.context["http"] = {"status_code": status_code}
79 if cluster:
80 span.context["db"] = {"instance": cluster}
81
82 return result
83
84 def _update_context_by_request_data(self, context, instance, args, kwargs):
85 args_len = len(args)
86 url = args[1] if args_len > 1 else kwargs.get("url")
87 params = args[2] if args_len > 2 else kwargs.get("params")
88 body_serialized = args[3] if args_len > 3 else kwargs.get("body")
89
90 if "?" in url and not params:
91 url, qs = url.split("?", 1)
92 params = {k: v[0] for k, v in parse_qs(qs).items()}
93
94 should_capture_body = bool(should_capture_body_re.search(url))
95
96 context["db"] = {"type": "elasticsearch"}
97 if should_capture_body:
98 query = []
99 # using both q AND body is allowed in some API endpoints / ES versions,
100 # but not in others. We simply capture both if they are there so the
101 # user can see it.
102 if params and "q" in params:
103 # 'q' may already be encoded to a byte string at this point.
104 # We assume utf8, which is the default
105 q = params["q"]
106 if isinstance(q, bytes):
107 q = q.decode("utf-8", errors="replace")
108 query.append("q=" + q)
109 if body_serialized:
110 if isinstance(body_serialized, bytes):
111 query.append(body_serialized.decode("utf-8", errors="replace"))
112 else:
113 query.append(body_serialized)
114 if query:
115 context["db"]["statement"] = "\n\n".join(query)
116
117 # ES5: `host` is URL, no `port` attribute
118 # ES6, ES7: `host` URL, `hostname` is host, `port` is port
119 # ES8: `host` is hostname, no `hostname` attribute, `port` is `port`
120 if not hasattr(instance, "port"):
121 # ES5, parse hostname and port from URL stored in `host`
122 parsed_url = urlparse(instance.host)
123 host = parsed_url.hostname
124 port = parsed_url.port
125 elif not hasattr(instance, "hostname"):
126 # ES8 (and up, one can hope)
127 host = instance.host
128 port = instance.port
129 else:
130 # ES6, ES7
131 host = instance.hostname
132 port = instance.port
133
134 context["destination"] = {"address": host, "port": port}
135
136
137 class ElasticsearchTransportInstrumentation(AbstractInstrumentedModule):
138 name = "elasticsearch_connection"
139
140 def get_instrument_list(self):
141 try:
142 import elastic_transport # noqa: F401
143
144 return [
145 ("elastic_transport", "Transport.perform_request"),
146 ]
147 except ImportError:
148 return [
149 ("elasticsearch.transport", "Transport.perform_request"),
150 ]
151
152 def call(self, module, method, wrapped, instance, args, kwargs):
153 with elasticapm.capture_span(
154 self._get_signature(args, kwargs),
155 span_type="db",
156 span_subtype="elasticsearch",
157 span_action="query",
158 extra={},
159 skip_frames=2,
160 leaf=True,
161 ) as span:
162 result_data = wrapped(*args, **kwargs)
163
164 hits = self._get_hits(result_data)
165 if hits:
166 span.context["db"]["rows_affected"] = hits
167
168 return result_data
169
170 def _get_signature(self, args, kwargs):
171 args_len = len(args)
172 http_method = args[0] if args_len else kwargs.get("method")
173 http_path = args[1] if args_len > 1 else kwargs.get("url")
174 http_path = http_path.split("?", 1)[0] # we don't want to capture a potential query string in the span name
175
176 return "ES %s %s" % (http_method, http_path)
177
178 def _get_hits(self, result) -> Optional[int]:
179 if getattr(result, "body", None) and "hits" in result.body: # ES >= 8
180 return result.body["hits"].get("total", {}).get("value")
181 elif isinstance(result, dict) and "hits" in result and "total" in result["hits"]:
182 return (
183 result["hits"]["total"]["value"]
184 if isinstance(result["hits"]["total"], dict)
185 else result["hits"]["total"]
186 )
187
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticapm/instrumentation/packages/elasticsearch.py b/elasticapm/instrumentation/packages/elasticsearch.py
--- a/elasticapm/instrumentation/packages/elasticsearch.py
+++ b/elasticapm/instrumentation/packages/elasticsearch.py
@@ -163,7 +163,7 @@
hits = self._get_hits(result_data)
if hits:
- span.context["db"]["rows_affected"] = hits
+ span.update_context("db", {"rows_affected": hits})
return result_data
| {"golden_diff": "diff --git a/elasticapm/instrumentation/packages/elasticsearch.py b/elasticapm/instrumentation/packages/elasticsearch.py\n--- a/elasticapm/instrumentation/packages/elasticsearch.py\n+++ b/elasticapm/instrumentation/packages/elasticsearch.py\n@@ -163,7 +163,7 @@\n \n hits = self._get_hits(result_data)\n if hits:\n- span.context[\"db\"][\"rows_affected\"] = hits\n+ span.update_context(\"db\", {\"rows_affected\": hits})\n \n return result_data\n", "issue": "KeyError: 'db' for DroppedSpan when running scan query on Elasticsearch\n**Describe the bug**: ...\r\n\r\nElastic APM fails with a `KeyError: db key not found`.\r\n\r\nThe application where this happens is a Django project that stores/reads data from Elasticsearch. I have APM enable (APM server and Elasticsearch cluster are both running on Elastic Cloud). The library fails with the aforementioned error (shown in the snippet and screenshots below) while running a scan query on Elasticsearch. It looks like it's dropping some spans, which ends up in this case:\r\n\r\n```\r\n hits = self._get_hits(result_data)\r\n if hits:\r\n span.context[\"db\"][\"rows_affected\"] = hits\r\n \r\n return result_data\r\n```\r\n\r\nhere's a screenshot of what I see on the APM Error page:\r\n\r\n\r\n\r\nFew variables from the context:\r\n\r\n\r\n\r\n\r\n**To Reproduce**\r\n\r\nUnfortunately, I don't have a reproducible snippet. \r\n\r\n**Environment (please complete the following information)**\r\n- OS: Linux (containerized)\r\n- Python version: 3.9.15\r\n- Framework and version [e.g. Django 2.1]:\r\n- APM Server version: 7.17.4\r\n- Agent version: \r\n\r\n```\r\n$ pip freeze | grep elastic\r\ndjango-elasticsearch-dsl==7.2.2\r\nelastic-apm==6.13.1\r\nelasticsearch==7.17.4\r\nelasticsearch-dsl==7.4.0\r\n```\r\n\r\n\r\n**Additional context**\r\n\r\nAdd any other context about the problem here.\r\n\r\n\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom __future__ import absolute_import\n\nimport re\nfrom typing import Optional\nfrom urllib.parse import parse_qs, urlparse\n\nimport elasticapm\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.traces import DroppedSpan, execution_context\nfrom elasticapm.utils.logging import get_logger\n\nlogger = get_logger(\"elasticapm.instrument\")\n\nshould_capture_body_re = re.compile(\"/(_search|_msearch|_count|_async_search|_sql|_eql)(/|$)\")\n\n\nclass ElasticsearchConnectionInstrumentation(AbstractInstrumentedModule):\n name = \"elasticsearch_connection\"\n\n def get_instrument_list(self):\n try:\n import elastic_transport # noqa: F401\n\n return [\n (\"elastic_transport._node._http_urllib3\", \"Urllib3HttpNode.perform_request\"),\n (\"elastic_transport._node._http_requests\", \"RequestsHttpNode.perform_request\"),\n ]\n except ImportError:\n return [\n (\"elasticsearch.connection.http_urllib3\", \"Urllib3HttpConnection.perform_request\"),\n (\"elasticsearch.connection.http_requests\", \"RequestsHttpConnection.perform_request\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n span = execution_context.get_span()\n if not span or isinstance(span, DroppedSpan):\n return wrapped(*args, **kwargs)\n\n self._update_context_by_request_data(span.context, instance, args, kwargs)\n\n result = wrapped(*args, **kwargs)\n if hasattr(result, \"meta\"): # elasticsearch-py 8.x+\n status_code = result.meta.status\n cluster = result.meta.headers.get(\"x-found-handling-cluster\")\n else:\n status_code = result[0]\n cluster = result[1].get(\"x-found-handling-cluster\")\n span.context[\"http\"] = {\"status_code\": status_code}\n if cluster:\n span.context[\"db\"] = {\"instance\": cluster}\n\n return result\n\n def _update_context_by_request_data(self, context, instance, args, kwargs):\n args_len = len(args)\n url = args[1] if args_len > 1 else kwargs.get(\"url\")\n params = args[2] if args_len > 2 else kwargs.get(\"params\")\n body_serialized = args[3] if args_len > 3 else kwargs.get(\"body\")\n\n if \"?\" in url and not params:\n url, qs = url.split(\"?\", 1)\n params = {k: v[0] for k, v in parse_qs(qs).items()}\n\n should_capture_body = bool(should_capture_body_re.search(url))\n\n context[\"db\"] = {\"type\": \"elasticsearch\"}\n if should_capture_body:\n query = []\n # using both q AND body is allowed in some API endpoints / ES versions,\n # but not in others. 
We simply capture both if they are there so the\n # user can see it.\n if params and \"q\" in params:\n # 'q' may already be encoded to a byte string at this point.\n # We assume utf8, which is the default\n q = params[\"q\"]\n if isinstance(q, bytes):\n q = q.decode(\"utf-8\", errors=\"replace\")\n query.append(\"q=\" + q)\n if body_serialized:\n if isinstance(body_serialized, bytes):\n query.append(body_serialized.decode(\"utf-8\", errors=\"replace\"))\n else:\n query.append(body_serialized)\n if query:\n context[\"db\"][\"statement\"] = \"\\n\\n\".join(query)\n\n # ES5: `host` is URL, no `port` attribute\n # ES6, ES7: `host` URL, `hostname` is host, `port` is port\n # ES8: `host` is hostname, no `hostname` attribute, `port` is `port`\n if not hasattr(instance, \"port\"):\n # ES5, parse hostname and port from URL stored in `host`\n parsed_url = urlparse(instance.host)\n host = parsed_url.hostname\n port = parsed_url.port\n elif not hasattr(instance, \"hostname\"):\n # ES8 (and up, one can hope)\n host = instance.host\n port = instance.port\n else:\n # ES6, ES7\n host = instance.hostname\n port = instance.port\n\n context[\"destination\"] = {\"address\": host, \"port\": port}\n\n\nclass ElasticsearchTransportInstrumentation(AbstractInstrumentedModule):\n name = \"elasticsearch_connection\"\n\n def get_instrument_list(self):\n try:\n import elastic_transport # noqa: F401\n\n return [\n (\"elastic_transport\", \"Transport.perform_request\"),\n ]\n except ImportError:\n return [\n (\"elasticsearch.transport\", \"Transport.perform_request\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n with elasticapm.capture_span(\n self._get_signature(args, kwargs),\n span_type=\"db\",\n span_subtype=\"elasticsearch\",\n span_action=\"query\",\n extra={},\n skip_frames=2,\n leaf=True,\n ) as span:\n result_data = wrapped(*args, **kwargs)\n\n hits = self._get_hits(result_data)\n if hits:\n span.context[\"db\"][\"rows_affected\"] = hits\n\n return result_data\n\n def _get_signature(self, args, kwargs):\n args_len = len(args)\n http_method = args[0] if args_len else kwargs.get(\"method\")\n http_path = args[1] if args_len > 1 else kwargs.get(\"url\")\n http_path = http_path.split(\"?\", 1)[0] # we don't want to capture a potential query string in the span name\n\n return \"ES %s %s\" % (http_method, http_path)\n\n def _get_hits(self, result) -> Optional[int]:\n if getattr(result, \"body\", None) and \"hits\" in result.body: # ES >= 8\n return result.body[\"hits\"].get(\"total\", {}).get(\"value\")\n elif isinstance(result, dict) and \"hits\" in result and \"total\" in result[\"hits\"]:\n return (\n result[\"hits\"][\"total\"][\"value\"]\n if isinstance(result[\"hits\"][\"total\"], dict)\n else result[\"hits\"][\"total\"]\n )\n", "path": "elasticapm/instrumentation/packages/elasticsearch.py"}], "after_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# 
contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom __future__ import absolute_import\n\nimport re\nfrom typing import Optional\nfrom urllib.parse import parse_qs, urlparse\n\nimport elasticapm\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.traces import DroppedSpan, execution_context\nfrom elasticapm.utils.logging import get_logger\n\nlogger = get_logger(\"elasticapm.instrument\")\n\nshould_capture_body_re = re.compile(\"/(_search|_msearch|_count|_async_search|_sql|_eql)(/|$)\")\n\n\nclass ElasticsearchConnectionInstrumentation(AbstractInstrumentedModule):\n name = \"elasticsearch_connection\"\n\n def get_instrument_list(self):\n try:\n import elastic_transport # noqa: F401\n\n return [\n (\"elastic_transport._node._http_urllib3\", \"Urllib3HttpNode.perform_request\"),\n (\"elastic_transport._node._http_requests\", \"RequestsHttpNode.perform_request\"),\n ]\n except ImportError:\n return [\n (\"elasticsearch.connection.http_urllib3\", \"Urllib3HttpConnection.perform_request\"),\n (\"elasticsearch.connection.http_requests\", \"RequestsHttpConnection.perform_request\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n span = execution_context.get_span()\n if not span or isinstance(span, DroppedSpan):\n return wrapped(*args, **kwargs)\n\n self._update_context_by_request_data(span.context, instance, args, kwargs)\n\n result = wrapped(*args, **kwargs)\n if hasattr(result, \"meta\"): # elasticsearch-py 8.x+\n status_code = result.meta.status\n cluster = result.meta.headers.get(\"x-found-handling-cluster\")\n else:\n status_code = result[0]\n cluster = result[1].get(\"x-found-handling-cluster\")\n span.context[\"http\"] = {\"status_code\": status_code}\n if cluster:\n span.context[\"db\"] = {\"instance\": cluster}\n\n return result\n\n def _update_context_by_request_data(self, context, instance, args, kwargs):\n args_len = len(args)\n url = args[1] if args_len > 1 else kwargs.get(\"url\")\n params = args[2] if args_len > 2 else kwargs.get(\"params\")\n body_serialized = args[3] if args_len > 3 else kwargs.get(\"body\")\n\n if \"?\" in url and not params:\n url, qs = url.split(\"?\", 1)\n params = {k: v[0] for k, v in parse_qs(qs).items()}\n\n should_capture_body = bool(should_capture_body_re.search(url))\n\n context[\"db\"] = {\"type\": \"elasticsearch\"}\n if should_capture_body:\n query = []\n # using both q AND body is allowed in some API endpoints / ES versions,\n # but not in others. 
We simply capture both if they are there so the\n # user can see it.\n if params and \"q\" in params:\n # 'q' may already be encoded to a byte string at this point.\n # We assume utf8, which is the default\n q = params[\"q\"]\n if isinstance(q, bytes):\n q = q.decode(\"utf-8\", errors=\"replace\")\n query.append(\"q=\" + q)\n if body_serialized:\n if isinstance(body_serialized, bytes):\n query.append(body_serialized.decode(\"utf-8\", errors=\"replace\"))\n else:\n query.append(body_serialized)\n if query:\n context[\"db\"][\"statement\"] = \"\\n\\n\".join(query)\n\n # ES5: `host` is URL, no `port` attribute\n # ES6, ES7: `host` URL, `hostname` is host, `port` is port\n # ES8: `host` is hostname, no `hostname` attribute, `port` is `port`\n if not hasattr(instance, \"port\"):\n # ES5, parse hostname and port from URL stored in `host`\n parsed_url = urlparse(instance.host)\n host = parsed_url.hostname\n port = parsed_url.port\n elif not hasattr(instance, \"hostname\"):\n # ES8 (and up, one can hope)\n host = instance.host\n port = instance.port\n else:\n # ES6, ES7\n host = instance.hostname\n port = instance.port\n\n context[\"destination\"] = {\"address\": host, \"port\": port}\n\n\nclass ElasticsearchTransportInstrumentation(AbstractInstrumentedModule):\n name = \"elasticsearch_connection\"\n\n def get_instrument_list(self):\n try:\n import elastic_transport # noqa: F401\n\n return [\n (\"elastic_transport\", \"Transport.perform_request\"),\n ]\n except ImportError:\n return [\n (\"elasticsearch.transport\", \"Transport.perform_request\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n with elasticapm.capture_span(\n self._get_signature(args, kwargs),\n span_type=\"db\",\n span_subtype=\"elasticsearch\",\n span_action=\"query\",\n extra={},\n skip_frames=2,\n leaf=True,\n ) as span:\n result_data = wrapped(*args, **kwargs)\n\n hits = self._get_hits(result_data)\n if hits:\n span.update_context(\"db\", {\"rows_affected\": hits})\n\n return result_data\n\n def _get_signature(self, args, kwargs):\n args_len = len(args)\n http_method = args[0] if args_len else kwargs.get(\"method\")\n http_path = args[1] if args_len > 1 else kwargs.get(\"url\")\n http_path = http_path.split(\"?\", 1)[0] # we don't want to capture a potential query string in the span name\n\n return \"ES %s %s\" % (http_method, http_path)\n\n def _get_hits(self, result) -> Optional[int]:\n if getattr(result, \"body\", None) and \"hits\" in result.body: # ES >= 8\n return result.body[\"hits\"].get(\"total\", {}).get(\"value\")\n elif isinstance(result, dict) and \"hits\" in result and \"total\" in result[\"hits\"]:\n return (\n result[\"hits\"][\"total\"][\"value\"]\n if isinstance(result[\"hits\"][\"total\"], dict)\n else result[\"hits\"][\"total\"]\n )\n", "path": "elasticapm/instrumentation/packages/elasticsearch.py"}]} | 2,870 | 120 |
gh_patches_debug_14724 | rasdani/github-patches | git_diff | scikit-hep__pyhf-235 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
loosen numpy requirements for non-extra installs
# Description
we are pretty restrictive in the numpy version range due to trying to conform to TF's valid range, but TF is only one of the backends. If just installing `pip install pyhf`, we should not force users to a specific range unless we require the APIs.
`numpy>=1.14.0` should be enough unless I'm missing something. @kratsg since you changed this last, any reason you see to restrict numpy further?
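A minimal sketch of the proposed layout (the bounds shown are illustrative), keeping the TF-driven pin inside the `tensorflow` extra only:

```python
# setup.py (sketch): no direct numpy pin for a plain `pip install pyhf`;
# the TensorFlow-compatible bound lives only inside that extra.
from setuptools import setup, find_packages

setup(
    name='pyhf',
    packages=find_packages(),
    install_requires=[
        'scipy',                 # pulls in numpy without an explicit upper bound
        'click>=6.0',
        'tqdm',
        'six',
        'jsonschema>=v3.0.0a2',
    ],
    extras_require={
        'tensorflow': [
            'tensorflow>=1.10.0',
            'numpy<=1.14.5,>=1.14.0',   # only TensorFlow users get the tight range
            'setuptools<=39.1.0',
        ],
    },
)
```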
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2 setup(
3 name = 'pyhf',
4 version = '0.0.15',
5 description = '(partial) pure python histfactory implementation',
6 url = '',
7 author = 'Lukas Heinrich',
8 author_email = '[email protected]',
9 packages = find_packages(),
10 include_package_data = True,
11 install_requires = [
12 'numpy<=1.14.5,>=1.14.3', # required by tensorflow, mxnet, and us
13 'scipy',
14 'click>=6.0', # for console scripts,
15 'tqdm', # for readxml
16 'six', # for modifiers
17 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6
18 ],
19 extras_require = {
20 'xmlimport': [
21 'uproot',
22 ],
23 'torch': [
24 'torch>=0.4.0'
25 ],
26 'mxnet':[
27 'mxnet>=1.0.0',
28 'requests<2.19.0,>=2.18.4',
29 'numpy<1.15.0,>=1.8.2',
30 'requests<2.19.0,>=2.18.4',
31 ],
32 'tensorflow':[
33 'tensorflow>=1.10.0',
34 'numpy<=1.14.5,>=1.13.3',
35 'setuptools<=39.1.0',
36 ],
37 'develop': [
38 'pyflakes',
39 'pytest>=3.5.1',
40 'pytest-cov>=2.5.1',
41 'pytest-benchmark[histogram]',
42 'pytest-console-scripts',
43 'python-coveralls',
44 'coverage>=4.0', # coveralls
45 'matplotlib',
46 'jupyter',
47 'uproot',
48 'papermill',
49 'graphviz',
50 'sphinx',
51 'sphinxcontrib-bibtex',
52 'sphinxcontrib-napoleon',
53 'sphinx_rtd_theme',
54 'nbsphinx',
55 'jsonpatch'
56 ]
57 },
58 entry_points = {
59 'console_scripts': ['pyhf=pyhf.commandline:pyhf']
60 },
61 dependency_links = [
62 ]
63 )
64
```
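A quick way to see the effect of the pin above on a plain install (a sketch; assumes the package was installed from this `setup.py` and Python 3.8+ for `importlib.metadata`):

```python
# List the requirements that apply without any extra selected; with the setup.py
# above, the TF-driven numpy bound applies even to a plain `pip install pyhf`.
from importlib.metadata import requires

base = [r for r in (requires("pyhf") or []) if "extra ==" not in r]
print([r for r in base if r.lower().startswith("numpy")])
# expected to include something like 'numpy<=1.14.5,>=1.14.3'
```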
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -9,8 +9,7 @@
packages = find_packages(),
include_package_data = True,
install_requires = [
- 'numpy<=1.14.5,>=1.14.3', # required by tensorflow, mxnet, and us
- 'scipy',
+ 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet
'click>=6.0', # for console scripts,
'tqdm', # for readxml
'six', # for modifiers
@@ -31,7 +30,7 @@
],
'tensorflow':[
'tensorflow>=1.10.0',
- 'numpy<=1.14.5,>=1.13.3',
+ 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass
'setuptools<=39.1.0',
],
'develop': [
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,8 +9,7 @@\n packages = find_packages(),\n include_package_data = True,\n install_requires = [\n- 'numpy<=1.14.5,>=1.14.3', # required by tensorflow, mxnet, and us\n- 'scipy',\n+ 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet\n 'click>=6.0', # for console scripts,\n 'tqdm', # for readxml\n 'six', # for modifiers\n@@ -31,7 +30,7 @@\n ],\n 'tensorflow':[\n 'tensorflow>=1.10.0',\n- 'numpy<=1.14.5,>=1.13.3',\n+ 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass\n 'setuptools<=39.1.0',\n ],\n 'develop': [\n", "issue": "loosen numpy requirements for non-extra installs\n# Description\r\n\r\nwe are pretty restrictive in the numpy version range due to trying to conform to TF's valid range, but TF is only one of the backends. If just installing `pip install pyhf` we should not force users to a speciic range unless we require the APIs\r\n\r\n`numpy>=1.14.0` should be enough unless i'm missing something. @kratsg since you changed this last, any reason you see to restrict numpy further?\n", "before_files": [{"content": "from setuptools import setup, find_packages\nsetup(\n name = 'pyhf',\n version = '0.0.15',\n description = '(partial) pure python histfactory implementation',\n url = '',\n author = 'Lukas Heinrich',\n author_email = '[email protected]',\n packages = find_packages(),\n include_package_data = True,\n install_requires = [\n 'numpy<=1.14.5,>=1.14.3', # required by tensorflow, mxnet, and us\n 'scipy',\n 'click>=6.0', # for console scripts,\n 'tqdm', # for readxml\n 'six', # for modifiers\n 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6\n ],\n extras_require = {\n 'xmlimport': [\n 'uproot',\n ],\n 'torch': [\n 'torch>=0.4.0'\n ],\n 'mxnet':[\n 'mxnet>=1.0.0',\n 'requests<2.19.0,>=2.18.4',\n 'numpy<1.15.0,>=1.8.2',\n 'requests<2.19.0,>=2.18.4',\n ],\n 'tensorflow':[\n 'tensorflow>=1.10.0',\n 'numpy<=1.14.5,>=1.13.3',\n 'setuptools<=39.1.0',\n ],\n 'develop': [\n 'pyflakes',\n 'pytest>=3.5.1',\n 'pytest-cov>=2.5.1',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'python-coveralls',\n 'coverage>=4.0', # coveralls\n 'matplotlib',\n 'jupyter',\n 'uproot',\n 'papermill',\n 'graphviz',\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinxcontrib-napoleon',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'jsonpatch'\n ]\n },\n entry_points = {\n 'console_scripts': ['pyhf=pyhf.commandline:pyhf']\n },\n dependency_links = [\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\nsetup(\n name = 'pyhf',\n version = '0.0.15',\n description = '(partial) pure python histfactory implementation',\n url = '',\n author = 'Lukas Heinrich',\n author_email = '[email protected]',\n packages = find_packages(),\n include_package_data = True,\n install_requires = [\n 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet\n 'click>=6.0', # for console scripts,\n 'tqdm', # for readxml\n 'six', # for modifiers\n 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6\n ],\n extras_require = {\n 'xmlimport': [\n 'uproot',\n ],\n 'torch': [\n 'torch>=0.4.0'\n ],\n 'mxnet':[\n 'mxnet>=1.0.0',\n 'requests<2.19.0,>=2.18.4',\n 'numpy<1.15.0,>=1.8.2',\n 'requests<2.19.0,>=2.18.4',\n ],\n 'tensorflow':[\n 'tensorflow>=1.10.0',\n 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass\n 'setuptools<=39.1.0',\n ],\n 'develop': [\n 'pyflakes',\n 
'pytest>=3.5.1',\n 'pytest-cov>=2.5.1',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'python-coveralls',\n 'coverage>=4.0', # coveralls\n 'matplotlib',\n 'jupyter',\n 'uproot',\n 'papermill',\n 'graphviz',\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinxcontrib-napoleon',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'jsonpatch'\n ]\n },\n entry_points = {\n 'console_scripts': ['pyhf=pyhf.commandline:pyhf']\n },\n dependency_links = [\n ]\n)\n", "path": "setup.py"}]} | 995 | 260 |
gh_patches_debug_38469 | rasdani/github-patches | git_diff | scrapy__scrapy-1267 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Backward incompatibility for relocated paths in settings
Reported by @dangra
This issue manifests when mixing old paths and new ones for extensions and middlewares (this can happen for example while using a newer version of Scrapy in a project that hasn't updated to the new paths yet). Since paths aren't normalized, the same component can be loaded twice.
Take these settings for example:
``` python
# scrapy/settings/default_settings.py
EXTENSIONS_BASE = {
'scrapy.extensions.debug.StackTraceDump': 100, # new path
}
```
``` python
# myproject/settings.py
EXTENSIONS = {
'scrapy.contrib.debug.StackTraceDump': 200, # old path
}
```
While merging both dictionaries to build the list of components, the same StackTraceDump class is going to be loaded twice since it appears in two different keys.
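A short illustration of the failure mode with plain dict merging, mirroring how the component list is built:

```python
# Without path normalisation the old and new spellings are different dict keys,
# so the merged mapping schedules StackTraceDump twice (under two import paths).
EXTENSIONS_BASE = {'scrapy.extensions.debug.StackTraceDump': 100}   # new path
EXTENSIONS = {'scrapy.contrib.debug.StackTraceDump': 200}           # old path

compdict = dict(EXTENSIONS_BASE)
compdict.update(EXTENSIONS)

ordered = [path for path, order in sorted(compdict.items(), key=lambda kv: kv[1])
           if order is not None]
print(ordered)
# ['scrapy.extensions.debug.StackTraceDump', 'scrapy.contrib.debug.StackTraceDump']
```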
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/utils/deprecate.py`
Content:
```
1 """Some helpers for deprecation messages"""
2
3 import warnings
4 import inspect
5 from scrapy.exceptions import ScrapyDeprecationWarning
6
7
8 def attribute(obj, oldattr, newattr, version='0.12'):
9 cname = obj.__class__.__name__
10 warnings.warn("%s.%s attribute is deprecated and will be no longer supported "
11 "in Scrapy %s, use %s.%s attribute instead" % \
12 (cname, oldattr, version, cname, newattr), ScrapyDeprecationWarning, stacklevel=3)
13
14
15 def create_deprecated_class(name, new_class, clsdict=None,
16 warn_category=ScrapyDeprecationWarning,
17 warn_once=True,
18 old_class_path=None,
19 new_class_path=None,
20 subclass_warn_message="{cls} inherits from "\
21 "deprecated class {old}, please inherit "\
22 "from {new}.",
23 instance_warn_message="{cls} is deprecated, "\
24 "instantiate {new} instead."):
25 """
26 Return a "deprecated" class that causes its subclasses to issue a warning.
27 Subclasses of ``new_class`` are considered subclasses of this class.
28 It also warns when the deprecated class is instantiated, but do not when
29 its subclasses are instantiated.
30
31 It can be used to rename a base class in a library. For example, if we
32 have
33
34 class OldName(SomeClass):
35 # ...
36
37 and we want to rename it to NewName, we can do the following::
38
39 class NewName(SomeClass):
40 # ...
41
42 OldName = create_deprecated_class('OldName', NewName)
43
44 Then, if user class inherits from OldName, warning is issued. Also, if
45 some code uses ``issubclass(sub, OldName)`` or ``isinstance(sub(), OldName)``
46 checks they'll still return True if sub is a subclass of NewName instead of
47 OldName.
48 """
49
50 class DeprecatedClass(new_class.__class__):
51
52 deprecated_class = None
53 warned_on_subclass = False
54
55 def __new__(metacls, name, bases, clsdict_):
56 cls = super(DeprecatedClass, metacls).__new__(metacls, name, bases, clsdict_)
57 if metacls.deprecated_class is None:
58 metacls.deprecated_class = cls
59 return cls
60
61 def __init__(cls, name, bases, clsdict_):
62 meta = cls.__class__
63 old = meta.deprecated_class
64 if old in bases and not (warn_once and meta.warned_on_subclass):
65 meta.warned_on_subclass = True
66 msg = subclass_warn_message.format(cls=_clspath(cls),
67 old=_clspath(old, old_class_path),
68 new=_clspath(new_class, new_class_path))
69 if warn_once:
70 msg += ' (warning only on first subclass, there may be others)'
71 warnings.warn(msg, warn_category, stacklevel=2)
72 super(DeprecatedClass, cls).__init__(name, bases, clsdict_)
73
74 # see http://www.python.org/dev/peps/pep-3119/#overloading-isinstance-and-issubclass
75 # and http://docs.python.org/2/reference/datamodel.html#customizing-instance-and-subclass-checks
76 # for implementation details
77 def __instancecheck__(cls, inst):
78 return any(cls.__subclasscheck__(c)
79 for c in {type(inst), inst.__class__})
80
81 def __subclasscheck__(cls, sub):
82 if cls is not DeprecatedClass.deprecated_class:
83 # we should do the magic only if second `issubclass` argument
84 # is the deprecated class itself - subclasses of the
85 # deprecated class should not use custom `__subclasscheck__`
86 # method.
87 return super(DeprecatedClass, cls).__subclasscheck__(sub)
88
89 if not inspect.isclass(sub):
90 raise TypeError("issubclass() arg 1 must be a class")
91
92 mro = getattr(sub, '__mro__', ())
93 return any(c in {cls, new_class} for c in mro)
94
95 def __call__(cls, *args, **kwargs):
96 old = DeprecatedClass.deprecated_class
97 if cls is old:
98 msg = instance_warn_message.format(cls=_clspath(cls, old_class_path),
99 new=_clspath(new_class, new_class_path))
100 warnings.warn(msg, warn_category, stacklevel=2)
101 return super(DeprecatedClass, cls).__call__(*args, **kwargs)
102
103 deprecated_cls = DeprecatedClass(name, (new_class,), clsdict or {})
104
105 try:
106 frm = inspect.stack()[1]
107 parent_module = inspect.getmodule(frm[0])
108 if parent_module is not None:
109 deprecated_cls.__module__ = parent_module.__name__
110 except Exception as e:
111 # Sometimes inspect.stack() fails (e.g. when the first import of
112 # deprecated class is in jinja2 template). __module__ attribute is not
113 # important enough to raise an exception as users may be unable
114 # to fix inspect.stack() errors.
115 warnings.warn("Error detecting parent module: %r" % e)
116
117 return deprecated_cls
118
119
120 def _clspath(cls, forced=None):
121 if forced is not None:
122 return forced
123 return '{}.{}'.format(cls.__module__, cls.__name__)
124
```
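For reference, a small usage sketch of `create_deprecated_class` as described in its docstring (assumes Scrapy is importable):

```python
from scrapy.utils.deprecate import create_deprecated_class

class NewName(object):
    pass

# OldName behaves as an alias that warns when subclassed or instantiated directly.
OldName = create_deprecated_class('OldName', NewName)

class UserClass(OldName):        # emits a ScrapyDeprecationWarning
    pass

assert issubclass(UserClass, OldName)
assert isinstance(UserClass(), OldName)
```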
Path: `scrapy/utils/conf.py`
Content:
```
1 import os
2 import sys
3 from operator import itemgetter
4
5 import six
6 from six.moves.configparser import SafeConfigParser
7
8
9 def build_component_list(base, custom):
10 """Compose a component list based on a custom and base dict of components
11 (typically middlewares or extensions), unless custom is already a list, in
12 which case it's returned.
13 """
14 if isinstance(custom, (list, tuple)):
15 return custom
16 compdict = base.copy()
17 compdict.update(custom)
18 items = (x for x in six.iteritems(compdict) if x[1] is not None)
19 return [x[0] for x in sorted(items, key=itemgetter(1))]
20
21
22 def arglist_to_dict(arglist):
23 """Convert a list of arguments like ['arg1=val1', 'arg2=val2', ...] to a
24 dict
25 """
26 return dict(x.split('=', 1) for x in arglist)
27
28
29 def closest_scrapy_cfg(path='.', prevpath=None):
30 """Return the path to the closest scrapy.cfg file by traversing the current
31 directory and its parents
32 """
33 if path == prevpath:
34 return ''
35 path = os.path.abspath(path)
36 cfgfile = os.path.join(path, 'scrapy.cfg')
37 if os.path.exists(cfgfile):
38 return cfgfile
39 return closest_scrapy_cfg(os.path.dirname(path), path)
40
41
42 def init_env(project='default', set_syspath=True):
43 """Initialize environment to use command-line tool from inside a project
44 dir. This sets the Scrapy settings module and modifies the Python path to
45 be able to locate the project module.
46 """
47 cfg = get_config()
48 if cfg.has_option('settings', project):
49 os.environ['SCRAPY_SETTINGS_MODULE'] = cfg.get('settings', project)
50 closest = closest_scrapy_cfg()
51 if closest:
52 projdir = os.path.dirname(closest)
53 if set_syspath and projdir not in sys.path:
54 sys.path.append(projdir)
55
56
57 def get_config(use_closest=True):
58 """Get Scrapy config file as a SafeConfigParser"""
59 sources = get_sources(use_closest)
60 cfg = SafeConfigParser()
61 cfg.read(sources)
62 return cfg
63
64
65 def get_sources(use_closest=True):
66 xdg_config_home = os.environ.get('XDG_CONFIG_HOME') or \
67 os.path.expanduser('~/.config')
68 sources = ['/etc/scrapy.cfg', r'c:\scrapy\scrapy.cfg',
69 xdg_config_home + '/scrapy.cfg',
70 os.path.expanduser('~/.scrapy.cfg')]
71 if use_closest:
72 sources.append(closest_scrapy_cfg())
73 return sources
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/utils/conf.py b/scrapy/utils/conf.py
--- a/scrapy/utils/conf.py
+++ b/scrapy/utils/conf.py
@@ -5,16 +5,30 @@
import six
from six.moves.configparser import SafeConfigParser
+from scrapy.utils.deprecate import update_classpath
-def build_component_list(base, custom):
+
+def build_component_list(base, custom, convert=update_classpath):
"""Compose a component list based on a custom and base dict of components
(typically middlewares or extensions), unless custom is already a list, in
which case it's returned.
"""
+
+ def _check_components(complist):
+ if len({convert(c) for c in complist}) != len(complist):
+ raise ValueError('Some paths in {!r} convert to the same object, '
+ 'please update your settings'.format(complist))
+
if isinstance(custom, (list, tuple)):
- return custom
- compdict = base.copy()
- compdict.update(custom)
+ _check_components(custom)
+ return type(custom)(convert(c) for c in custom)
+
+ def _map_keys(compdict):
+ _check_components(compdict)
+ return {convert(k): v for k, v in six.iteritems(compdict)}
+
+ compdict = _map_keys(base)
+ compdict.update(_map_keys(custom))
items = (x for x in six.iteritems(compdict) if x[1] is not None)
return [x[0] for x in sorted(items, key=itemgetter(1))]
diff --git a/scrapy/utils/deprecate.py b/scrapy/utils/deprecate.py
--- a/scrapy/utils/deprecate.py
+++ b/scrapy/utils/deprecate.py
@@ -121,3 +121,37 @@
if forced is not None:
return forced
return '{}.{}'.format(cls.__module__, cls.__name__)
+
+
+DEPRECATION_RULES = [
+ ('scrapy.contrib_exp.downloadermiddleware.decompression.', 'scrapy.downloadermiddlewares.decompression.'),
+ ('scrapy.contrib_exp.iterators.', 'scrapy.utils.iterators.'),
+ ('scrapy.contrib.downloadermiddleware.', 'scrapy.downloadermiddlewares.'),
+ ('scrapy.contrib.exporter.', 'scrapy.exporters.'),
+ ('scrapy.contrib.linkextractors.', 'scrapy.linkextractors.'),
+ ('scrapy.contrib.loader.processor.', 'scrapy.loader.processors.'),
+ ('scrapy.contrib.loader.', 'scrapy.loader.'),
+ ('scrapy.contrib.pipeline.', 'scrapy.pipelines.'),
+ ('scrapy.contrib.spidermiddleware.', 'scrapy.spidermiddlewares.'),
+ ('scrapy.contrib.spiders.', 'scrapy.spiders.'),
+ ('scrapy.contrib.', 'scrapy.extensions.'),
+ ('scrapy.command.', 'scrapy.commands.'),
+ ('scrapy.dupefilter.', 'scrapy.dupefilters.'),
+ ('scrapy.linkextractor.', 'scrapy.linkextractors.'),
+ ('scrapy.spider.', 'scrapy.spiders.'),
+ ('scrapy.squeue.', 'scrapy.squeues.'),
+ ('scrapy.statscol.', 'scrapy.statscollectors.'),
+ ('scrapy.utils.decorator.', 'scrapy.utils.decorators.'),
+ ('scrapy.spidermanager.SpiderManager', 'scrapy.spiderloader.SpiderLoader'),
+]
+
+
+def update_classpath(path):
+ """Update a deprecated path from an object with its new location"""
+ for prefix, replacement in DEPRECATION_RULES:
+ if path.startswith(prefix):
+ new_path = path.replace(prefix, replacement, 1)
+ warnings.warn("`{}` class is deprecated, use `{}` instead".format(path, new_path),
+ ScrapyDeprecationWarning)
+ return new_path
+ return path
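With this patch, a mixed old/new spelling collapses onto the new path (with a deprecation warning), and passing both spellings in one dict raises `ValueError`; roughly:

```python
# Sketch of the patched build_component_list behaviour (assumes Scrapy with this patch applied).
from scrapy.utils.conf import build_component_list

base = {'scrapy.extensions.debug.StackTraceDump': 100}          # new path
custom = {'scrapy.contrib.debug.StackTraceDump': 200}           # old path

# The old spelling is converted, so only one entry survives
# (plus a ScrapyDeprecationWarning from update_classpath).
print(build_component_list(base, custom))
# -> ['scrapy.extensions.debug.StackTraceDump']

# Listing both spellings in the same dict converts to the same object -> ValueError.
build_component_list(base, {'scrapy.contrib.debug.StackTraceDump': 200,
                            'scrapy.extensions.debug.StackTraceDump': 300})
```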
| {"golden_diff": "diff --git a/scrapy/utils/conf.py b/scrapy/utils/conf.py\n--- a/scrapy/utils/conf.py\n+++ b/scrapy/utils/conf.py\n@@ -5,16 +5,30 @@\n import six\n from six.moves.configparser import SafeConfigParser\n \n+from scrapy.utils.deprecate import update_classpath\n \n-def build_component_list(base, custom):\n+\n+def build_component_list(base, custom, convert=update_classpath):\n \"\"\"Compose a component list based on a custom and base dict of components\n (typically middlewares or extensions), unless custom is already a list, in\n which case it's returned.\n \"\"\"\n+\n+ def _check_components(complist):\n+ if len({convert(c) for c in complist}) != len(complist):\n+ raise ValueError('Some paths in {!r} convert to the same object, '\n+ 'please update your settings'.format(complist))\n+\n if isinstance(custom, (list, tuple)):\n- return custom\n- compdict = base.copy()\n- compdict.update(custom)\n+ _check_components(custom)\n+ return type(custom)(convert(c) for c in custom)\n+\n+ def _map_keys(compdict):\n+ _check_components(compdict)\n+ return {convert(k): v for k, v in six.iteritems(compdict)}\n+\n+ compdict = _map_keys(base)\n+ compdict.update(_map_keys(custom))\n items = (x for x in six.iteritems(compdict) if x[1] is not None)\n return [x[0] for x in sorted(items, key=itemgetter(1))]\n \ndiff --git a/scrapy/utils/deprecate.py b/scrapy/utils/deprecate.py\n--- a/scrapy/utils/deprecate.py\n+++ b/scrapy/utils/deprecate.py\n@@ -121,3 +121,37 @@\n if forced is not None:\n return forced\n return '{}.{}'.format(cls.__module__, cls.__name__)\n+\n+\n+DEPRECATION_RULES = [\n+ ('scrapy.contrib_exp.downloadermiddleware.decompression.', 'scrapy.downloadermiddlewares.decompression.'),\n+ ('scrapy.contrib_exp.iterators.', 'scrapy.utils.iterators.'),\n+ ('scrapy.contrib.downloadermiddleware.', 'scrapy.downloadermiddlewares.'),\n+ ('scrapy.contrib.exporter.', 'scrapy.exporters.'),\n+ ('scrapy.contrib.linkextractors.', 'scrapy.linkextractors.'),\n+ ('scrapy.contrib.loader.processor.', 'scrapy.loader.processors.'),\n+ ('scrapy.contrib.loader.', 'scrapy.loader.'),\n+ ('scrapy.contrib.pipeline.', 'scrapy.pipelines.'),\n+ ('scrapy.contrib.spidermiddleware.', 'scrapy.spidermiddlewares.'),\n+ ('scrapy.contrib.spiders.', 'scrapy.spiders.'),\n+ ('scrapy.contrib.', 'scrapy.extensions.'),\n+ ('scrapy.command.', 'scrapy.commands.'),\n+ ('scrapy.dupefilter.', 'scrapy.dupefilters.'),\n+ ('scrapy.linkextractor.', 'scrapy.linkextractors.'),\n+ ('scrapy.spider.', 'scrapy.spiders.'),\n+ ('scrapy.squeue.', 'scrapy.squeues.'),\n+ ('scrapy.statscol.', 'scrapy.statscollectors.'),\n+ ('scrapy.utils.decorator.', 'scrapy.utils.decorators.'),\n+ ('scrapy.spidermanager.SpiderManager', 'scrapy.spiderloader.SpiderLoader'),\n+]\n+\n+\n+def update_classpath(path):\n+ \"\"\"Update a deprecated path from an object with its new location\"\"\"\n+ for prefix, replacement in DEPRECATION_RULES:\n+ if path.startswith(prefix):\n+ new_path = path.replace(prefix, replacement, 1)\n+ warnings.warn(\"`{}` class is deprecated, use `{}` instead\".format(path, new_path),\n+ ScrapyDeprecationWarning)\n+ return new_path\n+ return path\n", "issue": "Backward incompatibility for relocated paths in settings\nReported by @dangra\n\nThis issue manifests when mixing old paths and new ones for extensions and middlewares (this can happen for example while using a newer version of Scrapy in a project that hasn't updated to the new paths yet). 
Since paths aren't normalized, the same component can be loaded twice.\n\nTake these settings for example:\n\n``` python\n# scrapy/settings/default_settings.py\nEXTENSIONS_BASE = {\n 'scrapy.extensions.debug.StackTraceDump': 100, # new path\n} \n```\n\n``` python\n# myproject/settings.py\nEXTENSIONS = {\n 'scrapy.contrib.debug.StackTraceDump': 200, # old path\n}\n```\n\nWhile merging both dictionaries to build the list of components, the same StackTraceDump class is going to be loaded twice since it appears in two different keys. \n\n", "before_files": [{"content": "\"\"\"Some helpers for deprecation messages\"\"\"\n\nimport warnings\nimport inspect\nfrom scrapy.exceptions import ScrapyDeprecationWarning\n\n\ndef attribute(obj, oldattr, newattr, version='0.12'):\n cname = obj.__class__.__name__\n warnings.warn(\"%s.%s attribute is deprecated and will be no longer supported \"\n \"in Scrapy %s, use %s.%s attribute instead\" % \\\n (cname, oldattr, version, cname, newattr), ScrapyDeprecationWarning, stacklevel=3)\n\n\ndef create_deprecated_class(name, new_class, clsdict=None,\n warn_category=ScrapyDeprecationWarning,\n warn_once=True,\n old_class_path=None,\n new_class_path=None,\n subclass_warn_message=\"{cls} inherits from \"\\\n \"deprecated class {old}, please inherit \"\\\n \"from {new}.\",\n instance_warn_message=\"{cls} is deprecated, \"\\\n \"instantiate {new} instead.\"):\n \"\"\"\n Return a \"deprecated\" class that causes its subclasses to issue a warning.\n Subclasses of ``new_class`` are considered subclasses of this class.\n It also warns when the deprecated class is instantiated, but do not when\n its subclasses are instantiated.\n\n It can be used to rename a base class in a library. For example, if we\n have\n\n class OldName(SomeClass):\n # ...\n\n and we want to rename it to NewName, we can do the following::\n\n class NewName(SomeClass):\n # ...\n\n OldName = create_deprecated_class('OldName', NewName)\n\n Then, if user class inherits from OldName, warning is issued. 
Also, if\n some code uses ``issubclass(sub, OldName)`` or ``isinstance(sub(), OldName)``\n checks they'll still return True if sub is a subclass of NewName instead of\n OldName.\n \"\"\"\n\n class DeprecatedClass(new_class.__class__):\n\n deprecated_class = None\n warned_on_subclass = False\n\n def __new__(metacls, name, bases, clsdict_):\n cls = super(DeprecatedClass, metacls).__new__(metacls, name, bases, clsdict_)\n if metacls.deprecated_class is None:\n metacls.deprecated_class = cls\n return cls\n\n def __init__(cls, name, bases, clsdict_):\n meta = cls.__class__\n old = meta.deprecated_class\n if old in bases and not (warn_once and meta.warned_on_subclass):\n meta.warned_on_subclass = True\n msg = subclass_warn_message.format(cls=_clspath(cls),\n old=_clspath(old, old_class_path),\n new=_clspath(new_class, new_class_path))\n if warn_once:\n msg += ' (warning only on first subclass, there may be others)'\n warnings.warn(msg, warn_category, stacklevel=2)\n super(DeprecatedClass, cls).__init__(name, bases, clsdict_)\n\n # see http://www.python.org/dev/peps/pep-3119/#overloading-isinstance-and-issubclass\n # and http://docs.python.org/2/reference/datamodel.html#customizing-instance-and-subclass-checks\n # for implementation details\n def __instancecheck__(cls, inst):\n return any(cls.__subclasscheck__(c)\n for c in {type(inst), inst.__class__})\n\n def __subclasscheck__(cls, sub):\n if cls is not DeprecatedClass.deprecated_class:\n # we should do the magic only if second `issubclass` argument\n # is the deprecated class itself - subclasses of the\n # deprecated class should not use custom `__subclasscheck__`\n # method.\n return super(DeprecatedClass, cls).__subclasscheck__(sub)\n\n if not inspect.isclass(sub):\n raise TypeError(\"issubclass() arg 1 must be a class\")\n\n mro = getattr(sub, '__mro__', ())\n return any(c in {cls, new_class} for c in mro)\n\n def __call__(cls, *args, **kwargs):\n old = DeprecatedClass.deprecated_class\n if cls is old:\n msg = instance_warn_message.format(cls=_clspath(cls, old_class_path),\n new=_clspath(new_class, new_class_path))\n warnings.warn(msg, warn_category, stacklevel=2)\n return super(DeprecatedClass, cls).__call__(*args, **kwargs)\n\n deprecated_cls = DeprecatedClass(name, (new_class,), clsdict or {})\n\n try:\n frm = inspect.stack()[1]\n parent_module = inspect.getmodule(frm[0])\n if parent_module is not None:\n deprecated_cls.__module__ = parent_module.__name__\n except Exception as e:\n # Sometimes inspect.stack() fails (e.g. when the first import of\n # deprecated class is in jinja2 template). 
__module__ attribute is not\n # important enough to raise an exception as users may be unable\n # to fix inspect.stack() errors.\n warnings.warn(\"Error detecting parent module: %r\" % e)\n\n return deprecated_cls\n\n\ndef _clspath(cls, forced=None):\n if forced is not None:\n return forced\n return '{}.{}'.format(cls.__module__, cls.__name__)\n", "path": "scrapy/utils/deprecate.py"}, {"content": "import os\nimport sys\nfrom operator import itemgetter\n\nimport six\nfrom six.moves.configparser import SafeConfigParser\n\n\ndef build_component_list(base, custom):\n \"\"\"Compose a component list based on a custom and base dict of components\n (typically middlewares or extensions), unless custom is already a list, in\n which case it's returned.\n \"\"\"\n if isinstance(custom, (list, tuple)):\n return custom\n compdict = base.copy()\n compdict.update(custom)\n items = (x for x in six.iteritems(compdict) if x[1] is not None)\n return [x[0] for x in sorted(items, key=itemgetter(1))]\n\n\ndef arglist_to_dict(arglist):\n \"\"\"Convert a list of arguments like ['arg1=val1', 'arg2=val2', ...] to a\n dict\n \"\"\"\n return dict(x.split('=', 1) for x in arglist)\n\n\ndef closest_scrapy_cfg(path='.', prevpath=None):\n \"\"\"Return the path to the closest scrapy.cfg file by traversing the current\n directory and its parents\n \"\"\"\n if path == prevpath:\n return ''\n path = os.path.abspath(path)\n cfgfile = os.path.join(path, 'scrapy.cfg')\n if os.path.exists(cfgfile):\n return cfgfile\n return closest_scrapy_cfg(os.path.dirname(path), path)\n\n\ndef init_env(project='default', set_syspath=True):\n \"\"\"Initialize environment to use command-line tool from inside a project\n dir. This sets the Scrapy settings module and modifies the Python path to\n be able to locate the project module.\n \"\"\"\n cfg = get_config()\n if cfg.has_option('settings', project):\n os.environ['SCRAPY_SETTINGS_MODULE'] = cfg.get('settings', project)\n closest = closest_scrapy_cfg()\n if closest:\n projdir = os.path.dirname(closest)\n if set_syspath and projdir not in sys.path:\n sys.path.append(projdir)\n\n\ndef get_config(use_closest=True):\n \"\"\"Get Scrapy config file as a SafeConfigParser\"\"\"\n sources = get_sources(use_closest)\n cfg = SafeConfigParser()\n cfg.read(sources)\n return cfg\n\n\ndef get_sources(use_closest=True):\n xdg_config_home = os.environ.get('XDG_CONFIG_HOME') or \\\n os.path.expanduser('~/.config')\n sources = ['/etc/scrapy.cfg', r'c:\\scrapy\\scrapy.cfg',\n xdg_config_home + '/scrapy.cfg',\n os.path.expanduser('~/.scrapy.cfg')]\n if use_closest:\n sources.append(closest_scrapy_cfg())\n return sources\n", "path": "scrapy/utils/conf.py"}], "after_files": [{"content": "\"\"\"Some helpers for deprecation messages\"\"\"\n\nimport warnings\nimport inspect\nfrom scrapy.exceptions import ScrapyDeprecationWarning\n\n\ndef attribute(obj, oldattr, newattr, version='0.12'):\n cname = obj.__class__.__name__\n warnings.warn(\"%s.%s attribute is deprecated and will be no longer supported \"\n \"in Scrapy %s, use %s.%s attribute instead\" % \\\n (cname, oldattr, version, cname, newattr), ScrapyDeprecationWarning, stacklevel=3)\n\n\ndef create_deprecated_class(name, new_class, clsdict=None,\n warn_category=ScrapyDeprecationWarning,\n warn_once=True,\n old_class_path=None,\n new_class_path=None,\n subclass_warn_message=\"{cls} inherits from \"\\\n \"deprecated class {old}, please inherit \"\\\n \"from {new}.\",\n instance_warn_message=\"{cls} is deprecated, \"\\\n \"instantiate {new} instead.\"):\n \"\"\"\n 
Return a \"deprecated\" class that causes its subclasses to issue a warning.\n Subclasses of ``new_class`` are considered subclasses of this class.\n It also warns when the deprecated class is instantiated, but do not when\n its subclasses are instantiated.\n\n It can be used to rename a base class in a library. For example, if we\n have\n\n class OldName(SomeClass):\n # ...\n\n and we want to rename it to NewName, we can do the following::\n\n class NewName(SomeClass):\n # ...\n\n OldName = create_deprecated_class('OldName', NewName)\n\n Then, if user class inherits from OldName, warning is issued. Also, if\n some code uses ``issubclass(sub, OldName)`` or ``isinstance(sub(), OldName)``\n checks they'll still return True if sub is a subclass of NewName instead of\n OldName.\n \"\"\"\n\n class DeprecatedClass(new_class.__class__):\n\n deprecated_class = None\n warned_on_subclass = False\n\n def __new__(metacls, name, bases, clsdict_):\n cls = super(DeprecatedClass, metacls).__new__(metacls, name, bases, clsdict_)\n if metacls.deprecated_class is None:\n metacls.deprecated_class = cls\n return cls\n\n def __init__(cls, name, bases, clsdict_):\n meta = cls.__class__\n old = meta.deprecated_class\n if old in bases and not (warn_once and meta.warned_on_subclass):\n meta.warned_on_subclass = True\n msg = subclass_warn_message.format(cls=_clspath(cls),\n old=_clspath(old, old_class_path),\n new=_clspath(new_class, new_class_path))\n if warn_once:\n msg += ' (warning only on first subclass, there may be others)'\n warnings.warn(msg, warn_category, stacklevel=2)\n super(DeprecatedClass, cls).__init__(name, bases, clsdict_)\n\n # see http://www.python.org/dev/peps/pep-3119/#overloading-isinstance-and-issubclass\n # and http://docs.python.org/2/reference/datamodel.html#customizing-instance-and-subclass-checks\n # for implementation details\n def __instancecheck__(cls, inst):\n return any(cls.__subclasscheck__(c)\n for c in {type(inst), inst.__class__})\n\n def __subclasscheck__(cls, sub):\n if cls is not DeprecatedClass.deprecated_class:\n # we should do the magic only if second `issubclass` argument\n # is the deprecated class itself - subclasses of the\n # deprecated class should not use custom `__subclasscheck__`\n # method.\n return super(DeprecatedClass, cls).__subclasscheck__(sub)\n\n if not inspect.isclass(sub):\n raise TypeError(\"issubclass() arg 1 must be a class\")\n\n mro = getattr(sub, '__mro__', ())\n return any(c in {cls, new_class} for c in mro)\n\n def __call__(cls, *args, **kwargs):\n old = DeprecatedClass.deprecated_class\n if cls is old:\n msg = instance_warn_message.format(cls=_clspath(cls, old_class_path),\n new=_clspath(new_class, new_class_path))\n warnings.warn(msg, warn_category, stacklevel=2)\n return super(DeprecatedClass, cls).__call__(*args, **kwargs)\n\n deprecated_cls = DeprecatedClass(name, (new_class,), clsdict or {})\n\n try:\n frm = inspect.stack()[1]\n parent_module = inspect.getmodule(frm[0])\n if parent_module is not None:\n deprecated_cls.__module__ = parent_module.__name__\n except Exception as e:\n # Sometimes inspect.stack() fails (e.g. when the first import of\n # deprecated class is in jinja2 template). 
__module__ attribute is not\n # important enough to raise an exception as users may be unable\n # to fix inspect.stack() errors.\n warnings.warn(\"Error detecting parent module: %r\" % e)\n\n return deprecated_cls\n\n\ndef _clspath(cls, forced=None):\n if forced is not None:\n return forced\n return '{}.{}'.format(cls.__module__, cls.__name__)\n\n\nDEPRECATION_RULES = [\n ('scrapy.contrib_exp.downloadermiddleware.decompression.', 'scrapy.downloadermiddlewares.decompression.'),\n ('scrapy.contrib_exp.iterators.', 'scrapy.utils.iterators.'),\n ('scrapy.contrib.downloadermiddleware.', 'scrapy.downloadermiddlewares.'),\n ('scrapy.contrib.exporter.', 'scrapy.exporters.'),\n ('scrapy.contrib.linkextractors.', 'scrapy.linkextractors.'),\n ('scrapy.contrib.loader.processor.', 'scrapy.loader.processors.'),\n ('scrapy.contrib.loader.', 'scrapy.loader.'),\n ('scrapy.contrib.pipeline.', 'scrapy.pipelines.'),\n ('scrapy.contrib.spidermiddleware.', 'scrapy.spidermiddlewares.'),\n ('scrapy.contrib.spiders.', 'scrapy.spiders.'),\n ('scrapy.contrib.', 'scrapy.extensions.'),\n ('scrapy.command.', 'scrapy.commands.'),\n ('scrapy.dupefilter.', 'scrapy.dupefilters.'),\n ('scrapy.linkextractor.', 'scrapy.linkextractors.'),\n ('scrapy.spider.', 'scrapy.spiders.'),\n ('scrapy.squeue.', 'scrapy.squeues.'),\n ('scrapy.statscol.', 'scrapy.statscollectors.'),\n ('scrapy.utils.decorator.', 'scrapy.utils.decorators.'),\n ('scrapy.spidermanager.SpiderManager', 'scrapy.spiderloader.SpiderLoader'),\n]\n\n\ndef update_classpath(path):\n \"\"\"Update a deprecated path from an object with its new location\"\"\"\n for prefix, replacement in DEPRECATION_RULES:\n if path.startswith(prefix):\n new_path = path.replace(prefix, replacement, 1)\n warnings.warn(\"`{}` class is deprecated, use `{}` instead\".format(path, new_path),\n ScrapyDeprecationWarning)\n return new_path\n return path\n", "path": "scrapy/utils/deprecate.py"}, {"content": "import os\nimport sys\nfrom operator import itemgetter\n\nimport six\nfrom six.moves.configparser import SafeConfigParser\n\nfrom scrapy.utils.deprecate import update_classpath\n\n\ndef build_component_list(base, custom, convert=update_classpath):\n \"\"\"Compose a component list based on a custom and base dict of components\n (typically middlewares or extensions), unless custom is already a list, in\n which case it's returned.\n \"\"\"\n\n def _check_components(complist):\n if len({convert(c) for c in complist}) != len(complist):\n raise ValueError('Some paths in {!r} convert to the same object, '\n 'please update your settings'.format(complist))\n\n if isinstance(custom, (list, tuple)):\n _check_components(custom)\n return type(custom)(convert(c) for c in custom)\n\n def _map_keys(compdict):\n _check_components(compdict)\n return {convert(k): v for k, v in six.iteritems(compdict)}\n\n compdict = _map_keys(base)\n compdict.update(_map_keys(custom))\n items = (x for x in six.iteritems(compdict) if x[1] is not None)\n return [x[0] for x in sorted(items, key=itemgetter(1))]\n\n\ndef arglist_to_dict(arglist):\n \"\"\"Convert a list of arguments like ['arg1=val1', 'arg2=val2', ...] 
to a\n dict\n \"\"\"\n return dict(x.split('=', 1) for x in arglist)\n\n\ndef closest_scrapy_cfg(path='.', prevpath=None):\n \"\"\"Return the path to the closest scrapy.cfg file by traversing the current\n directory and its parents\n \"\"\"\n if path == prevpath:\n return ''\n path = os.path.abspath(path)\n cfgfile = os.path.join(path, 'scrapy.cfg')\n if os.path.exists(cfgfile):\n return cfgfile\n return closest_scrapy_cfg(os.path.dirname(path), path)\n\n\ndef init_env(project='default', set_syspath=True):\n \"\"\"Initialize environment to use command-line tool from inside a project\n dir. This sets the Scrapy settings module and modifies the Python path to\n be able to locate the project module.\n \"\"\"\n cfg = get_config()\n if cfg.has_option('settings', project):\n os.environ['SCRAPY_SETTINGS_MODULE'] = cfg.get('settings', project)\n closest = closest_scrapy_cfg()\n if closest:\n projdir = os.path.dirname(closest)\n if set_syspath and projdir not in sys.path:\n sys.path.append(projdir)\n\n\ndef get_config(use_closest=True):\n \"\"\"Get Scrapy config file as a SafeConfigParser\"\"\"\n sources = get_sources(use_closest)\n cfg = SafeConfigParser()\n cfg.read(sources)\n return cfg\n\n\ndef get_sources(use_closest=True):\n xdg_config_home = os.environ.get('XDG_CONFIG_HOME') or \\\n os.path.expanduser('~/.config')\n sources = ['/etc/scrapy.cfg', r'c:\\scrapy\\scrapy.cfg',\n xdg_config_home + '/scrapy.cfg',\n os.path.expanduser('~/.scrapy.cfg')]\n if use_closest:\n sources.append(closest_scrapy_cfg())\n return sources\n", "path": "scrapy/utils/conf.py"}]} | 2,594 | 836 |
gh_patches_debug_24923 | rasdani/github-patches | git_diff | WeblateOrg__weblate-2306 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Vote for suggestion when adding duplicate
Currently duplicate suggestions are ignored, but it would be better if a duplicate were accepted as an upvote when voting is enabled for the given translation.
See also https://github.com/WeblateOrg/weblate/issues/1348#issuecomment-280706768
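One way the short-circuit in `SuggestionManager.add()` could look (a sketch of the idea, not the final patch):

```python
# Sketch: if an identical suggestion already exists, count the duplicate as an
# upvote from the submitter (when voting is enabled) instead of dropping it.
def add(self, unit, target, request, vote=False):
    user = request.user
    try:
        same = self.get(
            target=target,
            content_hash=unit.content_hash,
            language=unit.translation.language,
            project=unit.translation.component.project,
        )
    except self.model.DoesNotExist:
        same = None

    if same is not None:
        if vote and same.user != user:
            same.add_vote(unit.translation, request, True)   # duplicate counts as +1
        return False

    # ... fall through to the existing suggestion-creation code
```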
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `weblate/trans/models/suggestion.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright © 2012 - 2018 Michal Čihař <[email protected]>
4 #
5 # This file is part of Weblate <https://weblate.org/>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with this program. If not, see <https://www.gnu.org/licenses/>.
19 #
20
21 from __future__ import unicode_literals
22
23 from django.conf import settings
24 from django.db import models, transaction
25 from django.db.models import Count
26 from django.utils.encoding import python_2_unicode_compatible
27 from django.utils.translation import ugettext as _
28
29 from weblate.lang.models import Language
30 from weblate.trans.models.change import Change
31 from weblate.utils.unitdata import UnitData
32 from weblate.trans.mixins import UserDisplayMixin
33 from weblate.utils import messages
34 from weblate.utils.antispam import report_spam
35 from weblate.utils.fields import JSONField
36 from weblate.utils.state import STATE_TRANSLATED
37 from weblate.utils.request import get_ip_address
38
39
40 class SuggestionManager(models.Manager):
41 # pylint: disable=no-init
42
43 def add(self, unit, target, request, vote=False):
44 """Create new suggestion for this unit."""
45 user = request.user
46
47 same = self.filter(
48 target=target,
49 content_hash=unit.content_hash,
50 language=unit.translation.language,
51 project=unit.translation.component.project,
52 )
53
54 if same.exists() or (unit.target == target and not unit.fuzzy):
55 return False
56
57 # Create the suggestion
58 suggestion = self.create(
59 target=target,
60 content_hash=unit.content_hash,
61 language=unit.translation.language,
62 project=unit.translation.component.project,
63 user=user,
64 userdetails={
65 'address': get_ip_address(request),
66 'agent': request.META.get('HTTP_USER_AGENT', ''),
67 },
68 )
69
70 # Record in change
71 for aunit in suggestion.related_units:
72 Change.objects.create(
73 unit=aunit,
74 action=Change.ACTION_SUGGESTION,
75 user=user,
76 target=target,
77 author=user
78 )
79
80 # Add unit vote
81 if vote:
82 suggestion.add_vote(
83 unit.translation,
84 request,
85 True
86 )
87
88 # Notify subscribed users
89 from weblate.accounts.notifications import notify_new_suggestion
90 notify_new_suggestion(unit, suggestion, user)
91
92 # Update suggestion stats
93 if user is not None:
94 user.profile.suggested += 1
95 user.profile.save()
96
97 return True
98
99 def copy(self, project):
100 """Copy suggestions to new project
101
102 This is used on moving component to other project and ensures nothing
103 is lost. We don't actually look where the suggestion belongs as it
104 would make the operation really expensive and it should be done in the
105 cleanup cron job.
106 """
107 for suggestion in self.all():
108 Suggestion.objects.create(
109 project=project,
110 target=suggestion.target,
111 content_hash=suggestion.content_hash,
112 user=suggestion.user,
113 language=suggestion.language,
114 )
115
116
117 @python_2_unicode_compatible
118 class Suggestion(UnitData, UserDisplayMixin):
119 target = models.TextField()
120 user = models.ForeignKey(
121 settings.AUTH_USER_MODEL, null=True, blank=True,
122 on_delete=models.deletion.CASCADE
123 )
124 userdetails = JSONField()
125 language = models.ForeignKey(
126 Language, on_delete=models.deletion.CASCADE
127 )
128 timestamp = models.DateTimeField(auto_now_add=True)
129
130 votes = models.ManyToManyField(
131 settings.AUTH_USER_MODEL,
132 through='Vote',
133 related_name='user_votes'
134 )
135
136 objects = SuggestionManager()
137
138 class Meta(object):
139 app_label = 'trans'
140 ordering = ['-timestamp']
141 index_together = [
142 ('project', 'language', 'content_hash'),
143 ]
144
145 def __str__(self):
146 return 'suggestion for {0} by {1}'.format(
147 self.content_hash,
148 self.user.username if self.user else 'unknown',
149 )
150
151 @transaction.atomic
152 def accept(self, translation, request, permission='suggestion.accept'):
153 allunits = translation.unit_set.select_for_update().filter(
154 content_hash=self.content_hash,
155 )
156 failure = False
157 for unit in allunits:
158 if not request.user.has_perm(permission, unit):
159 failure = True
160 messages.error(request, _('Failed to accept suggestion!'))
161 continue
162
163 # Skip if there is no change
164 if unit.target == self.target and unit.state >= STATE_TRANSLATED:
165 continue
166
167 unit.target = self.target
168 unit.state = STATE_TRANSLATED
169 unit.save_backend(
170 request, change_action=Change.ACTION_ACCEPT, user=self.user
171 )
172
173 if not failure:
174 self.delete()
175
176 def delete_log(self, user, change=Change.ACTION_SUGGESTION_DELETE,
177 is_spam=False):
178 """Delete with logging change"""
179 if is_spam and self.userdetails:
180 report_spam(
181 self.userdetails['address'],
182 self.userdetails['agent'],
183 self.target
184 )
185 for unit in self.related_units:
186 Change.objects.create(
187 unit=unit,
188 action=change,
189 user=user,
190 target=self.target,
191 author=user
192 )
193 self.delete()
194
195 def get_num_votes(self):
196 """Return number of votes."""
197 votes = Vote.objects.filter(suggestion=self)
198 positive = votes.filter(positive=True).aggregate(Count('id'))
199 negative = votes.filter(positive=False).aggregate(Count('id'))
200 return positive['id__count'] - negative['id__count']
201
202 def add_vote(self, translation, request, positive):
203 """Add (or updates) vote for a suggestion."""
204 if not request.user.is_authenticated:
205 return
206
207 vote, created = Vote.objects.get_or_create(
208 suggestion=self,
209 user=request.user,
210 defaults={'positive': positive}
211 )
212 if not created or vote.positive != positive:
213 vote.positive = positive
214 vote.save()
215
216 # Automatic accepting
217 required_votes = translation.component.suggestion_autoaccept
218 if required_votes and self.get_num_votes() >= required_votes:
219 self.accept(translation, request, 'suggestion.vote')
220
221
222 @python_2_unicode_compatible
223 class Vote(models.Model):
224 """Suggestion voting."""
225 suggestion = models.ForeignKey(
226 Suggestion, on_delete=models.deletion.CASCADE
227 )
228 user = models.ForeignKey(
229 settings.AUTH_USER_MODEL, on_delete=models.deletion.CASCADE
230 )
231 positive = models.BooleanField(default=True)
232
233 class Meta(object):
234 unique_together = ('suggestion', 'user')
235 app_label = 'trans'
236
237 def __str__(self):
238 if self.positive:
239 vote = '+1'
240 else:
241 vote = '-1'
242 return '{0} for {1} by {2}'.format(
243 vote,
244 self.suggestion,
245 self.user.username,
246 )
247
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/weblate/trans/models/suggestion.py b/weblate/trans/models/suggestion.py
--- a/weblate/trans/models/suggestion.py
+++ b/weblate/trans/models/suggestion.py
@@ -35,6 +35,7 @@
from weblate.utils.fields import JSONField
from weblate.utils.state import STATE_TRANSLATED
from weblate.utils.request import get_ip_address
+from django.core.exceptions import ObjectDoesNotExist
class SuggestionManager(models.Manager):
@@ -44,15 +45,22 @@
"""Create new suggestion for this unit."""
user = request.user
- same = self.filter(
- target=target,
- content_hash=unit.content_hash,
- language=unit.translation.language,
- project=unit.translation.component.project,
- )
+ try:
+ same = self.get(
+ target=target,
+ content_hash=unit.content_hash,
+ language=unit.translation.language,
+ project=unit.translation.component.project,
+ )
+
+ if same.user == user or not vote:
+ return False
+ else:
+ same.add_vote(unit.translation, request, True)
+ return False
- if same.exists() or (unit.target == target and not unit.fuzzy):
- return False
+ except ObjectDoesNotExist:
+ pass
# Create the suggestion
suggestion = self.create(
| {"golden_diff": "diff --git a/weblate/trans/models/suggestion.py b/weblate/trans/models/suggestion.py\n--- a/weblate/trans/models/suggestion.py\n+++ b/weblate/trans/models/suggestion.py\n@@ -35,6 +35,7 @@\n from weblate.utils.fields import JSONField\n from weblate.utils.state import STATE_TRANSLATED\n from weblate.utils.request import get_ip_address\n+from django.core.exceptions import ObjectDoesNotExist\n \n \n class SuggestionManager(models.Manager):\n@@ -44,15 +45,22 @@\n \"\"\"Create new suggestion for this unit.\"\"\"\n user = request.user\n \n- same = self.filter(\n- target=target,\n- content_hash=unit.content_hash,\n- language=unit.translation.language,\n- project=unit.translation.component.project,\n- )\n+ try:\n+ same = self.get(\n+ target=target,\n+ content_hash=unit.content_hash,\n+ language=unit.translation.language,\n+ project=unit.translation.component.project,\n+ )\n+\n+ if same.user == user or not vote:\n+ return False\n+ else:\n+ same.add_vote(unit.translation, request, True)\n+ return False\n \n- if same.exists() or (unit.target == target and not unit.fuzzy):\n- return False\n+ except ObjectDoesNotExist:\n+ pass\n \n # Create the suggestion\n suggestion = self.create(\n", "issue": "Vote for suggestion when adding duplicate\nCurrently duplicate suggestions are ignored, but it would be better if it would be accepted as upvote in case voting is enabled for given translation.\r\n\r\nSee also https://github.com/WeblateOrg/weblate/issues/1348#issuecomment-280706768\r\n\r\n<bountysource-plugin>\r\n\r\n---\r\nWant to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/42236397-vote-for-suggestion-when-adding-duplicate?utm_campaign=plugin&utm_content=tracker%2F253393&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F253393&utm_medium=issues&utm_source=github).\r\n</bountysource-plugin>\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright \u00a9 2012 - 2018 Michal \u010ciha\u0159 <[email protected]>\n#\n# This file is part of Weblate <https://weblate.org/>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <https://www.gnu.org/licenses/>.\n#\n\nfrom __future__ import unicode_literals\n\nfrom django.conf import settings\nfrom django.db import models, transaction\nfrom django.db.models import Count\nfrom django.utils.encoding import python_2_unicode_compatible\nfrom django.utils.translation import ugettext as _\n\nfrom weblate.lang.models import Language\nfrom weblate.trans.models.change import Change\nfrom weblate.utils.unitdata import UnitData\nfrom weblate.trans.mixins import UserDisplayMixin\nfrom weblate.utils import messages\nfrom weblate.utils.antispam import report_spam\nfrom weblate.utils.fields import JSONField\nfrom weblate.utils.state import STATE_TRANSLATED\nfrom weblate.utils.request import get_ip_address\n\n\nclass SuggestionManager(models.Manager):\n # pylint: disable=no-init\n\n def add(self, unit, target, request, vote=False):\n \"\"\"Create new suggestion for this unit.\"\"\"\n user = request.user\n\n same = self.filter(\n target=target,\n content_hash=unit.content_hash,\n language=unit.translation.language,\n project=unit.translation.component.project,\n )\n\n if same.exists() or (unit.target == target and not unit.fuzzy):\n return False\n\n # Create the suggestion\n suggestion = self.create(\n target=target,\n content_hash=unit.content_hash,\n language=unit.translation.language,\n project=unit.translation.component.project,\n user=user,\n userdetails={\n 'address': get_ip_address(request),\n 'agent': request.META.get('HTTP_USER_AGENT', ''),\n },\n )\n\n # Record in change\n for aunit in suggestion.related_units:\n Change.objects.create(\n unit=aunit,\n action=Change.ACTION_SUGGESTION,\n user=user,\n target=target,\n author=user\n )\n\n # Add unit vote\n if vote:\n suggestion.add_vote(\n unit.translation,\n request,\n True\n )\n\n # Notify subscribed users\n from weblate.accounts.notifications import notify_new_suggestion\n notify_new_suggestion(unit, suggestion, user)\n\n # Update suggestion stats\n if user is not None:\n user.profile.suggested += 1\n user.profile.save()\n\n return True\n\n def copy(self, project):\n \"\"\"Copy suggestions to new project\n\n This is used on moving component to other project and ensures nothing\n is lost. 
We don't actually look where the suggestion belongs as it\n would make the operation really expensive and it should be done in the\n cleanup cron job.\n \"\"\"\n for suggestion in self.all():\n Suggestion.objects.create(\n project=project,\n target=suggestion.target,\n content_hash=suggestion.content_hash,\n user=suggestion.user,\n language=suggestion.language,\n )\n\n\n@python_2_unicode_compatible\nclass Suggestion(UnitData, UserDisplayMixin):\n target = models.TextField()\n user = models.ForeignKey(\n settings.AUTH_USER_MODEL, null=True, blank=True,\n on_delete=models.deletion.CASCADE\n )\n userdetails = JSONField()\n language = models.ForeignKey(\n Language, on_delete=models.deletion.CASCADE\n )\n timestamp = models.DateTimeField(auto_now_add=True)\n\n votes = models.ManyToManyField(\n settings.AUTH_USER_MODEL,\n through='Vote',\n related_name='user_votes'\n )\n\n objects = SuggestionManager()\n\n class Meta(object):\n app_label = 'trans'\n ordering = ['-timestamp']\n index_together = [\n ('project', 'language', 'content_hash'),\n ]\n\n def __str__(self):\n return 'suggestion for {0} by {1}'.format(\n self.content_hash,\n self.user.username if self.user else 'unknown',\n )\n\n @transaction.atomic\n def accept(self, translation, request, permission='suggestion.accept'):\n allunits = translation.unit_set.select_for_update().filter(\n content_hash=self.content_hash,\n )\n failure = False\n for unit in allunits:\n if not request.user.has_perm(permission, unit):\n failure = True\n messages.error(request, _('Failed to accept suggestion!'))\n continue\n\n # Skip if there is no change\n if unit.target == self.target and unit.state >= STATE_TRANSLATED:\n continue\n\n unit.target = self.target\n unit.state = STATE_TRANSLATED\n unit.save_backend(\n request, change_action=Change.ACTION_ACCEPT, user=self.user\n )\n\n if not failure:\n self.delete()\n\n def delete_log(self, user, change=Change.ACTION_SUGGESTION_DELETE,\n is_spam=False):\n \"\"\"Delete with logging change\"\"\"\n if is_spam and self.userdetails:\n report_spam(\n self.userdetails['address'],\n self.userdetails['agent'],\n self.target\n )\n for unit in self.related_units:\n Change.objects.create(\n unit=unit,\n action=change,\n user=user,\n target=self.target,\n author=user\n )\n self.delete()\n\n def get_num_votes(self):\n \"\"\"Return number of votes.\"\"\"\n votes = Vote.objects.filter(suggestion=self)\n positive = votes.filter(positive=True).aggregate(Count('id'))\n negative = votes.filter(positive=False).aggregate(Count('id'))\n return positive['id__count'] - negative['id__count']\n\n def add_vote(self, translation, request, positive):\n \"\"\"Add (or updates) vote for a suggestion.\"\"\"\n if not request.user.is_authenticated:\n return\n\n vote, created = Vote.objects.get_or_create(\n suggestion=self,\n user=request.user,\n defaults={'positive': positive}\n )\n if not created or vote.positive != positive:\n vote.positive = positive\n vote.save()\n\n # Automatic accepting\n required_votes = translation.component.suggestion_autoaccept\n if required_votes and self.get_num_votes() >= required_votes:\n self.accept(translation, request, 'suggestion.vote')\n\n\n@python_2_unicode_compatible\nclass Vote(models.Model):\n \"\"\"Suggestion voting.\"\"\"\n suggestion = models.ForeignKey(\n Suggestion, on_delete=models.deletion.CASCADE\n )\n user = models.ForeignKey(\n settings.AUTH_USER_MODEL, on_delete=models.deletion.CASCADE\n )\n positive = models.BooleanField(default=True)\n\n class Meta(object):\n unique_together = ('suggestion', 
'user')\n app_label = 'trans'\n\n def __str__(self):\n if self.positive:\n vote = '+1'\n else:\n vote = '-1'\n return '{0} for {1} by {2}'.format(\n vote,\n self.suggestion,\n self.user.username,\n )\n", "path": "weblate/trans/models/suggestion.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright \u00a9 2012 - 2018 Michal \u010ciha\u0159 <[email protected]>\n#\n# This file is part of Weblate <https://weblate.org/>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <https://www.gnu.org/licenses/>.\n#\n\nfrom __future__ import unicode_literals\n\nfrom django.conf import settings\nfrom django.db import models, transaction\nfrom django.db.models import Count\nfrom django.utils.encoding import python_2_unicode_compatible\nfrom django.utils.translation import ugettext as _\n\nfrom weblate.lang.models import Language\nfrom weblate.trans.models.change import Change\nfrom weblate.utils.unitdata import UnitData\nfrom weblate.trans.mixins import UserDisplayMixin\nfrom weblate.utils import messages\nfrom weblate.utils.antispam import report_spam\nfrom weblate.utils.fields import JSONField\nfrom weblate.utils.state import STATE_TRANSLATED\nfrom weblate.utils.request import get_ip_address\nfrom django.core.exceptions import ObjectDoesNotExist\n\n\nclass SuggestionManager(models.Manager):\n # pylint: disable=no-init\n\n def add(self, unit, target, request, vote=False):\n \"\"\"Create new suggestion for this unit.\"\"\"\n user = request.user\n\n try:\n same = self.get(\n target=target,\n content_hash=unit.content_hash,\n language=unit.translation.language,\n project=unit.translation.component.project,\n )\n\n if same.user == user or not vote:\n return False\n else:\n same.add_vote(unit.translation, request, True)\n return False\n\n except ObjectDoesNotExist:\n pass\n\n # Create the suggestion\n suggestion = self.create(\n target=target,\n content_hash=unit.content_hash,\n language=unit.translation.language,\n project=unit.translation.component.project,\n user=user,\n userdetails={\n 'address': get_ip_address(request),\n 'agent': request.META.get('HTTP_USER_AGENT', ''),\n },\n )\n\n # Record in change\n for aunit in suggestion.related_units:\n Change.objects.create(\n unit=aunit,\n action=Change.ACTION_SUGGESTION,\n user=user,\n target=target,\n author=user\n )\n\n # Add unit vote\n if vote:\n suggestion.add_vote(\n unit.translation,\n request,\n True\n )\n\n # Notify subscribed users\n from weblate.accounts.notifications import notify_new_suggestion\n notify_new_suggestion(unit, suggestion, user)\n\n # Update suggestion stats\n if user is not None:\n user.profile.suggested += 1\n user.profile.save()\n\n return True\n\n def copy(self, project):\n \"\"\"Copy suggestions to new project\n\n This is used on moving component to other project and ensures nothing\n is lost. 
We don't actually look where the suggestion belongs as it\n would make the operation really expensive and it should be done in the\n cleanup cron job.\n \"\"\"\n for suggestion in self.all():\n Suggestion.objects.create(\n project=project,\n target=suggestion.target,\n content_hash=suggestion.content_hash,\n user=suggestion.user,\n language=suggestion.language,\n )\n\n\n@python_2_unicode_compatible\nclass Suggestion(UnitData, UserDisplayMixin):\n target = models.TextField()\n user = models.ForeignKey(\n settings.AUTH_USER_MODEL, null=True, blank=True,\n on_delete=models.deletion.CASCADE\n )\n userdetails = JSONField()\n language = models.ForeignKey(\n Language, on_delete=models.deletion.CASCADE\n )\n timestamp = models.DateTimeField(auto_now_add=True)\n\n votes = models.ManyToManyField(\n settings.AUTH_USER_MODEL,\n through='Vote',\n related_name='user_votes'\n )\n\n objects = SuggestionManager()\n\n class Meta(object):\n app_label = 'trans'\n ordering = ['-timestamp']\n index_together = [\n ('project', 'language', 'content_hash'),\n ]\n\n def __str__(self):\n return 'suggestion for {0} by {1}'.format(\n self.content_hash,\n self.user.username if self.user else 'unknown',\n )\n\n @transaction.atomic\n def accept(self, translation, request, permission='suggestion.accept'):\n allunits = translation.unit_set.select_for_update().filter(\n content_hash=self.content_hash,\n )\n failure = False\n for unit in allunits:\n if not request.user.has_perm(permission, unit):\n failure = True\n messages.error(request, _('Failed to accept suggestion!'))\n continue\n\n # Skip if there is no change\n if unit.target == self.target and unit.state >= STATE_TRANSLATED:\n continue\n\n unit.target = self.target\n unit.state = STATE_TRANSLATED\n unit.save_backend(\n request, change_action=Change.ACTION_ACCEPT, user=self.user\n )\n\n if not failure:\n self.delete()\n\n def delete_log(self, user, change=Change.ACTION_SUGGESTION_DELETE,\n is_spam=False):\n \"\"\"Delete with logging change\"\"\"\n if is_spam and self.userdetails:\n report_spam(\n self.userdetails['address'],\n self.userdetails['agent'],\n self.target\n )\n for unit in self.related_units:\n Change.objects.create(\n unit=unit,\n action=change,\n user=user,\n target=self.target,\n author=user\n )\n self.delete()\n\n def get_num_votes(self):\n \"\"\"Return number of votes.\"\"\"\n votes = Vote.objects.filter(suggestion=self)\n positive = votes.filter(positive=True).aggregate(Count('id'))\n negative = votes.filter(positive=False).aggregate(Count('id'))\n return positive['id__count'] - negative['id__count']\n\n def add_vote(self, translation, request, positive):\n \"\"\"Add (or updates) vote for a suggestion.\"\"\"\n if not request.user.is_authenticated:\n return\n\n vote, created = Vote.objects.get_or_create(\n suggestion=self,\n user=request.user,\n defaults={'positive': positive}\n )\n if not created or vote.positive != positive:\n vote.positive = positive\n vote.save()\n\n # Automatic accepting\n required_votes = translation.component.suggestion_autoaccept\n if required_votes and self.get_num_votes() >= required_votes:\n self.accept(translation, request, 'suggestion.vote')\n\n\n@python_2_unicode_compatible\nclass Vote(models.Model):\n \"\"\"Suggestion voting.\"\"\"\n suggestion = models.ForeignKey(\n Suggestion, on_delete=models.deletion.CASCADE\n )\n user = models.ForeignKey(\n settings.AUTH_USER_MODEL, on_delete=models.deletion.CASCADE\n )\n positive = models.BooleanField(default=True)\n\n class Meta(object):\n unique_together = ('suggestion', 
'user')\n app_label = 'trans'\n\n def __str__(self):\n if self.positive:\n vote = '+1'\n else:\n vote = '-1'\n return '{0} for {1} by {2}'.format(\n vote,\n self.suggestion,\n self.user.username,\n )\n", "path": "weblate/trans/models/suggestion.py"}]} | 2,705 | 305 |
gh_patches_debug_25841 | rasdani/github-patches | git_diff | saleor__saleor-2825 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Turn Order.paymentStatus field into an enum
Currently `Order.status` is an `OrderStatus` enum but `Order.paymentStatus` is a `String`.
We should make both enums so clients can know all possible values up-front.
--- END ISSUE ---
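
A minimal sketch of the kind of change the issue is asking for, assuming the payment backend exposes its status choices as `payments.PaymentStatus.CHOICES` (a list of `(code, name)` pairs, as the `django-payments` package used by Saleor does); the `Order` type here is a trimmed-down illustration with only the relevant field:

```python
import graphene
from payments import PaymentStatus

# Build a GraphQL enum from the payment library's status choices so clients can
# introspect every possible value instead of receiving an opaque String.
PaymentStatusEnum = graphene.Enum(
    'PaymentStatusEnum',
    [(code.upper(), code) for code, name in PaymentStatus.CHOICES])


class Order(graphene.ObjectType):
    # Illustrative only; the real type lives in saleor/graphql/order/types.py.
    payment_status = PaymentStatusEnum(description='Internal payment status.')
```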
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/graphql/order/types.py`
Content:
```
1 import graphene
2 from graphene import relay
3
4 from ...order import OrderEvents, models
5 from ..account.types import User
6 from ..core.types.common import CountableDjangoObjectType
7 from ..core.types.money import Money, TaxedMoney
8 from decimal import Decimal
9
10 OrderEventsEnum = graphene.Enum.from_enum(OrderEvents)
11
12
13 class OrderEvent(CountableDjangoObjectType):
14 date = graphene.types.datetime.DateTime(
15 description='Date when event happened at in ISO 8601 format.')
16 type = OrderEventsEnum(description='Order event type')
17 user = graphene.Field(
18 User, id=graphene.Argument(graphene.ID),
19 description='User who performed the action.')
20 message = graphene.String(
21 description='Content of a note added to the order.')
22 email = graphene.String(description='Email of the customer')
23 email_type = graphene.String(
24 description='Type of an email sent to the customer')
25 amount = graphene.Float(description='Amount of money.')
26 quantity = graphene.Int(description='Number of items.')
27 composed_id = graphene.String(
28 description='Composed id of the Fulfillment.')
29
30 class Meta:
31 description = 'History log of the order.'
32 model = models.OrderEvent
33 interfaces = [relay.Node]
34 exclude_fields = ['order', 'parameters']
35
36 def resolve_email(self, info):
37 return self.parameters.get('email', None)
38
39 def resolve_email_type(self, info):
40 return self.parameters.get('email_type', None)
41
42 def resolve_amount(self, info):
43 amount = self.parameters.get('amount', None)
44 return Decimal(amount) if amount else None
45
46 def resolve_quantity(self, info):
47 quantity = self.parameters.get('quantity', None)
48 return int(quantity) if quantity else None
49
50 def resolve_message(self, info):
51 return self.parameters.get('message', None)
52
53 def resolve_composed_id(self, info):
54 return self.parameters.get('composed_id', None)
55
56
57 class Fulfillment(CountableDjangoObjectType):
58 status_display = graphene.String(
59 description='User-friendly fulfillment status.')
60
61 class Meta:
62 description = 'Represents order fulfillment.'
63 interfaces = [relay.Node]
64 model = models.Fulfillment
65 exclude_fields = ['order']
66
67 def resolve_status_display(self, info):
68 return self.get_status_display()
69
70
71 class FulfillmentLine(CountableDjangoObjectType):
72 class Meta:
73 description = 'Represents line of the fulfillment.'
74 interfaces = [relay.Node]
75 model = models.FulfillmentLine
76 exclude_fields = ['fulfillment']
77
78
79 class Order(CountableDjangoObjectType):
80 fulfillments = graphene.List(
81 Fulfillment,
82 required=True,
83 description='List of shipments for the order.')
84 is_paid = graphene.Boolean(
85 description='Informs if an order is fully paid.')
86 number = graphene.String(description='User-friendly number of an order.')
87 payment_status = graphene.String(description='Internal payment status.')
88 payment_status_display = graphene.String(
89 description='User-friendly payment status.')
90 subtotal = graphene.Field(
91 TaxedMoney,
92 description='The sum of line prices not including shipping.')
93 status_display = graphene.String(description='User-friendly order status.')
94 total_authorized = graphene.Field(
95 Money, description='Amount authorized for the order.')
96 total_captured = graphene.Field(
97 Money, description='Amount captured by payment.')
98 events = graphene.List(
99 OrderEvent,
100 description='List of events associated with the order.')
101 user_email = graphene.String(
102 required=False, description='Email address of the customer.')
103
104 class Meta:
105 description = 'Represents an order in the shop.'
106 interfaces = [relay.Node]
107 model = models.Order
108 exclude_fields = [
109 'shipping_price_gross', 'shipping_price_net', 'total_gross',
110 'total_net']
111
112 @staticmethod
113 def resolve_subtotal(obj, info):
114 return obj.get_subtotal()
115
116 @staticmethod
117 def resolve_total_authorized(obj, info):
118 payment = obj.get_last_payment()
119 if payment:
120 return payment.get_total_price().gross
121
122 @staticmethod
123 def resolve_total_captured(obj, info):
124 payment = obj.get_last_payment()
125 if payment:
126 return payment.get_captured_price()
127
128 @staticmethod
129 def resolve_fulfillments(obj, info):
130 return obj.fulfillments.all()
131
132 @staticmethod
133 def resolve_events(obj, info):
134 return obj.events.all()
135
136 @staticmethod
137 def resolve_is_paid(obj, info):
138 return obj.is_fully_paid()
139
140 @staticmethod
141 def resolve_number(obj, info):
142 return str(obj.pk)
143
144 @staticmethod
145 def resolve_payment_status(obj, info):
146 return obj.get_last_payment_status()
147
148 @staticmethod
149 def resolve_payment_status_display(obj, info):
150 return obj.get_last_payment_status_display()
151
152 @staticmethod
153 def resolve_status_display(obj, info):
154 return obj.get_status_display()
155
156 @staticmethod
157 def resolve_user_email(obj, info):
158 if obj.user_email:
159 return obj.user_email
160 if obj.user_id:
161 return obj.user.email
162 return None
163
164
165 class OrderLine(CountableDjangoObjectType):
166 class Meta:
167 description = 'Represents order line of particular order.'
168 model = models.OrderLine
169 interfaces = [relay.Node]
170 exclude_fields = [
171 'order', 'unit_price_gross', 'unit_price_net', 'variant']
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/graphql/order/types.py b/saleor/graphql/order/types.py
--- a/saleor/graphql/order/types.py
+++ b/saleor/graphql/order/types.py
@@ -1,13 +1,18 @@
+from decimal import Decimal
+
import graphene
from graphene import relay
+from payments import PaymentStatus
from ...order import OrderEvents, models
from ..account.types import User
from ..core.types.common import CountableDjangoObjectType
from ..core.types.money import Money, TaxedMoney
-from decimal import Decimal
OrderEventsEnum = graphene.Enum.from_enum(OrderEvents)
+PaymentStatusEnum = graphene.Enum(
+ 'PaymentStatusEnum',
+ [(code.upper(), code) for code, name in PaymentStatus.CHOICES])
class OrderEvent(CountableDjangoObjectType):
@@ -84,7 +89,7 @@
is_paid = graphene.Boolean(
description='Informs if an order is fully paid.')
number = graphene.String(description='User-friendly number of an order.')
- payment_status = graphene.String(description='Internal payment status.')
+ payment_status = PaymentStatusEnum(description='Internal payment status.')
payment_status_display = graphene.String(
description='User-friendly payment status.')
subtotal = graphene.Field(
| {"golden_diff": "diff --git a/saleor/graphql/order/types.py b/saleor/graphql/order/types.py\n--- a/saleor/graphql/order/types.py\n+++ b/saleor/graphql/order/types.py\n@@ -1,13 +1,18 @@\n+from decimal import Decimal\n+\n import graphene\n from graphene import relay\n+from payments import PaymentStatus\n \n from ...order import OrderEvents, models\n from ..account.types import User\n from ..core.types.common import CountableDjangoObjectType\n from ..core.types.money import Money, TaxedMoney\n-from decimal import Decimal\n \n OrderEventsEnum = graphene.Enum.from_enum(OrderEvents)\n+PaymentStatusEnum = graphene.Enum(\n+ 'PaymentStatusEnum',\n+ [(code.upper(), code) for code, name in PaymentStatus.CHOICES])\n \n \n class OrderEvent(CountableDjangoObjectType):\n@@ -84,7 +89,7 @@\n is_paid = graphene.Boolean(\n description='Informs if an order is fully paid.')\n number = graphene.String(description='User-friendly number of an order.')\n- payment_status = graphene.String(description='Internal payment status.')\n+ payment_status = PaymentStatusEnum(description='Internal payment status.')\n payment_status_display = graphene.String(\n description='User-friendly payment status.')\n subtotal = graphene.Field(\n", "issue": "Turn Order.paymentStatus field into an enum\nCurrently `Order.status` is an `OrderStatus` enum but `Order.paymentStatus` is a `String`.\r\n\r\nWe should make both enums so clients can know all possible values up-front.\n", "before_files": [{"content": "import graphene\nfrom graphene import relay\n\nfrom ...order import OrderEvents, models\nfrom ..account.types import User\nfrom ..core.types.common import CountableDjangoObjectType\nfrom ..core.types.money import Money, TaxedMoney\nfrom decimal import Decimal\n\nOrderEventsEnum = graphene.Enum.from_enum(OrderEvents)\n\n\nclass OrderEvent(CountableDjangoObjectType):\n date = graphene.types.datetime.DateTime(\n description='Date when event happened at in ISO 8601 format.')\n type = OrderEventsEnum(description='Order event type')\n user = graphene.Field(\n User, id=graphene.Argument(graphene.ID),\n description='User who performed the action.')\n message = graphene.String(\n description='Content of a note added to the order.')\n email = graphene.String(description='Email of the customer')\n email_type = graphene.String(\n description='Type of an email sent to the customer')\n amount = graphene.Float(description='Amount of money.')\n quantity = graphene.Int(description='Number of items.')\n composed_id = graphene.String(\n description='Composed id of the Fulfillment.')\n\n class Meta:\n description = 'History log of the order.'\n model = models.OrderEvent\n interfaces = [relay.Node]\n exclude_fields = ['order', 'parameters']\n\n def resolve_email(self, info):\n return self.parameters.get('email', None)\n\n def resolve_email_type(self, info):\n return self.parameters.get('email_type', None)\n\n def resolve_amount(self, info):\n amount = self.parameters.get('amount', None)\n return Decimal(amount) if amount else None\n\n def resolve_quantity(self, info):\n quantity = self.parameters.get('quantity', None)\n return int(quantity) if quantity else None\n\n def resolve_message(self, info):\n return self.parameters.get('message', None)\n\n def resolve_composed_id(self, info):\n return self.parameters.get('composed_id', None)\n\n\nclass Fulfillment(CountableDjangoObjectType):\n status_display = graphene.String(\n description='User-friendly fulfillment status.')\n\n class Meta:\n description = 'Represents order fulfillment.'\n interfaces = 
[relay.Node]\n model = models.Fulfillment\n exclude_fields = ['order']\n\n def resolve_status_display(self, info):\n return self.get_status_display()\n\n\nclass FulfillmentLine(CountableDjangoObjectType):\n class Meta:\n description = 'Represents line of the fulfillment.'\n interfaces = [relay.Node]\n model = models.FulfillmentLine\n exclude_fields = ['fulfillment']\n\n\nclass Order(CountableDjangoObjectType):\n fulfillments = graphene.List(\n Fulfillment,\n required=True,\n description='List of shipments for the order.')\n is_paid = graphene.Boolean(\n description='Informs if an order is fully paid.')\n number = graphene.String(description='User-friendly number of an order.')\n payment_status = graphene.String(description='Internal payment status.')\n payment_status_display = graphene.String(\n description='User-friendly payment status.')\n subtotal = graphene.Field(\n TaxedMoney,\n description='The sum of line prices not including shipping.')\n status_display = graphene.String(description='User-friendly order status.')\n total_authorized = graphene.Field(\n Money, description='Amount authorized for the order.')\n total_captured = graphene.Field(\n Money, description='Amount captured by payment.')\n events = graphene.List(\n OrderEvent,\n description='List of events associated with the order.')\n user_email = graphene.String(\n required=False, description='Email address of the customer.')\n\n class Meta:\n description = 'Represents an order in the shop.'\n interfaces = [relay.Node]\n model = models.Order\n exclude_fields = [\n 'shipping_price_gross', 'shipping_price_net', 'total_gross',\n 'total_net']\n\n @staticmethod\n def resolve_subtotal(obj, info):\n return obj.get_subtotal()\n\n @staticmethod\n def resolve_total_authorized(obj, info):\n payment = obj.get_last_payment()\n if payment:\n return payment.get_total_price().gross\n\n @staticmethod\n def resolve_total_captured(obj, info):\n payment = obj.get_last_payment()\n if payment:\n return payment.get_captured_price()\n\n @staticmethod\n def resolve_fulfillments(obj, info):\n return obj.fulfillments.all()\n\n @staticmethod\n def resolve_events(obj, info):\n return obj.events.all()\n\n @staticmethod\n def resolve_is_paid(obj, info):\n return obj.is_fully_paid()\n\n @staticmethod\n def resolve_number(obj, info):\n return str(obj.pk)\n\n @staticmethod\n def resolve_payment_status(obj, info):\n return obj.get_last_payment_status()\n\n @staticmethod\n def resolve_payment_status_display(obj, info):\n return obj.get_last_payment_status_display()\n\n @staticmethod\n def resolve_status_display(obj, info):\n return obj.get_status_display()\n\n @staticmethod\n def resolve_user_email(obj, info):\n if obj.user_email:\n return obj.user_email\n if obj.user_id:\n return obj.user.email\n return None\n\n\nclass OrderLine(CountableDjangoObjectType):\n class Meta:\n description = 'Represents order line of particular order.'\n model = models.OrderLine\n interfaces = [relay.Node]\n exclude_fields = [\n 'order', 'unit_price_gross', 'unit_price_net', 'variant']\n", "path": "saleor/graphql/order/types.py"}], "after_files": [{"content": "from decimal import Decimal\n\nimport graphene\nfrom graphene import relay\nfrom payments import PaymentStatus\n\nfrom ...order import OrderEvents, models\nfrom ..account.types import User\nfrom ..core.types.common import CountableDjangoObjectType\nfrom ..core.types.money import Money, TaxedMoney\n\nOrderEventsEnum = graphene.Enum.from_enum(OrderEvents)\nPaymentStatusEnum = graphene.Enum(\n 'PaymentStatusEnum',\n 
[(code.upper(), code) for code, name in PaymentStatus.CHOICES])\n\n\nclass OrderEvent(CountableDjangoObjectType):\n date = graphene.types.datetime.DateTime(\n description='Date when event happened at in ISO 8601 format.')\n type = OrderEventsEnum(description='Order event type')\n user = graphene.Field(\n User, id=graphene.Argument(graphene.ID),\n description='User who performed the action.')\n message = graphene.String(\n description='Content of a note added to the order.')\n email = graphene.String(description='Email of the customer')\n email_type = graphene.String(\n description='Type of an email sent to the customer')\n amount = graphene.Float(description='Amount of money.')\n quantity = graphene.Int(description='Number of items.')\n composed_id = graphene.String(\n description='Composed id of the Fulfillment.')\n\n class Meta:\n description = 'History log of the order.'\n model = models.OrderEvent\n interfaces = [relay.Node]\n exclude_fields = ['order', 'parameters']\n\n def resolve_email(self, info):\n return self.parameters.get('email', None)\n\n def resolve_email_type(self, info):\n return self.parameters.get('email_type', None)\n\n def resolve_amount(self, info):\n amount = self.parameters.get('amount', None)\n return Decimal(amount) if amount else None\n\n def resolve_quantity(self, info):\n quantity = self.parameters.get('quantity', None)\n return int(quantity) if quantity else None\n\n def resolve_message(self, info):\n return self.parameters.get('message', None)\n\n def resolve_composed_id(self, info):\n return self.parameters.get('composed_id', None)\n\n\nclass Fulfillment(CountableDjangoObjectType):\n status_display = graphene.String(\n description='User-friendly fulfillment status.')\n\n class Meta:\n description = 'Represents order fulfillment.'\n interfaces = [relay.Node]\n model = models.Fulfillment\n exclude_fields = ['order']\n\n def resolve_status_display(self, info):\n return self.get_status_display()\n\n\nclass FulfillmentLine(CountableDjangoObjectType):\n class Meta:\n description = 'Represents line of the fulfillment.'\n interfaces = [relay.Node]\n model = models.FulfillmentLine\n exclude_fields = ['fulfillment']\n\n\nclass Order(CountableDjangoObjectType):\n fulfillments = graphene.List(\n Fulfillment,\n required=True,\n description='List of shipments for the order.')\n is_paid = graphene.Boolean(\n description='Informs if an order is fully paid.')\n number = graphene.String(description='User-friendly number of an order.')\n payment_status = PaymentStatusEnum(description='Internal payment status.')\n payment_status_display = graphene.String(\n description='User-friendly payment status.')\n subtotal = graphene.Field(\n TaxedMoney,\n description='The sum of line prices not including shipping.')\n status_display = graphene.String(description='User-friendly order status.')\n total_authorized = graphene.Field(\n Money, description='Amount authorized for the order.')\n total_captured = graphene.Field(\n Money, description='Amount captured by payment.')\n events = graphene.List(\n OrderEvent,\n description='List of events associated with the order.')\n user_email = graphene.String(\n required=False, description='Email address of the customer.')\n\n class Meta:\n description = 'Represents an order in the shop.'\n interfaces = [relay.Node]\n model = models.Order\n exclude_fields = [\n 'shipping_price_gross', 'shipping_price_net', 'total_gross',\n 'total_net']\n\n @staticmethod\n def resolve_subtotal(obj, info):\n return obj.get_subtotal()\n\n @staticmethod\n def 
resolve_total_authorized(obj, info):\n payment = obj.get_last_payment()\n if payment:\n return payment.get_total_price().gross\n\n @staticmethod\n def resolve_total_captured(obj, info):\n payment = obj.get_last_payment()\n if payment:\n return payment.get_captured_price()\n\n @staticmethod\n def resolve_fulfillments(obj, info):\n return obj.fulfillments.all()\n\n @staticmethod\n def resolve_events(obj, info):\n return obj.events.all()\n\n @staticmethod\n def resolve_is_paid(obj, info):\n return obj.is_fully_paid()\n\n @staticmethod\n def resolve_number(obj, info):\n return str(obj.pk)\n\n @staticmethod\n def resolve_payment_status(obj, info):\n return obj.get_last_payment_status()\n\n @staticmethod\n def resolve_payment_status_display(obj, info):\n return obj.get_last_payment_status_display()\n\n @staticmethod\n def resolve_status_display(obj, info):\n return obj.get_status_display()\n\n @staticmethod\n def resolve_user_email(obj, info):\n if obj.user_email:\n return obj.user_email\n if obj.user_id:\n return obj.user.email\n return None\n\n\nclass OrderLine(CountableDjangoObjectType):\n class Meta:\n description = 'Represents order line of particular order.'\n model = models.OrderLine\n interfaces = [relay.Node]\n exclude_fields = [\n 'order', 'unit_price_gross', 'unit_price_net', 'variant']\n", "path": "saleor/graphql/order/types.py"}]} | 1,880 | 269 |
gh_patches_debug_21966 | rasdani/github-patches | git_diff | ranaroussi__yfinance-1283 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ticker data from different timezones are not aligned (bug introduced with #1085)
I am trying to find correlations between tickers in different timezones (one on the New York Stock Exchange and the other on the London Stock Exchange). Due to changes in the timezone logic, the rows returned by `yfinance.download` no longer line up on the same point in time across tickers. Using `ignore_tz=False` fixes this problem. The problem did not exist in version `0.1.77` and earlier, so I think `ignore_tz` should default to `False`, as that behaviour is consistent with the previous minor versions.
--- END ISSUE ---
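
A rough usage sketch of the behaviour being described (the ticker symbols, period and interval are only illustrative examples, not taken from the report):

```python
import yfinance as yf

# Example symbols only: one US listing and one London Stock Exchange listing.
tickers = ["AAPL", "HSBA.L"]

# With the current default (ignore_tz=True) the timezone is stripped from each
# ticker's index before the frames are combined, so a single row can mix quotes
# taken at different wall-clock moments in the two markets.
mixed = yf.download(tickers, period="5d", interval="1h")

# Keeping the timezone information aligns rows on the same instant in time,
# which matches the behaviour of 0.1.77 and earlier.
aligned = yf.download(tickers, period="5d", interval="1h", ignore_tz=False)
```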
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `yfinance/multi.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # yfinance - market data downloader
5 # https://github.com/ranaroussi/yfinance
6 #
7 # Copyright 2017-2019 Ran Aroussi
8 #
9 # Licensed under the Apache License, Version 2.0 (the "License");
10 # you may not use this file except in compliance with the License.
11 # You may obtain a copy of the License at
12 #
13 # http://www.apache.org/licenses/LICENSE-2.0
14 #
15 # Unless required by applicable law or agreed to in writing, software
16 # distributed under the License is distributed on an "AS IS" BASIS,
17 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
18 # See the License for the specific language governing permissions and
19 # limitations under the License.
20 #
21
22 from __future__ import print_function
23
24 import time as _time
25 import multitasking as _multitasking
26 import pandas as _pd
27
28 from . import Ticker, utils
29 from . import shared
30
31
32 def download(tickers, start=None, end=None, actions=False, threads=True, ignore_tz=True,
33 group_by='column', auto_adjust=False, back_adjust=False, repair=False, keepna=False,
34 progress=True, period="max", show_errors=True, interval="1d", prepost=False,
35 proxy=None, rounding=False, timeout=10):
36 """Download yahoo tickers
37 :Parameters:
38 tickers : str, list
39 List of tickers to download
40 period : str
41 Valid periods: 1d,5d,1mo,3mo,6mo,1y,2y,5y,10y,ytd,max
42 Either Use period parameter or use start and end
43 interval : str
44 Valid intervals: 1m,2m,5m,15m,30m,60m,90m,1h,1d,5d,1wk,1mo,3mo
45 Intraday data cannot extend last 60 days
46 start: str
47 Download start date string (YYYY-MM-DD) or _datetime.
48 Default is 1900-01-01
49 end: str
50 Download end date string (YYYY-MM-DD) or _datetime.
51 Default is now
52 group_by : str
53 Group by 'ticker' or 'column' (default)
54 prepost : bool
55 Include Pre and Post market data in results?
56 Default is False
57 auto_adjust: bool
58 Adjust all OHLC automatically? Default is False
59 repair: bool
60 Detect currency unit 100x mixups and attempt repair
61 Default is False
62 keepna: bool
63 Keep NaN rows returned by Yahoo?
64 Default is False
65 actions: bool
66 Download dividend + stock splits data. Default is False
67 threads: bool / int
68 How many threads to use for mass downloading. Default is True
69 ignore_tz: bool
70 When combining from different timezones, ignore that part of datetime.
71 Default is True
72 proxy: str
73 Optional. Proxy server URL scheme. Default is None
74 rounding: bool
75 Optional. Round values to 2 decimal places?
76 show_errors: bool
77 Optional. Doesn't print errors if False
78 timeout: None or float
79 If not None stops waiting for a response after given number of
80 seconds. (Can also be a fraction of a second e.g. 0.01)
81 """
82
83 # create ticker list
84 tickers = tickers if isinstance(
85 tickers, (list, set, tuple)) else tickers.replace(',', ' ').split()
86
87 # accept isin as ticker
88 shared._ISINS = {}
89 _tickers_ = []
90 for ticker in tickers:
91 if utils.is_isin(ticker):
92 isin = ticker
93 ticker = utils.get_ticker_by_isin(ticker, proxy)
94 shared._ISINS[ticker] = isin
95 _tickers_.append(ticker)
96
97 tickers = _tickers_
98
99 tickers = list(set([ticker.upper() for ticker in tickers]))
100
101 if progress:
102 shared._PROGRESS_BAR = utils.ProgressBar(len(tickers), 'completed')
103
104 # reset shared._DFS
105 shared._DFS = {}
106 shared._ERRORS = {}
107
108 # download using threads
109 if threads:
110 if threads is True:
111 threads = min([len(tickers), _multitasking.cpu_count() * 2])
112 _multitasking.set_max_threads(threads)
113 for i, ticker in enumerate(tickers):
114 _download_one_threaded(ticker, period=period, interval=interval,
115 start=start, end=end, prepost=prepost,
116 actions=actions, auto_adjust=auto_adjust,
117 back_adjust=back_adjust, repair=repair, keepna=keepna,
118 progress=(progress and i > 0), proxy=proxy,
119 rounding=rounding, timeout=timeout)
120 while len(shared._DFS) < len(tickers):
121 _time.sleep(0.01)
122
123 # download synchronously
124 else:
125 for i, ticker in enumerate(tickers):
126 data = _download_one(ticker, period=period, interval=interval,
127 start=start, end=end, prepost=prepost,
128 actions=actions, auto_adjust=auto_adjust,
129 back_adjust=back_adjust, repair=repair, keepna=keepna,
130 proxy=proxy,
131 rounding=rounding, timeout=timeout)
132 shared._DFS[ticker.upper()] = data
133 if progress:
134 shared._PROGRESS_BAR.animate()
135
136 if progress:
137 shared._PROGRESS_BAR.completed()
138
139 if shared._ERRORS and show_errors:
140 print('\n%.f Failed download%s:' % (
141 len(shared._ERRORS), 's' if len(shared._ERRORS) > 1 else ''))
142 # print(shared._ERRORS)
143 print("\n".join(['- %s: %s' %
144 v for v in list(shared._ERRORS.items())]))
145
146 if ignore_tz:
147 for tkr in shared._DFS.keys():
148 if (shared._DFS[tkr] is not None) and (shared._DFS[tkr].shape[0] > 0):
149 shared._DFS[tkr].index = shared._DFS[tkr].index.tz_localize(None)
150
151 if len(tickers) == 1:
152 ticker = tickers[0]
153 return shared._DFS[shared._ISINS.get(ticker, ticker)]
154
155 try:
156 data = _pd.concat(shared._DFS.values(), axis=1, sort=True,
157 keys=shared._DFS.keys())
158 except Exception:
159 _realign_dfs()
160 data = _pd.concat(shared._DFS.values(), axis=1, sort=True,
161 keys=shared._DFS.keys())
162
163 # switch names back to isins if applicable
164 data.rename(columns=shared._ISINS, inplace=True)
165
166 if group_by == 'column':
167 data.columns = data.columns.swaplevel(0, 1)
168 data.sort_index(level=0, axis=1, inplace=True)
169
170 return data
171
172
173 def _realign_dfs():
174 idx_len = 0
175 idx = None
176
177 for df in shared._DFS.values():
178 if len(df) > idx_len:
179 idx_len = len(df)
180 idx = df.index
181
182 for key in shared._DFS.keys():
183 try:
184 shared._DFS[key] = _pd.DataFrame(
185 index=idx, data=shared._DFS[key]).drop_duplicates()
186 except Exception:
187 shared._DFS[key] = _pd.concat([
188 utils.empty_df(idx), shared._DFS[key].dropna()
189 ], axis=0, sort=True)
190
191 # remove duplicate index
192 shared._DFS[key] = shared._DFS[key].loc[
193 ~shared._DFS[key].index.duplicated(keep='last')]
194
195
196 @_multitasking.task
197 def _download_one_threaded(ticker, start=None, end=None,
198 auto_adjust=False, back_adjust=False, repair=False,
199 actions=False, progress=True, period="max",
200 interval="1d", prepost=False, proxy=None,
201 keepna=False, rounding=False, timeout=10):
202 try:
203 data = _download_one(ticker, start, end, auto_adjust, back_adjust, repair,
204 actions, period, interval, prepost, proxy, rounding,
205 keepna, timeout)
206 except Exception as e:
207 # glob try/except needed as current thead implementation breaks if exception is raised.
208 shared._DFS[ticker] = utils.empty_df()
209 shared._ERRORS[ticker] = repr(e)
210 else:
211 shared._DFS[ticker.upper()] = data
212 if progress:
213 shared._PROGRESS_BAR.animate()
214
215
216 def _download_one(ticker, start=None, end=None,
217 auto_adjust=False, back_adjust=False, repair=False,
218 actions=False, period="max", interval="1d",
219 prepost=False, proxy=None, rounding=False,
220 keepna=False, timeout=10):
221 return Ticker(ticker).history(
222 period=period, interval=interval,
223 start=start, end=end, prepost=prepost,
224 actions=actions, auto_adjust=auto_adjust,
225 back_adjust=back_adjust, repair=repair, proxy=proxy,
226 rounding=rounding, keepna=keepna, timeout=timeout,
227 debug=False, raise_errors=False # debug and raise_errors false to not log and raise errors in threads
228 )
229
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/yfinance/multi.py b/yfinance/multi.py
--- a/yfinance/multi.py
+++ b/yfinance/multi.py
@@ -29,7 +29,7 @@
from . import shared
-def download(tickers, start=None, end=None, actions=False, threads=True, ignore_tz=True,
+def download(tickers, start=None, end=None, actions=False, threads=True, ignore_tz=False,
group_by='column', auto_adjust=False, back_adjust=False, repair=False, keepna=False,
progress=True, period="max", show_errors=True, interval="1d", prepost=False,
proxy=None, rounding=False, timeout=10):
@@ -68,7 +68,7 @@
How many threads to use for mass downloading. Default is True
ignore_tz: bool
When combining from different timezones, ignore that part of datetime.
- Default is True
+ Default is False
proxy: str
Optional. Proxy server URL scheme. Default is None
rounding: bool
| {"golden_diff": "diff --git a/yfinance/multi.py b/yfinance/multi.py\n--- a/yfinance/multi.py\n+++ b/yfinance/multi.py\n@@ -29,7 +29,7 @@\n from . import shared\n \n \n-def download(tickers, start=None, end=None, actions=False, threads=True, ignore_tz=True,\n+def download(tickers, start=None, end=None, actions=False, threads=True, ignore_tz=False,\n group_by='column', auto_adjust=False, back_adjust=False, repair=False, keepna=False,\n progress=True, period=\"max\", show_errors=True, interval=\"1d\", prepost=False,\n proxy=None, rounding=False, timeout=10):\n@@ -68,7 +68,7 @@\n How many threads to use for mass downloading. Default is True\n ignore_tz: bool\n When combining from different timezones, ignore that part of datetime.\n- Default is True\n+ Default is False\n proxy: str\n Optional. Proxy server URL scheme. Default is None\n rounding: bool\n", "issue": "Ticket data from different timezone are not aligned (bug introduced with #1085)\nI am trying to find correlations between tickets in differnet timezone (one in the New York Stock exchange and the other in the London Stock Exchange). Due to changes to the timezone logic, the data in each row of `yfinance.download` are no longer the data of the Tickets at the same time. Using `ignore_tz=False` fixes this problem. This problem didn't exist with version `0.1.77` and previous. So I think by default `ignore_tz` should be set to `False` as that behaviour is consistent with the previous minor versions. \n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# yfinance - market data downloader\n# https://github.com/ranaroussi/yfinance\n#\n# Copyright 2017-2019 Ran Aroussi\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nfrom __future__ import print_function\n\nimport time as _time\nimport multitasking as _multitasking\nimport pandas as _pd\n\nfrom . import Ticker, utils\nfrom . import shared\n\n\ndef download(tickers, start=None, end=None, actions=False, threads=True, ignore_tz=True,\n group_by='column', auto_adjust=False, back_adjust=False, repair=False, keepna=False,\n progress=True, period=\"max\", show_errors=True, interval=\"1d\", prepost=False,\n proxy=None, rounding=False, timeout=10):\n \"\"\"Download yahoo tickers\n :Parameters:\n tickers : str, list\n List of tickers to download\n period : str\n Valid periods: 1d,5d,1mo,3mo,6mo,1y,2y,5y,10y,ytd,max\n Either Use period parameter or use start and end\n interval : str\n Valid intervals: 1m,2m,5m,15m,30m,60m,90m,1h,1d,5d,1wk,1mo,3mo\n Intraday data cannot extend last 60 days\n start: str\n Download start date string (YYYY-MM-DD) or _datetime.\n Default is 1900-01-01\n end: str\n Download end date string (YYYY-MM-DD) or _datetime.\n Default is now\n group_by : str\n Group by 'ticker' or 'column' (default)\n prepost : bool\n Include Pre and Post market data in results?\n Default is False\n auto_adjust: bool\n Adjust all OHLC automatically? 
Default is False\n repair: bool\n Detect currency unit 100x mixups and attempt repair\n Default is False\n keepna: bool\n Keep NaN rows returned by Yahoo?\n Default is False\n actions: bool\n Download dividend + stock splits data. Default is False\n threads: bool / int\n How many threads to use for mass downloading. Default is True\n ignore_tz: bool\n When combining from different timezones, ignore that part of datetime.\n Default is True\n proxy: str\n Optional. Proxy server URL scheme. Default is None\n rounding: bool\n Optional. Round values to 2 decimal places?\n show_errors: bool\n Optional. Doesn't print errors if False\n timeout: None or float\n If not None stops waiting for a response after given number of\n seconds. (Can also be a fraction of a second e.g. 0.01)\n \"\"\"\n\n # create ticker list\n tickers = tickers if isinstance(\n tickers, (list, set, tuple)) else tickers.replace(',', ' ').split()\n\n # accept isin as ticker\n shared._ISINS = {}\n _tickers_ = []\n for ticker in tickers:\n if utils.is_isin(ticker):\n isin = ticker\n ticker = utils.get_ticker_by_isin(ticker, proxy)\n shared._ISINS[ticker] = isin\n _tickers_.append(ticker)\n\n tickers = _tickers_\n\n tickers = list(set([ticker.upper() for ticker in tickers]))\n\n if progress:\n shared._PROGRESS_BAR = utils.ProgressBar(len(tickers), 'completed')\n\n # reset shared._DFS\n shared._DFS = {}\n shared._ERRORS = {}\n\n # download using threads\n if threads:\n if threads is True:\n threads = min([len(tickers), _multitasking.cpu_count() * 2])\n _multitasking.set_max_threads(threads)\n for i, ticker in enumerate(tickers):\n _download_one_threaded(ticker, period=period, interval=interval,\n start=start, end=end, prepost=prepost,\n actions=actions, auto_adjust=auto_adjust,\n back_adjust=back_adjust, repair=repair, keepna=keepna,\n progress=(progress and i > 0), proxy=proxy,\n rounding=rounding, timeout=timeout)\n while len(shared._DFS) < len(tickers):\n _time.sleep(0.01)\n\n # download synchronously\n else:\n for i, ticker in enumerate(tickers):\n data = _download_one(ticker, period=period, interval=interval,\n start=start, end=end, prepost=prepost,\n actions=actions, auto_adjust=auto_adjust,\n back_adjust=back_adjust, repair=repair, keepna=keepna,\n proxy=proxy,\n rounding=rounding, timeout=timeout)\n shared._DFS[ticker.upper()] = data\n if progress:\n shared._PROGRESS_BAR.animate()\n\n if progress:\n shared._PROGRESS_BAR.completed()\n\n if shared._ERRORS and show_errors:\n print('\\n%.f Failed download%s:' % (\n len(shared._ERRORS), 's' if len(shared._ERRORS) > 1 else ''))\n # print(shared._ERRORS)\n print(\"\\n\".join(['- %s: %s' %\n v for v in list(shared._ERRORS.items())]))\n\n if ignore_tz:\n for tkr in shared._DFS.keys():\n if (shared._DFS[tkr] is not None) and (shared._DFS[tkr].shape[0] > 0):\n shared._DFS[tkr].index = shared._DFS[tkr].index.tz_localize(None)\n\n if len(tickers) == 1:\n ticker = tickers[0]\n return shared._DFS[shared._ISINS.get(ticker, ticker)]\n\n try:\n data = _pd.concat(shared._DFS.values(), axis=1, sort=True,\n keys=shared._DFS.keys())\n except Exception:\n _realign_dfs()\n data = _pd.concat(shared._DFS.values(), axis=1, sort=True,\n keys=shared._DFS.keys())\n\n # switch names back to isins if applicable\n data.rename(columns=shared._ISINS, inplace=True)\n\n if group_by == 'column':\n data.columns = data.columns.swaplevel(0, 1)\n data.sort_index(level=0, axis=1, inplace=True)\n\n return data\n\n\ndef _realign_dfs():\n idx_len = 0\n idx = None\n\n for df in shared._DFS.values():\n if len(df) > 
idx_len:\n idx_len = len(df)\n idx = df.index\n\n for key in shared._DFS.keys():\n try:\n shared._DFS[key] = _pd.DataFrame(\n index=idx, data=shared._DFS[key]).drop_duplicates()\n except Exception:\n shared._DFS[key] = _pd.concat([\n utils.empty_df(idx), shared._DFS[key].dropna()\n ], axis=0, sort=True)\n\n # remove duplicate index\n shared._DFS[key] = shared._DFS[key].loc[\n ~shared._DFS[key].index.duplicated(keep='last')]\n\n\n@_multitasking.task\ndef _download_one_threaded(ticker, start=None, end=None,\n auto_adjust=False, back_adjust=False, repair=False,\n actions=False, progress=True, period=\"max\",\n interval=\"1d\", prepost=False, proxy=None,\n keepna=False, rounding=False, timeout=10):\n try:\n data = _download_one(ticker, start, end, auto_adjust, back_adjust, repair,\n actions, period, interval, prepost, proxy, rounding,\n keepna, timeout)\n except Exception as e:\n # glob try/except needed as current thead implementation breaks if exception is raised.\n shared._DFS[ticker] = utils.empty_df()\n shared._ERRORS[ticker] = repr(e)\n else:\n shared._DFS[ticker.upper()] = data\n if progress:\n shared._PROGRESS_BAR.animate()\n\n\ndef _download_one(ticker, start=None, end=None,\n auto_adjust=False, back_adjust=False, repair=False,\n actions=False, period=\"max\", interval=\"1d\",\n prepost=False, proxy=None, rounding=False,\n keepna=False, timeout=10):\n return Ticker(ticker).history(\n period=period, interval=interval,\n start=start, end=end, prepost=prepost,\n actions=actions, auto_adjust=auto_adjust,\n back_adjust=back_adjust, repair=repair, proxy=proxy,\n rounding=rounding, keepna=keepna, timeout=timeout,\n debug=False, raise_errors=False # debug and raise_errors false to not log and raise errors in threads\n )\n", "path": "yfinance/multi.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# yfinance - market data downloader\n# https://github.com/ranaroussi/yfinance\n#\n# Copyright 2017-2019 Ran Aroussi\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nfrom __future__ import print_function\n\nimport time as _time\nimport multitasking as _multitasking\nimport pandas as _pd\n\nfrom . import Ticker, utils\nfrom . 
import shared\n\n\ndef download(tickers, start=None, end=None, actions=False, threads=True, ignore_tz=False,\n group_by='column', auto_adjust=False, back_adjust=False, repair=False, keepna=False,\n progress=True, period=\"max\", show_errors=True, interval=\"1d\", prepost=False,\n proxy=None, rounding=False, timeout=10):\n \"\"\"Download yahoo tickers\n :Parameters:\n tickers : str, list\n List of tickers to download\n period : str\n Valid periods: 1d,5d,1mo,3mo,6mo,1y,2y,5y,10y,ytd,max\n Either Use period parameter or use start and end\n interval : str\n Valid intervals: 1m,2m,5m,15m,30m,60m,90m,1h,1d,5d,1wk,1mo,3mo\n Intraday data cannot extend last 60 days\n start: str\n Download start date string (YYYY-MM-DD) or _datetime.\n Default is 1900-01-01\n end: str\n Download end date string (YYYY-MM-DD) or _datetime.\n Default is now\n group_by : str\n Group by 'ticker' or 'column' (default)\n prepost : bool\n Include Pre and Post market data in results?\n Default is False\n auto_adjust: bool\n Adjust all OHLC automatically? Default is False\n repair: bool\n Detect currency unit 100x mixups and attempt repair\n Default is False\n keepna: bool\n Keep NaN rows returned by Yahoo?\n Default is False\n actions: bool\n Download dividend + stock splits data. Default is False\n threads: bool / int\n How many threads to use for mass downloading. Default is True\n ignore_tz: bool\n When combining from different timezones, ignore that part of datetime.\n Default is False\n proxy: str\n Optional. Proxy server URL scheme. Default is None\n rounding: bool\n Optional. Round values to 2 decimal places?\n show_errors: bool\n Optional. Doesn't print errors if False\n timeout: None or float\n If not None stops waiting for a response after given number of\n seconds. (Can also be a fraction of a second e.g. 
0.01)\n \"\"\"\n\n # create ticker list\n tickers = tickers if isinstance(\n tickers, (list, set, tuple)) else tickers.replace(',', ' ').split()\n\n # accept isin as ticker\n shared._ISINS = {}\n _tickers_ = []\n for ticker in tickers:\n if utils.is_isin(ticker):\n isin = ticker\n ticker = utils.get_ticker_by_isin(ticker, proxy)\n shared._ISINS[ticker] = isin\n _tickers_.append(ticker)\n\n tickers = _tickers_\n\n tickers = list(set([ticker.upper() for ticker in tickers]))\n\n if progress:\n shared._PROGRESS_BAR = utils.ProgressBar(len(tickers), 'completed')\n\n # reset shared._DFS\n shared._DFS = {}\n shared._ERRORS = {}\n\n # download using threads\n if threads:\n if threads is True:\n threads = min([len(tickers), _multitasking.cpu_count() * 2])\n _multitasking.set_max_threads(threads)\n for i, ticker in enumerate(tickers):\n _download_one_threaded(ticker, period=period, interval=interval,\n start=start, end=end, prepost=prepost,\n actions=actions, auto_adjust=auto_adjust,\n back_adjust=back_adjust, repair=repair, keepna=keepna,\n progress=(progress and i > 0), proxy=proxy,\n rounding=rounding, timeout=timeout)\n while len(shared._DFS) < len(tickers):\n _time.sleep(0.01)\n\n # download synchronously\n else:\n for i, ticker in enumerate(tickers):\n data = _download_one(ticker, period=period, interval=interval,\n start=start, end=end, prepost=prepost,\n actions=actions, auto_adjust=auto_adjust,\n back_adjust=back_adjust, repair=repair, keepna=keepna,\n proxy=proxy,\n rounding=rounding, timeout=timeout)\n shared._DFS[ticker.upper()] = data\n if progress:\n shared._PROGRESS_BAR.animate()\n\n if progress:\n shared._PROGRESS_BAR.completed()\n\n if shared._ERRORS and show_errors:\n print('\\n%.f Failed download%s:' % (\n len(shared._ERRORS), 's' if len(shared._ERRORS) > 1 else ''))\n # print(shared._ERRORS)\n print(\"\\n\".join(['- %s: %s' %\n v for v in list(shared._ERRORS.items())]))\n\n if ignore_tz:\n for tkr in shared._DFS.keys():\n if (shared._DFS[tkr] is not None) and (shared._DFS[tkr].shape[0] > 0):\n shared._DFS[tkr].index = shared._DFS[tkr].index.tz_localize(None)\n\n if len(tickers) == 1:\n ticker = tickers[0]\n return shared._DFS[shared._ISINS.get(ticker, ticker)]\n\n try:\n data = _pd.concat(shared._DFS.values(), axis=1, sort=True,\n keys=shared._DFS.keys())\n except Exception:\n _realign_dfs()\n data = _pd.concat(shared._DFS.values(), axis=1, sort=True,\n keys=shared._DFS.keys())\n\n # switch names back to isins if applicable\n data.rename(columns=shared._ISINS, inplace=True)\n\n if group_by == 'column':\n data.columns = data.columns.swaplevel(0, 1)\n data.sort_index(level=0, axis=1, inplace=True)\n\n return data\n\n\ndef _realign_dfs():\n idx_len = 0\n idx = None\n\n for df in shared._DFS.values():\n if len(df) > idx_len:\n idx_len = len(df)\n idx = df.index\n\n for key in shared._DFS.keys():\n try:\n shared._DFS[key] = _pd.DataFrame(\n index=idx, data=shared._DFS[key]).drop_duplicates()\n except Exception:\n shared._DFS[key] = _pd.concat([\n utils.empty_df(idx), shared._DFS[key].dropna()\n ], axis=0, sort=True)\n\n # remove duplicate index\n shared._DFS[key] = shared._DFS[key].loc[\n ~shared._DFS[key].index.duplicated(keep='last')]\n\n\n@_multitasking.task\ndef _download_one_threaded(ticker, start=None, end=None,\n auto_adjust=False, back_adjust=False, repair=False,\n actions=False, progress=True, period=\"max\",\n interval=\"1d\", prepost=False, proxy=None,\n keepna=False, rounding=False, timeout=10):\n try:\n data = _download_one(ticker, start, end, auto_adjust, back_adjust, 
repair,\n actions, period, interval, prepost, proxy, rounding,\n keepna, timeout)\n except Exception as e:\n # glob try/except needed as current thead implementation breaks if exception is raised.\n shared._DFS[ticker] = utils.empty_df()\n shared._ERRORS[ticker] = repr(e)\n else:\n shared._DFS[ticker.upper()] = data\n if progress:\n shared._PROGRESS_BAR.animate()\n\n\ndef _download_one(ticker, start=None, end=None,\n auto_adjust=False, back_adjust=False, repair=False,\n actions=False, period=\"max\", interval=\"1d\",\n prepost=False, proxy=None, rounding=False,\n keepna=False, timeout=10):\n return Ticker(ticker).history(\n period=period, interval=interval,\n start=start, end=end, prepost=prepost,\n actions=actions, auto_adjust=auto_adjust,\n back_adjust=back_adjust, repair=repair, proxy=proxy,\n rounding=rounding, keepna=keepna, timeout=timeout,\n debug=False, raise_errors=False # debug and raise_errors false to not log and raise errors in threads\n )\n", "path": "yfinance/multi.py"}]} | 3,047 | 230 |
gh_patches_debug_4248 | rasdani/github-patches | git_diff | mindsdb__mindsdb-317 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnboundLocalError
**Your Environment**
Anaconda
* Python version: 3.7.4
* Pip version: 19.2.2
* Operating system: Windows
* Python environment used (e.g. venv, conda): conda
* Mindsdb version you tried to install: 1.6.15
* Additional info if applicable:
**Describe the bug**
Got an `UnboundLocalError` while running this example:
[https://github.com/ZoranPandovski/mindsdb-examples/tree/master/air_quality](https://github.com/ZoranPandovski/mindsdb-examples/tree/master/air_quality)
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Clone the repository
2. Run the example code in a Jupyter notebook; you should see the error shown in the screenshots below.
**Expected behavior**
It should start the training.
**Additional context**


--- END ISSUE ---
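
For context on the error class itself, here is a minimal, hypothetical reproduction (not the maintainers' fix): Python raises `UnboundLocalError` when a function reads a local name on a code path where no assignment ran. The control flow of `FileDS._setup` in the file below has exactly that shape when the file format cannot be detected and no `custom_parser` is supplied, because none of the parsing branches assigns `header` before `for col in header:` executes.

```python
def setup(fmt=None, custom_parser=None):
    # Mirrors the branch structure of FileDS._setup (simplified).
    if custom_parser:
        header = custom_parser()
    elif fmt == 'csv':
        header = ['a', 'b']

    # If fmt is None and custom_parser is None, no branch assigned `header`,
    # so this loop raises:
    #   UnboundLocalError: local variable 'header' referenced before assignment
    for col in header:
        print(col)


setup(fmt=None)
```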
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mindsdb/libs/data_sources/file_ds.py`
Content:
```
1 import re
2 from io import BytesIO, StringIO
3 import csv
4 import codecs
5 import json
6 import traceback
7 import codecs
8
9 import pandas
10 from pandas.io.json import json_normalize
11 import requests
12
13 from mindsdb.libs.data_types.data_source import DataSource
14 from mindsdb.libs.data_types.mindsdb_logger import log
15
16
17 class FileDS(DataSource):
18
19 def cleanRow(self, row):
20 n_row = []
21 for cell in row:
22 if str(cell) in ['', ' ', ' ', 'NaN', 'nan', 'NA']:
23 cell = None
24 n_row.append(cell)
25
26 return n_row
27
28 def _getDataIo(self, file):
29 """
30 This gets a file either url or local file and defiens what the format is as well as dialect
31 :param file: file path or url
32 :return: data_io, format, dialect
33 """
34
35 ############
36 # get file as io object
37 ############
38
39 data = BytesIO()
40
41 # get data from either url or file load in memory
42 if file[:5] == 'http:' or file[:6] == 'https:':
43 r = requests.get(file, stream=True)
44 if r.status_code == 200:
45 for chunk in r:
46 data.write(chunk)
47 data.seek(0)
48
49 # else read file from local file system
50 else:
51 try:
52 data = open(file, 'rb')
53 except Exception as e:
54 error = 'Could not load file, possible exception : {exception}'.format(exception = e)
55 log.error(error)
56 raise ValueError(error)
57
58
59 dialect = None
60
61 ############
62 # check for file type
63 ############
64
65 # try to guess if its an excel file
66 xlsx_sig = b'\x50\x4B\x05\06'
67 xlsx_sig2 = b'\x50\x4B\x03\x04'
68 xls_sig = b'\x09\x08\x10\x00\x00\x06\x05\x00'
69
70 # different whence, offset, size for different types
71 excel_meta = [ ('xls', 0, 512, 8), ('xlsx', 2, -22, 4)]
72
73 for filename, whence, offset, size in excel_meta:
74
75 try:
76 data.seek(offset, whence) # Seek to the offset.
77 bytes = data.read(size) # Capture the specified number of bytes.
78 data.seek(0)
79 codecs.getencoder('hex')(bytes)
80
81 if bytes == xls_sig:
82 return data, 'xls', dialect
83 elif bytes == xlsx_sig:
84 return data, 'xlsx', dialect
85
86 except:
87 data.seek(0)
88
89 # if not excel it can be a json file or a CSV, convert from binary to stringio
90
91 byte_str = data.read()
92 # Move it to StringIO
93 try:
94 # Handle Microsoft's BOM "special" UTF-8 encoding
95 if byte_str.startswith(codecs.BOM_UTF8):
96 data = StringIO(byte_str.decode('utf-8-sig'))
97 else:
98 data = StringIO(byte_str.decode('utf-8'))
99
100 except:
101 log.error(traceback.format_exc())
102 log.error('Could not load into string')
103
104 # see if its JSON
105 buffer = data.read(100)
106 data.seek(0)
107 text = buffer.strip()
108 # analyze first n characters
109 if len(text) > 0:
110 text = text.strip()
111 # it it looks like a json, then try to parse it
112 if text != "" and ((text[0] == "{") or (text[0] == "[")):
113 try:
114 json.loads(data.read())
115 data.seek(0)
116 return data, 'json', dialect
117 except:
118 data.seek(0)
119 return data, None, dialect
120
121 # lets try to figure out if its a csv
122 try:
123 data.seek(0)
124 first_few_lines = []
125 i = 0
126 for line in data:
127 i += 1
128 first_few_lines.append(line)
129 if i > 0:
130 break
131
132 accepted_delimiters = [',','\t', ';']
133 dialect = csv.Sniffer().sniff(''.join(first_few_lines[0]), delimiters=accepted_delimiters)
134 data.seek(0)
135 # if csv dialect identified then return csv
136 if dialect:
137 return data, 'csv', dialect
138 else:
139 return data, None, dialect
140 except:
141 data.seek(0)
142 log.error('Could not detect format for this file')
143 log.error(traceback.format_exc())
144 # No file type identified
145 return data, None, dialect
146
147
148
149
150 def _setup(self,file, clean_rows = True, custom_parser = None):
151 """
152 Setup from file
153 :param file: fielpath or url
154 :param clean_rows: if you want to clean rows for strange null values
155 :param custom_parser: if you want to parse the file with some custom parser
156 """
157
158 col_map = {}
159 # get file data io, format and dialect
160 data, format, dialect = self._getDataIo(file)
161 data.seek(0) # make sure we are at 0 in file pointer
162
163 if format is None:
164 log.error('Could not laod file into any format, supported formats are csv, json, xls, xslx')
165
166 if custom_parser:
167 header, file_data = custom_parser(data, format)
168
169 elif format == 'csv':
170 csv_reader = list(csv.reader(data, dialect))
171 header = csv_reader[0]
172 file_data = csv_reader[1:]
173
174 elif format in ['xlsx', 'xls']:
175 data.seek(0)
176 df = pandas.read_excel(data)
177 header = df.columns.values.tolist()
178 file_data = df.values.tolist()
179
180 elif format == 'json':
181 data.seek(0)
182 json_doc = json.loads(data.read())
183 df = json_normalize(json_doc)
184 header = df.columns.values.tolist()
185 file_data = df.values.tolist()
186
187 for col in header:
188 col_map[col] = col
189
190 if clean_rows == True:
191 file_list_data = []
192 for row in file_data:
193 row = self.cleanRow(row)
194 file_list_data.append(row)
195 else:
196 file_list_data = file_data
197
198 try:
199 return pandas.DataFrame(file_list_data, columns=header), col_map
200 except:
201 return pandas.read_csv(file), col_map
202
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mindsdb/libs/data_sources/file_ds.py b/mindsdb/libs/data_sources/file_ds.py
--- a/mindsdb/libs/data_sources/file_ds.py
+++ b/mindsdb/libs/data_sources/file_ds.py
@@ -124,8 +124,10 @@
first_few_lines = []
i = 0
for line in data:
- i += 1
+ if line in ['\r\n','\n']:
+ continue
first_few_lines.append(line)
+ i += 1
if i > 0:
break
| {"golden_diff": "diff --git a/mindsdb/libs/data_sources/file_ds.py b/mindsdb/libs/data_sources/file_ds.py\n--- a/mindsdb/libs/data_sources/file_ds.py\n+++ b/mindsdb/libs/data_sources/file_ds.py\n@@ -124,8 +124,10 @@\n first_few_lines = []\n i = 0\n for line in data:\n- i += 1\n+ if line in ['\\r\\n','\\n']:\n+ continue\n first_few_lines.append(line)\n+ i += 1\n if i > 0:\n break\n", "issue": "Unbound local error\n**Your Environment**\r\nanaconda \r\n* Python version: 3.7.4\r\n* Pip version:19.2.2\r\n* Operating system:Windows\r\n* Python environment used (e.g. venv, conda): conda\r\n* Mindsdb version you tried to install:1.6.15\r\n* Additional info if applicable:\r\n\r\n**Describe the bug**\r\ngot Unbound local error while running this example \r\n[https://github.com/ZoranPandovski/mindsdb-examples/tree/master/air_quality](https://github.com/ZoranPandovski/mindsdb-examples/tree/master/air_quality)\r\n**To Reproduce**\r\nSteps to reproduce the behavior, for example:\r\n1. clone the repository\r\n2. Run that example code in a jupyter notebook and you should see the error as presented in the screenshot.\r\n\r\n**Expected behavior**\r\nIt should start the training.\r\n\r\n**Additional context**\r\n\r\n\n", "before_files": [{"content": "import re\nfrom io import BytesIO, StringIO\nimport csv\nimport codecs\nimport json\nimport traceback\nimport codecs\n\nimport pandas\nfrom pandas.io.json import json_normalize\nimport requests\n\nfrom mindsdb.libs.data_types.data_source import DataSource\nfrom mindsdb.libs.data_types.mindsdb_logger import log\n\n\nclass FileDS(DataSource):\n\n def cleanRow(self, row):\n n_row = []\n for cell in row:\n if str(cell) in ['', ' ', ' ', 'NaN', 'nan', 'NA']:\n cell = None\n n_row.append(cell)\n\n return n_row\n\n def _getDataIo(self, file):\n \"\"\"\n This gets a file either url or local file and defiens what the format is as well as dialect\n :param file: file path or url\n :return: data_io, format, dialect\n \"\"\"\n\n ############\n # get file as io object\n ############\n\n data = BytesIO()\n\n # get data from either url or file load in memory\n if file[:5] == 'http:' or file[:6] == 'https:':\n r = requests.get(file, stream=True)\n if r.status_code == 200:\n for chunk in r:\n data.write(chunk)\n data.seek(0)\n\n # else read file from local file system\n else:\n try:\n data = open(file, 'rb')\n except Exception as e:\n error = 'Could not load file, possible exception : {exception}'.format(exception = e)\n log.error(error)\n raise ValueError(error)\n\n\n dialect = None\n\n ############\n # check for file type\n ############\n\n # try to guess if its an excel file\n xlsx_sig = b'\\x50\\x4B\\x05\\06'\n xlsx_sig2 = b'\\x50\\x4B\\x03\\x04'\n xls_sig = b'\\x09\\x08\\x10\\x00\\x00\\x06\\x05\\x00'\n\n # different whence, offset, size for different types\n excel_meta = [ ('xls', 0, 512, 8), ('xlsx', 2, -22, 4)]\n\n for filename, whence, offset, size in excel_meta:\n\n try:\n data.seek(offset, whence) # Seek to the offset.\n bytes = data.read(size) # Capture the specified number of bytes.\n data.seek(0)\n codecs.getencoder('hex')(bytes)\n\n if bytes == xls_sig:\n return data, 'xls', dialect\n elif bytes == xlsx_sig:\n return data, 'xlsx', dialect\n\n except:\n data.seek(0)\n\n # if not excel it can be a json file or a CSV, convert from binary to stringio\n\n byte_str = data.read()\n # Move it to StringIO\n try:\n # Handle Microsoft's BOM \"special\" UTF-8 encoding\n if byte_str.startswith(codecs.BOM_UTF8):\n data = StringIO(byte_str.decode('utf-8-sig'))\n else:\n data = 
StringIO(byte_str.decode('utf-8'))\n\n except:\n log.error(traceback.format_exc())\n log.error('Could not load into string')\n\n # see if its JSON\n buffer = data.read(100)\n data.seek(0)\n text = buffer.strip()\n # analyze first n characters\n if len(text) > 0:\n text = text.strip()\n # it it looks like a json, then try to parse it\n if text != \"\" and ((text[0] == \"{\") or (text[0] == \"[\")):\n try:\n json.loads(data.read())\n data.seek(0)\n return data, 'json', dialect\n except:\n data.seek(0)\n return data, None, dialect\n\n # lets try to figure out if its a csv\n try:\n data.seek(0)\n first_few_lines = []\n i = 0\n for line in data:\n i += 1\n first_few_lines.append(line)\n if i > 0:\n break\n\n accepted_delimiters = [',','\\t', ';']\n dialect = csv.Sniffer().sniff(''.join(first_few_lines[0]), delimiters=accepted_delimiters)\n data.seek(0)\n # if csv dialect identified then return csv\n if dialect:\n return data, 'csv', dialect\n else:\n return data, None, dialect\n except:\n data.seek(0)\n log.error('Could not detect format for this file')\n log.error(traceback.format_exc())\n # No file type identified\n return data, None, dialect\n\n\n\n\n def _setup(self,file, clean_rows = True, custom_parser = None):\n \"\"\"\n Setup from file\n :param file: fielpath or url\n :param clean_rows: if you want to clean rows for strange null values\n :param custom_parser: if you want to parse the file with some custom parser\n \"\"\"\n\n col_map = {}\n # get file data io, format and dialect\n data, format, dialect = self._getDataIo(file)\n data.seek(0) # make sure we are at 0 in file pointer\n\n if format is None:\n log.error('Could not laod file into any format, supported formats are csv, json, xls, xslx')\n\n if custom_parser:\n header, file_data = custom_parser(data, format)\n\n elif format == 'csv':\n csv_reader = list(csv.reader(data, dialect))\n header = csv_reader[0]\n file_data = csv_reader[1:]\n\n elif format in ['xlsx', 'xls']:\n data.seek(0)\n df = pandas.read_excel(data)\n header = df.columns.values.tolist()\n file_data = df.values.tolist()\n\n elif format == 'json':\n data.seek(0)\n json_doc = json.loads(data.read())\n df = json_normalize(json_doc)\n header = df.columns.values.tolist()\n file_data = df.values.tolist()\n\n for col in header:\n col_map[col] = col\n\n if clean_rows == True:\n file_list_data = []\n for row in file_data:\n row = self.cleanRow(row)\n file_list_data.append(row)\n else:\n file_list_data = file_data\n\n try:\n return pandas.DataFrame(file_list_data, columns=header), col_map\n except:\n return pandas.read_csv(file), col_map\n", "path": "mindsdb/libs/data_sources/file_ds.py"}], "after_files": [{"content": "import re\nfrom io import BytesIO, StringIO\nimport csv\nimport codecs\nimport json\nimport traceback\nimport codecs\n\nimport pandas\nfrom pandas.io.json import json_normalize\nimport requests\n\nfrom mindsdb.libs.data_types.data_source import DataSource\nfrom mindsdb.libs.data_types.mindsdb_logger import log\n\n\nclass FileDS(DataSource):\n\n def cleanRow(self, row):\n n_row = []\n for cell in row:\n if str(cell) in ['', ' ', ' ', 'NaN', 'nan', 'NA']:\n cell = None\n n_row.append(cell)\n\n return n_row\n\n def _getDataIo(self, file):\n \"\"\"\n This gets a file either url or local file and defiens what the format is as well as dialect\n :param file: file path or url\n :return: data_io, format, dialect\n \"\"\"\n\n ############\n # get file as io object\n ############\n\n data = BytesIO()\n\n # get data from either url or file load in memory\n if file[:5] == 
'http:' or file[:6] == 'https:':\n r = requests.get(file, stream=True)\n if r.status_code == 200:\n for chunk in r:\n data.write(chunk)\n data.seek(0)\n\n # else read file from local file system\n else:\n try:\n data = open(file, 'rb')\n except Exception as e:\n error = 'Could not load file, possible exception : {exception}'.format(exception = e)\n log.error(error)\n raise ValueError(error)\n\n\n dialect = None\n\n ############\n # check for file type\n ############\n\n # try to guess if its an excel file\n xlsx_sig = b'\\x50\\x4B\\x05\\06'\n xlsx_sig2 = b'\\x50\\x4B\\x03\\x04'\n xls_sig = b'\\x09\\x08\\x10\\x00\\x00\\x06\\x05\\x00'\n\n # different whence, offset, size for different types\n excel_meta = [ ('xls', 0, 512, 8), ('xlsx', 2, -22, 4)]\n\n for filename, whence, offset, size in excel_meta:\n\n try:\n data.seek(offset, whence) # Seek to the offset.\n bytes = data.read(size) # Capture the specified number of bytes.\n data.seek(0)\n codecs.getencoder('hex')(bytes)\n\n if bytes == xls_sig:\n return data, 'xls', dialect\n elif bytes == xlsx_sig:\n return data, 'xlsx', dialect\n\n except:\n data.seek(0)\n\n # if not excel it can be a json file or a CSV, convert from binary to stringio\n\n byte_str = data.read()\n # Move it to StringIO\n try:\n # Handle Microsoft's BOM \"special\" UTF-8 encoding\n if byte_str.startswith(codecs.BOM_UTF8):\n data = StringIO(byte_str.decode('utf-8-sig'))\n else:\n data = StringIO(byte_str.decode('utf-8'))\n\n except:\n log.error(traceback.format_exc())\n log.error('Could not load into string')\n\n # see if its JSON\n buffer = data.read(100)\n data.seek(0)\n text = buffer.strip()\n # analyze first n characters\n if len(text) > 0:\n text = text.strip()\n # it it looks like a json, then try to parse it\n if text != \"\" and ((text[0] == \"{\") or (text[0] == \"[\")):\n try:\n json.loads(data.read())\n data.seek(0)\n return data, 'json', dialect\n except:\n data.seek(0)\n return data, None, dialect\n\n # lets try to figure out if its a csv\n try:\n data.seek(0)\n first_few_lines = []\n i = 0\n for line in data:\n if line in ['\\r\\n','\\n']:\n continue\n first_few_lines.append(line)\n i += 1\n if i > 0:\n break\n\n accepted_delimiters = [',','\\t', ';']\n dialect = csv.Sniffer().sniff(''.join(first_few_lines[0]), delimiters=accepted_delimiters)\n data.seek(0)\n # if csv dialect identified then return csv\n if dialect:\n return data, 'csv', dialect\n else:\n return data, None, dialect\n except:\n data.seek(0)\n log.error('Could not detect format for this file')\n log.error(traceback.format_exc())\n # No file type identified\n return data, None, dialect\n\n\n\n\n def _setup(self,file, clean_rows = True, custom_parser = None):\n \"\"\"\n Setup from file\n :param file: fielpath or url\n :param clean_rows: if you want to clean rows for strange null values\n :param custom_parser: if you want to parse the file with some custom parser\n \"\"\"\n\n col_map = {}\n # get file data io, format and dialect\n data, format, dialect = self._getDataIo(file)\n data.seek(0) # make sure we are at 0 in file pointer\n\n if format is None:\n log.error('Could not laod file into any format, supported formats are csv, json, xls, xslx')\n\n if custom_parser:\n header, file_data = custom_parser(data, format)\n\n elif format == 'csv':\n csv_reader = list(csv.reader(data, dialect))\n header = csv_reader[0]\n file_data = csv_reader[1:]\n\n elif format in ['xlsx', 'xls']:\n data.seek(0)\n df = pandas.read_excel(data)\n header = df.columns.values.tolist()\n file_data = df.values.tolist()\n\n 
elif format == 'json':\n data.seek(0)\n json_doc = json.loads(data.read())\n df = json_normalize(json_doc)\n header = df.columns.values.tolist()\n file_data = df.values.tolist()\n\n for col in header:\n col_map[col] = col\n\n if clean_rows == True:\n file_list_data = []\n for row in file_data:\n row = self.cleanRow(row)\n file_list_data.append(row)\n else:\n file_list_data = file_data\n\n try:\n return pandas.DataFrame(file_list_data, columns=header), col_map\n except:\n return pandas.read_csv(file), col_map\n", "path": "mindsdb/libs/data_sources/file_ds.py"}]} | 2,516 | 131 |
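A note on the fix recorded in this entry: the golden diff makes the `csv.Sniffer` sampling loop skip blank lines (`'\r\n'` / `'\n'`) and count only the lines it actually keeps, so dialect detection no longer trips over files whose first line is empty. That is the likely path to the reported `UnboundLocalError`, since a failed detection leaves `format` as `None` and `header` unassigned later in `_setup`. The sketch below is a minimal, standalone rendition of the corrected sampling logic; the function name and the sample data are illustrative, and the surrounding `FileDS` class, logging, and Excel/JSON handling are omitted.

```python
import csv
from io import StringIO

def sniff_csv_dialect(data, accepted_delimiters=(',', '\t', ';')):
    """Mirror of the patched loop: blank lines no longer poison the sniffer sample."""
    data.seek(0)
    first_few_lines = []
    i = 0
    for line in data:
        if line in ['\r\n', '\n']:   # the guard added by the patch
            continue
        first_few_lines.append(line)
        i += 1
        if i > 0:
            break
    return csv.Sniffer().sniff(''.join(first_few_lines[0]), delimiters=list(accepted_delimiters))

# A file with a leading blank line, the case the patch guards against, now sniffs cleanly:
sample = StringIO("\nDate;NO2;O3\n2020-01-01;12;30\n")
print(sniff_csv_dialect(sample).delimiter)  # ';'
```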
gh_patches_debug_1393 | rasdani/github-patches | git_diff | pytorch__audio-1583 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use of deprecated `AutoNonVariableTypeMode`.
`AutoNonVariableTypeMode` is deprecated and will be removed in PyTorch 1.10.
https://github.com/pytorch/audio/search?q=AutoNonVariableTypeMode
Migration: https://github.com/pytorch/pytorch/blob/master/docs/cpp/source/notes/inference_mode.rst#migration-guide-from-autononvariabletypemode
cc @carolineechen
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchaudio/__init__.py`
Content:
```
1 from . import extension # noqa: F401
2 from torchaudio._internal import module_utils as _mod_utils # noqa: F401
3 from torchaudio import (
4 compliance,
5 datasets,
6 functional,
7 kaldi_io,
8 utils,
9 sox_effects,
10 transforms,
11 )
12
13 from torchaudio.backend import (
14 list_audio_backends,
15 get_audio_backend,
16 set_audio_backend,
17 )
18
19 try:
20 from .version import __version__, git_version # noqa: F401
21 except ImportError:
22 pass
23
24 __all__ = [
25 'compliance',
26 'datasets',
27 'functional',
28 'kaldi_io',
29 'utils',
30 'sox_effects',
31 'transforms',
32 'list_audio_backends',
33 'get_audio_backend',
34 'set_audio_backend',
35 'save_encinfo',
36 'sox_signalinfo_t',
37 'sox_encodinginfo_t',
38 'get_sox_option_t',
39 'get_sox_encoding_t',
40 'get_sox_bool',
41 'SignalInfo',
42 'EncodingInfo',
43 ]
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchaudio/__init__.py b/torchaudio/__init__.py
--- a/torchaudio/__init__.py
+++ b/torchaudio/__init__.py
@@ -32,12 +32,4 @@
'list_audio_backends',
'get_audio_backend',
'set_audio_backend',
- 'save_encinfo',
- 'sox_signalinfo_t',
- 'sox_encodinginfo_t',
- 'get_sox_option_t',
- 'get_sox_encoding_t',
- 'get_sox_bool',
- 'SignalInfo',
- 'EncodingInfo',
]
| {"golden_diff": "diff --git a/torchaudio/__init__.py b/torchaudio/__init__.py\n--- a/torchaudio/__init__.py\n+++ b/torchaudio/__init__.py\n@@ -32,12 +32,4 @@\n 'list_audio_backends',\n 'get_audio_backend',\n 'set_audio_backend',\n- 'save_encinfo',\n- 'sox_signalinfo_t',\n- 'sox_encodinginfo_t',\n- 'get_sox_option_t',\n- 'get_sox_encoding_t',\n- 'get_sox_bool',\n- 'SignalInfo',\n- 'EncodingInfo',\n ]\n", "issue": "Use of deprecated `AutoNonVariableTypeMode`.\n`AutoNonVariableTypeMode` is deprecated and will be removed in PyTorch 1.10.\r\n\r\nhttps://github.com/pytorch/audio/search?q=AutoNonVariableTypeMode\r\n\r\nMigration: https://github.com/pytorch/pytorch/blob/master/docs/cpp/source/notes/inference_mode.rst#migration-guide-from-autononvariabletypemode\r\n\r\ncc @carolineechen \n", "before_files": [{"content": "from . import extension # noqa: F401\nfrom torchaudio._internal import module_utils as _mod_utils # noqa: F401\nfrom torchaudio import (\n compliance,\n datasets,\n functional,\n kaldi_io,\n utils,\n sox_effects,\n transforms,\n)\n\nfrom torchaudio.backend import (\n list_audio_backends,\n get_audio_backend,\n set_audio_backend,\n)\n\ntry:\n from .version import __version__, git_version # noqa: F401\nexcept ImportError:\n pass\n\n__all__ = [\n 'compliance',\n 'datasets',\n 'functional',\n 'kaldi_io',\n 'utils',\n 'sox_effects',\n 'transforms',\n 'list_audio_backends',\n 'get_audio_backend',\n 'set_audio_backend',\n 'save_encinfo',\n 'sox_signalinfo_t',\n 'sox_encodinginfo_t',\n 'get_sox_option_t',\n 'get_sox_encoding_t',\n 'get_sox_bool',\n 'SignalInfo',\n 'EncodingInfo',\n]\n", "path": "torchaudio/__init__.py"}], "after_files": [{"content": "from . import extension # noqa: F401\nfrom torchaudio._internal import module_utils as _mod_utils # noqa: F401\nfrom torchaudio import (\n compliance,\n datasets,\n functional,\n kaldi_io,\n utils,\n sox_effects,\n transforms,\n)\n\nfrom torchaudio.backend import (\n list_audio_backends,\n get_audio_backend,\n set_audio_backend,\n)\n\ntry:\n from .version import __version__, git_version # noqa: F401\nexcept ImportError:\n pass\n\n__all__ = [\n 'compliance',\n 'datasets',\n 'functional',\n 'kaldi_io',\n 'utils',\n 'sox_effects',\n 'transforms',\n 'list_audio_backends',\n 'get_audio_backend',\n 'set_audio_backend',\n]\n", "path": "torchaudio/__init__.py"}]} | 663 | 140 |
gh_patches_debug_4807 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5045 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update CKV_AZURE_43 `each.`
**Describe the issue**
CKV_AZURE_43 StorageAccountName.py VARIABLE_REFS list does not include the `each.` used with for_each meta argument to return UNKNOWN and currently returns FAILED check which is incorrect.
**Examples**
```
module "bootstrap" {
source = "../../modules/bootstrap"
for_each = var.bootstrap_storage
create_storage_account = try(each.value.create_storage, true)
name = each.value.name
resource_group_name = try(each.value.resource_group_name, local.resource_group.name)
location = var.location
storage_acl = try(each.value.storage_acl, false)
tags = var.tags
}
```
Within the bootstrap module - we use the `azurerm_storage_account` :
```
resource "azurerm_storage_account" "this" {
count = var.create_storage_account ? 1 : 0
name = var.name
location = var.location
resource_group_name = var.resource_group_name
min_tls_version = var.min_tls_version
account_replication_type = "LRS"
account_tier = "Standard"
tags = var.tags
queue_properties {
logging {
delete = true
read = true
write = true
version = "1.0"
retention_policy_days = var.retention_policy_days
}
}
network_rules {
default_action = var.storage_acl == true ? "Deny" : "Allow"
ip_rules = var.storage_acl == true ? var.storage_allow_inbound_public_ips : null
virtual_network_subnet_ids = var.storage_acl == true ? var.storage_allow_vnet_subnets : null
}
}
```
And Checkov returns this :
```
Check: CKV_AZURE_43: "Ensure Storage Accounts adhere to the naming rules"
FAILED for resource: module.bootstrap.azurerm_storage_account.this
File: /modules/bootstrap/main.tf:1-25
Calling File: /examples/standalone_vm/main.tf:192-204
Guide: https://docs.bridgecrew.io/docs/ensure-storage-accounts-adhere-to-the-naming-rules
```
**Version (please complete the following information):**
- Checkov Version 2.2.125
**Additional context**
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/azure/StorageAccountName.py`
Content:
```
1 import re
2 from typing import List, Dict, Any
3
4 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
5 from checkov.common.models.enums import CheckResult, CheckCategories
6
7 STO_NAME_REGEX = re.compile(r"^[a-z0-9]{3,24}$")
8 VARIABLE_REFS = ("local.", "module.", "var.", "random_string.", "random_id.", "random_integer.", "random_pet.",
9 "azurecaf_name")
10
11
12 class StorageAccountName(BaseResourceCheck):
13 def __init__(self) -> None:
14 name = "Ensure Storage Accounts adhere to the naming rules"
15 id = "CKV_AZURE_43"
16 supported_resources = ["azurerm_storage_account"]
17 categories = [CheckCategories.CONVENTION]
18 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
19
20 def scan_resource_conf(self, conf: Dict[str, Any]) -> CheckResult:
21 """
22 The Storage Account naming reference:
23 https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview#naming-storage-accounts
24 :param conf: azurerm_storage_account configuration
25 :return: <CheckResult>
26 """
27 name = conf.get("name")
28 if name:
29 name = str(name[0])
30 if any(x in name for x in VARIABLE_REFS):
31 # in the case we couldn't evaluate the name, just ignore
32 return CheckResult.UNKNOWN
33 if re.findall(STO_NAME_REGEX, str(conf["name"][0])):
34 return CheckResult.PASSED
35
36 return CheckResult.FAILED
37
38 def get_evaluated_keys(self) -> List[str]:
39 return ["name"]
40
41
42 check = StorageAccountName()
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/azure/StorageAccountName.py b/checkov/terraform/checks/resource/azure/StorageAccountName.py
--- a/checkov/terraform/checks/resource/azure/StorageAccountName.py
+++ b/checkov/terraform/checks/resource/azure/StorageAccountName.py
@@ -6,7 +6,7 @@
STO_NAME_REGEX = re.compile(r"^[a-z0-9]{3,24}$")
VARIABLE_REFS = ("local.", "module.", "var.", "random_string.", "random_id.", "random_integer.", "random_pet.",
- "azurecaf_name")
+ "azurecaf_name", "each.")
class StorageAccountName(BaseResourceCheck):
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/StorageAccountName.py b/checkov/terraform/checks/resource/azure/StorageAccountName.py\n--- a/checkov/terraform/checks/resource/azure/StorageAccountName.py\n+++ b/checkov/terraform/checks/resource/azure/StorageAccountName.py\n@@ -6,7 +6,7 @@\n \n STO_NAME_REGEX = re.compile(r\"^[a-z0-9]{3,24}$\")\n VARIABLE_REFS = (\"local.\", \"module.\", \"var.\", \"random_string.\", \"random_id.\", \"random_integer.\", \"random_pet.\",\n- \"azurecaf_name\")\n+ \"azurecaf_name\", \"each.\")\n \n \n class StorageAccountName(BaseResourceCheck):\n", "issue": "Update CKV_AZURE_43 `each.`\n**Describe the issue**\r\nCKV_AZURE_43 StorageAccountName.py VARIABLE_REFS list does not include the `each.` used with for_each meta argument to return UNKNOWN and currently returns FAILED check which is incorrect.\r\n\r\n**Examples**\r\n\r\n```\r\nmodule \"bootstrap\" {\r\n source = \"../../modules/bootstrap\"\r\n\r\n for_each = var.bootstrap_storage\r\n\r\n create_storage_account = try(each.value.create_storage, true)\r\n name = each.value.name\r\n resource_group_name = try(each.value.resource_group_name, local.resource_group.name)\r\n location = var.location\r\n storage_acl = try(each.value.storage_acl, false)\r\n\r\n tags = var.tags\r\n}\r\n```\r\n\r\nWithin the bootstrap module - we use the `azurerm_storage_account` :\r\n\r\n```\r\nresource \"azurerm_storage_account\" \"this\" {\r\n count = var.create_storage_account ? 1 : 0\r\n\r\n name = var.name\r\n location = var.location\r\n resource_group_name = var.resource_group_name\r\n min_tls_version = var.min_tls_version\r\n account_replication_type = \"LRS\"\r\n account_tier = \"Standard\"\r\n tags = var.tags\r\n queue_properties {\r\n logging {\r\n delete = true\r\n read = true\r\n write = true\r\n version = \"1.0\"\r\n retention_policy_days = var.retention_policy_days\r\n }\r\n }\r\n network_rules {\r\n default_action = var.storage_acl == true ? \"Deny\" : \"Allow\"\r\n ip_rules = var.storage_acl == true ? var.storage_allow_inbound_public_ips : null\r\n virtual_network_subnet_ids = var.storage_acl == true ? 
var.storage_allow_vnet_subnets : null\r\n }\r\n}\r\n```\r\n\r\nAnd Checkov returns this :\r\n\r\n```\r\nCheck: CKV_AZURE_43: \"Ensure Storage Accounts adhere to the naming rules\"\r\n FAILED for resource: module.bootstrap.azurerm_storage_account.this\r\n File: /modules/bootstrap/main.tf:1-25\r\n Calling File: /examples/standalone_vm/main.tf:192-204\r\n Guide: https://docs.bridgecrew.io/docs/ensure-storage-accounts-adhere-to-the-naming-rules\r\n```\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.2.125\r\n\r\n**Additional context**\r\n\n", "before_files": [{"content": "import re\nfrom typing import List, Dict, Any\n\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nfrom checkov.common.models.enums import CheckResult, CheckCategories\n\nSTO_NAME_REGEX = re.compile(r\"^[a-z0-9]{3,24}$\")\nVARIABLE_REFS = (\"local.\", \"module.\", \"var.\", \"random_string.\", \"random_id.\", \"random_integer.\", \"random_pet.\",\n \"azurecaf_name\")\n\n\nclass StorageAccountName(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure Storage Accounts adhere to the naming rules\"\n id = \"CKV_AZURE_43\"\n supported_resources = [\"azurerm_storage_account\"]\n categories = [CheckCategories.CONVENTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: Dict[str, Any]) -> CheckResult:\n \"\"\"\n The Storage Account naming reference:\n https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview#naming-storage-accounts\n :param conf: azurerm_storage_account configuration\n :return: <CheckResult>\n \"\"\"\n name = conf.get(\"name\")\n if name:\n name = str(name[0])\n if any(x in name for x in VARIABLE_REFS):\n # in the case we couldn't evaluate the name, just ignore\n return CheckResult.UNKNOWN\n if re.findall(STO_NAME_REGEX, str(conf[\"name\"][0])):\n return CheckResult.PASSED\n\n return CheckResult.FAILED\n\n def get_evaluated_keys(self) -> List[str]:\n return [\"name\"]\n\n\ncheck = StorageAccountName()\n", "path": "checkov/terraform/checks/resource/azure/StorageAccountName.py"}], "after_files": [{"content": "import re\nfrom typing import List, Dict, Any\n\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nfrom checkov.common.models.enums import CheckResult, CheckCategories\n\nSTO_NAME_REGEX = re.compile(r\"^[a-z0-9]{3,24}$\")\nVARIABLE_REFS = (\"local.\", \"module.\", \"var.\", \"random_string.\", \"random_id.\", \"random_integer.\", \"random_pet.\",\n \"azurecaf_name\", \"each.\")\n\n\nclass StorageAccountName(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure Storage Accounts adhere to the naming rules\"\n id = \"CKV_AZURE_43\"\n supported_resources = [\"azurerm_storage_account\"]\n categories = [CheckCategories.CONVENTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: Dict[str, Any]) -> CheckResult:\n \"\"\"\n The Storage Account naming reference:\n https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview#naming-storage-accounts\n :param conf: azurerm_storage_account configuration\n :return: <CheckResult>\n \"\"\"\n name = conf.get(\"name\")\n if name:\n name = str(name[0])\n if any(x in name for x in VARIABLE_REFS):\n # in the case we couldn't evaluate the name, just ignore\n return CheckResult.UNKNOWN\n if re.findall(STO_NAME_REGEX, 
str(conf[\"name\"][0])):\n return CheckResult.PASSED\n\n return CheckResult.FAILED\n\n def get_evaluated_keys(self) -> List[str]:\n return [\"name\"]\n\n\ncheck = StorageAccountName()\n", "path": "checkov/terraform/checks/resource/azure/StorageAccountName.py"}]} | 1,234 | 156 |
gh_patches_debug_14869 | rasdani/github-patches | git_diff | freedomofpress__securedrop-634 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Inconsistent use of "codename" vs "code name"
We should use either "codename" or "code name" consistently throughout. For example:
- on http://localhost:8080/generate it says “remember this code”
- on /generate it says "we're assigning you a unique code name.”
- on /generate it says “already have a codename?”
- on /login it says “enter your codename”
- on /lookup it says "You can submit more documents from this code name below.”
- on /lookup it says "Remember, your codename is”
I prefer "codename".
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/source.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import os
3 from datetime import datetime
4 import uuid
5 from functools import wraps
6 import zipfile
7 from cStringIO import StringIO
8 import subprocess
9
10 import logging
11 # This module's logger is explicitly labeled so the correct logger is used,
12 # even when this is run from the command line (e.g. during development)
13 log = logging.getLogger('source')
14
15 from flask import (Flask, request, render_template, session, redirect, url_for,
16 flash, abort, g, send_file)
17 from flask_wtf.csrf import CsrfProtect
18
19 from sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound
20 from sqlalchemy.exc import IntegrityError
21
22 import config
23 import version
24 import crypto_util
25 import store
26 import background
27 import template_filters
28 from db import db_session, Source, Submission
29 from jinja2 import evalcontextfilter
30
31 app = Flask(__name__, template_folder=config.SOURCE_TEMPLATES_DIR)
32 app.config.from_object(config.SourceInterfaceFlaskConfig)
33 CsrfProtect(app)
34
35 SUBMIT_DOC_NOTIFY_STR = "Thanks! We received your document"
36 SUBMIT_MSG_NOTIFY_STR = "Thanks! We received your message"
37 SUBMIT_CODENAME_NOTIFY_STR = "Please remember your codename: you can use it to log back into this site to read responses from us and to submit follow-up documents and messages."
38
39 app.jinja_env.globals['version'] = version.__version__
40 if getattr(config, 'CUSTOM_HEADER_IMAGE', None):
41 app.jinja_env.globals['header_image'] = config.CUSTOM_HEADER_IMAGE
42 app.jinja_env.globals['use_custom_header_image'] = True
43 else:
44 app.jinja_env.globals['header_image'] = 'logo.png'
45 app.jinja_env.globals['use_custom_header_image'] = False
46
47 app.jinja_env.filters['datetimeformat'] = template_filters.datetimeformat
48 app.jinja_env.filters['nl2br'] = evalcontextfilter(template_filters.nl2br)
49
50 @app.teardown_appcontext
51 def shutdown_session(exception=None):
52 """Automatically remove database sessions at the end of the request, or
53 when the application shuts down"""
54 db_session.remove()
55
56
57 def logged_in():
58 return 'logged_in' in session
59
60
61 def login_required(f):
62 @wraps(f)
63 def decorated_function(*args, **kwargs):
64 if not logged_in():
65 return redirect(url_for('login'))
66 return f(*args, **kwargs)
67 return decorated_function
68
69
70 def ignore_static(f):
71 """Only executes the wrapped function if we're not loading a static resource."""
72 @wraps(f)
73 def decorated_function(*args, **kwargs):
74 if request.path.startswith('/static'):
75 return # don't execute the decorated function
76 return f(*args, **kwargs)
77 return decorated_function
78
79
80 @app.before_request
81 @ignore_static
82 def setup_g():
83 """Store commonly used values in Flask's special g object"""
84 # ignore_static here because `crypto_util.hash_codename` is scrypt (very
85 # time consuming), and we don't need to waste time running if we're just
86 # serving a static resource that won't need to access these common values.
87 if logged_in():
88 g.codename = session['codename']
89 g.sid = crypto_util.hash_codename(g.codename)
90 try:
91 g.source = Source.query.filter(Source.filesystem_id == g.sid).one()
92 except MultipleResultsFound as e:
93 app.logger.error("Found multiple Sources when one was expected: %s" % (e,))
94 abort(500)
95 except NoResultFound as e:
96 app.logger.error("Found no Sources when one was expected: %s" % (e,))
97 del session['logged_in']
98 del session['codename']
99 return redirect(url_for('index'))
100 g.loc = store.path(g.sid)
101
102
103 @app.before_request
104 @ignore_static
105 def check_tor2web():
106 # ignore_static here so we only flash a single message warning about Tor2Web,
107 # corresponding to the intial page load.
108 if 'X-tor2web' in request.headers:
109 flash('<strong>WARNING:</strong> You appear to be using Tor2Web. '
110 'This <strong>does not</strong> provide anonymity. '
111 '<a href="/tor2web-warning">Why is this dangerous?</a>',
112 "banner-warning")
113
114
115 @app.route('/')
116 def index():
117 return render_template('index.html')
118
119
120 def generate_unique_codename(num_words):
121 """Generate random codenames until we get an unused one"""
122 while True:
123 codename = crypto_util.genrandomid(num_words)
124 sid = crypto_util.hash_codename(codename) # scrypt (slow)
125 matching_sources = Source.query.filter(Source.filesystem_id == sid).all()
126 if len(matching_sources) == 0:
127 return codename
128
129
130 @app.route('/generate', methods=('GET', 'POST'))
131 def generate():
132 # Popping this key prevents errors when a logged in user returns to /generate.
133 # TODO: is this the best experience? A logged in user will be automatically
134 # logged out if they navigate to /generate by accident, which could be
135 # confusing. It might be better to instead redirect them to the lookup
136 # page, or inform them that they're logged in.
137 session.pop('logged_in', None)
138
139 number_words = 8
140 if request.method == 'POST':
141 number_words = int(request.form['number-words'])
142 if number_words not in range(7, 11):
143 abort(403)
144
145 codename = generate_unique_codename(number_words)
146 session['codename'] = codename
147 return render_template('generate.html', codename=codename)
148
149
150 @app.route('/create', methods=['POST'])
151 def create():
152 sid = crypto_util.hash_codename(session['codename'])
153
154 source = Source(sid, crypto_util.display_id())
155 db_session.add(source)
156 try:
157 db_session.commit()
158 except IntegrityError as e:
159 app.logger.error("Attempt to create a source with duplicate codename: %s" % (e,))
160 else:
161 os.mkdir(store.path(sid))
162
163 session['logged_in'] = True
164 return redirect(url_for('lookup'))
165
166
167 @app.route('/lookup', methods=('GET',))
168 @login_required
169 def lookup():
170 replies = []
171 for fn in os.listdir(g.loc):
172 if fn.endswith('-reply.gpg'):
173 try:
174 msg = crypto_util.decrypt(g.codename,
175 file(store.path(g.sid, fn)).read()).decode("utf-8")
176 except UnicodeDecodeError:
177 app.logger.error("Could not decode reply %s" % fn)
178 else:
179 date = datetime.fromtimestamp(os.stat(store.path(g.sid, fn)).st_mtime)
180 replies.append(dict(id=fn, date=date, msg=msg))
181
182 def async_genkey(sid, codename):
183 with app.app_context():
184 background.execute(lambda: crypto_util.genkeypair(sid, codename))
185
186 # Generate a keypair to encrypt replies from the journalist
187 # Only do this if the journalist has flagged the source as one
188 # that they would like to reply to. (Issue #140.)
189 if not crypto_util.getkey(g.sid) and g.source.flagged:
190 async_genkey(g.sid, g.codename)
191
192 # if this was a redirect from the login page, flash a message if there are
193 # no replies to clarify "check for replies" flow (#393)
194 if request.args.get('from_login') == '1' and len(replies) == 0:
195 flash("There are no replies at this time. You can submit more documents from this code name below.", "notification")
196
197 return render_template('lookup.html', codename=g.codename, replies=replies,
198 flagged=g.source.flagged, haskey=crypto_util.getkey(g.sid))
199
200
201 def normalize_timestamps(sid):
202 """
203 Update the timestamps on all of the source's submissions to match that of
204 the latest submission. This minimizes metadata that could be useful to
205 investigators. See #301.
206 """
207 sub_paths = [ store.path(sid, submission.filename)
208 for submission in g.source.submissions ]
209 if len(sub_paths) > 1:
210 args = ["touch"]
211 args.extend(sub_paths[:-1])
212 rc = subprocess.call(args)
213 if rc != 0:
214 app.logger.warning("Couldn't normalize submission timestamps (touch exited with %d)" % rc)
215
216
217 @app.route('/submit', methods=('POST',))
218 @login_required
219 def submit():
220 msg = request.form['msg']
221 fh = request.files['fh']
222
223 fnames = []
224 journalist_filename = g.source.journalist_filename()
225
226 if msg:
227 g.source.interaction_count += 1
228 fnames.append(store.save_message_submission(g.sid, g.source.interaction_count,
229 journalist_filename, msg))
230 flash("{}. {}".format(SUBMIT_MSG_NOTIFY_STR,
231 SUBMIT_CODENAME_NOTIFY_STR), "notification")
232 if fh:
233 g.source.interaction_count += 1
234 fnames.append(store.save_file_submission(g.sid, g.source.interaction_count,
235 journalist_filename, fh.filename, fh.stream))
236 flash("{} '{}'. {}".format(SUBMIT_DOC_NOTIFY_STR,
237 fh.filename or '[unnamed]',
238 SUBMIT_CODENAME_NOTIFY_STR), "notification")
239 for fname in fnames:
240 submission = Submission(g.source, fname)
241 db_session.add(submission)
242
243 if g.source.pending:
244 g.source.pending = False
245
246 # Generate a keypair now, if there's enough entropy (issue #303)
247 entropy_avail = int(open('/proc/sys/kernel/random/entropy_avail').read())
248 if entropy_avail >= 2400:
249 crypto_util.genkeypair(g.sid, g.codename)
250
251 g.source.last_updated = datetime.utcnow()
252 db_session.commit()
253 normalize_timestamps(g.sid)
254
255 return redirect(url_for('lookup'))
256
257
258 @app.route('/delete', methods=('POST',))
259 @login_required
260 def delete():
261 msgid = request.form['msgid']
262 assert '/' not in msgid
263 potential_files = os.listdir(g.loc)
264 if msgid not in potential_files:
265 abort(404) # TODO are the checks necessary?
266 store.secure_unlink(store.path(g.sid, msgid))
267 flash("Reply deleted.", "notification")
268
269 return redirect(url_for('lookup'))
270
271
272 def valid_codename(codename):
273 return os.path.exists(store.path(crypto_util.hash_codename(codename)))
274
275 @app.route('/login', methods=('GET', 'POST'))
276 def login():
277 if request.method == 'POST':
278 codename = request.form['codename']
279 try:
280 valid = valid_codename(codename)
281 except crypto_util.CryptoException:
282 pass
283 else:
284 if valid:
285 session.update(codename=codename, logged_in=True)
286 return redirect(url_for('lookup', from_login='1'))
287 flash("Sorry, that is not a recognized codename.", "error")
288 return render_template('login.html')
289
290
291 @app.route('/howto-disable-js')
292 def howto_disable_js():
293 return render_template("howto-disable-js.html")
294
295
296 @app.route('/tor2web-warning')
297 def tor2web_warning():
298 return render_template("tor2web-warning.html")
299
300
301 @app.route('/journalist-key')
302 def download_journalist_pubkey():
303 journalist_pubkey = crypto_util.gpg.export_keys(config.JOURNALIST_KEY)
304 return send_file(StringIO(journalist_pubkey),
305 mimetype="application/pgp-keys",
306 attachment_filename=config.JOURNALIST_KEY + ".asc",
307 as_attachment=True)
308
309
310 @app.route('/why-journalist-key')
311 def why_download_journalist_pubkey():
312 return render_template("why-journalist-key.html")
313
314
315 @app.errorhandler(404)
316 def page_not_found(error):
317 return render_template('notfound.html'), 404
318
319 @app.errorhandler(500)
320 def internal_error(error):
321 return render_template('error.html'), 500
322
323 def write_pidfile():
324 pid = str(os.getpid())
325 with open(config.SOURCE_PIDFILE, 'w') as fp:
326 fp.write(pid)
327
328 if __name__ == "__main__":
329 write_pidfile()
330 # TODO make sure debug is not on in production
331 app.run(debug=True, host='0.0.0.0', port=8080)
332
333
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/securedrop/source.py b/securedrop/source.py
--- a/securedrop/source.py
+++ b/securedrop/source.py
@@ -192,7 +192,7 @@
# if this was a redirect from the login page, flash a message if there are
# no replies to clarify "check for replies" flow (#393)
if request.args.get('from_login') == '1' and len(replies) == 0:
- flash("There are no replies at this time. You can submit more documents from this code name below.", "notification")
+ flash("There are no replies at this time. You can submit more documents from this codename below.", "notification")
return render_template('lookup.html', codename=g.codename, replies=replies,
flagged=g.source.flagged, haskey=crypto_util.getkey(g.sid))
| {"golden_diff": "diff --git a/securedrop/source.py b/securedrop/source.py\n--- a/securedrop/source.py\n+++ b/securedrop/source.py\n@@ -192,7 +192,7 @@\n # if this was a redirect from the login page, flash a message if there are\n # no replies to clarify \"check for replies\" flow (#393)\n if request.args.get('from_login') == '1' and len(replies) == 0:\n- flash(\"There are no replies at this time. You can submit more documents from this code name below.\", \"notification\")\n+ flash(\"There are no replies at this time. You can submit more documents from this codename below.\", \"notification\")\n \n return render_template('lookup.html', codename=g.codename, replies=replies,\n flagged=g.source.flagged, haskey=crypto_util.getkey(g.sid))\n", "issue": "Inconsistent use of \"codename\" vs \"code name\"\nWe should use either \"codename\" or \"code name\" consistently throughout. For example:\n- on http://localhost:8080/generate it says \u201cremember this code\u201d\n- on /generate it says \"we're assigning you a unique code name.\u201d\n- on /generate it says \u201calready have a codename?\u201d\n- on /login it says \u201center your codename\u201d\n- on /lookup it says \"You can submit more documents from this code name below.\u201d\n- on /lookup it says \"Remember, your codename is\u201d\n\nI prefer \"codename\".\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport os\nfrom datetime import datetime\nimport uuid\nfrom functools import wraps\nimport zipfile\nfrom cStringIO import StringIO\nimport subprocess\n\nimport logging\n# This module's logger is explicitly labeled so the correct logger is used,\n# even when this is run from the command line (e.g. during development)\nlog = logging.getLogger('source')\n\nfrom flask import (Flask, request, render_template, session, redirect, url_for,\n flash, abort, g, send_file)\nfrom flask_wtf.csrf import CsrfProtect\n\nfrom sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound\nfrom sqlalchemy.exc import IntegrityError\n\nimport config\nimport version\nimport crypto_util\nimport store\nimport background\nimport template_filters\nfrom db import db_session, Source, Submission\nfrom jinja2 import evalcontextfilter\n\napp = Flask(__name__, template_folder=config.SOURCE_TEMPLATES_DIR)\napp.config.from_object(config.SourceInterfaceFlaskConfig)\nCsrfProtect(app)\n\nSUBMIT_DOC_NOTIFY_STR = \"Thanks! We received your document\"\nSUBMIT_MSG_NOTIFY_STR = \"Thanks! 
We received your message\"\nSUBMIT_CODENAME_NOTIFY_STR = \"Please remember your codename: you can use it to log back into this site to read responses from us and to submit follow-up documents and messages.\"\n\napp.jinja_env.globals['version'] = version.__version__\nif getattr(config, 'CUSTOM_HEADER_IMAGE', None):\n app.jinja_env.globals['header_image'] = config.CUSTOM_HEADER_IMAGE\n app.jinja_env.globals['use_custom_header_image'] = True\nelse:\n app.jinja_env.globals['header_image'] = 'logo.png'\n app.jinja_env.globals['use_custom_header_image'] = False\n\napp.jinja_env.filters['datetimeformat'] = template_filters.datetimeformat\napp.jinja_env.filters['nl2br'] = evalcontextfilter(template_filters.nl2br)\n\[email protected]_appcontext\ndef shutdown_session(exception=None):\n \"\"\"Automatically remove database sessions at the end of the request, or\n when the application shuts down\"\"\"\n db_session.remove()\n\n\ndef logged_in():\n return 'logged_in' in session\n\n\ndef login_required(f):\n @wraps(f)\n def decorated_function(*args, **kwargs):\n if not logged_in():\n return redirect(url_for('login'))\n return f(*args, **kwargs)\n return decorated_function\n\n\ndef ignore_static(f):\n \"\"\"Only executes the wrapped function if we're not loading a static resource.\"\"\"\n @wraps(f)\n def decorated_function(*args, **kwargs):\n if request.path.startswith('/static'):\n return # don't execute the decorated function\n return f(*args, **kwargs)\n return decorated_function\n\n\[email protected]_request\n@ignore_static\ndef setup_g():\n \"\"\"Store commonly used values in Flask's special g object\"\"\"\n # ignore_static here because `crypto_util.hash_codename` is scrypt (very\n # time consuming), and we don't need to waste time running if we're just\n # serving a static resource that won't need to access these common values.\n if logged_in():\n g.codename = session['codename']\n g.sid = crypto_util.hash_codename(g.codename)\n try:\n g.source = Source.query.filter(Source.filesystem_id == g.sid).one()\n except MultipleResultsFound as e:\n app.logger.error(\"Found multiple Sources when one was expected: %s\" % (e,))\n abort(500)\n except NoResultFound as e:\n app.logger.error(\"Found no Sources when one was expected: %s\" % (e,))\n del session['logged_in']\n del session['codename']\n return redirect(url_for('index'))\n g.loc = store.path(g.sid)\n\n\[email protected]_request\n@ignore_static\ndef check_tor2web():\n # ignore_static here so we only flash a single message warning about Tor2Web,\n # corresponding to the intial page load.\n if 'X-tor2web' in request.headers:\n flash('<strong>WARNING:</strong> You appear to be using Tor2Web. '\n 'This <strong>does not</strong> provide anonymity. '\n '<a href=\"/tor2web-warning\">Why is this dangerous?</a>',\n \"banner-warning\")\n\n\[email protected]('/')\ndef index():\n return render_template('index.html')\n\n\ndef generate_unique_codename(num_words):\n \"\"\"Generate random codenames until we get an unused one\"\"\"\n while True:\n codename = crypto_util.genrandomid(num_words)\n sid = crypto_util.hash_codename(codename) # scrypt (slow)\n matching_sources = Source.query.filter(Source.filesystem_id == sid).all()\n if len(matching_sources) == 0:\n return codename\n\n\[email protected]('/generate', methods=('GET', 'POST'))\ndef generate():\n # Popping this key prevents errors when a logged in user returns to /generate.\n # TODO: is this the best experience? 
A logged in user will be automatically\n # logged out if they navigate to /generate by accident, which could be\n # confusing. It might be better to instead redirect them to the lookup\n # page, or inform them that they're logged in.\n session.pop('logged_in', None)\n\n number_words = 8\n if request.method == 'POST':\n number_words = int(request.form['number-words'])\n if number_words not in range(7, 11):\n abort(403)\n\n codename = generate_unique_codename(number_words)\n session['codename'] = codename\n return render_template('generate.html', codename=codename)\n\n\[email protected]('/create', methods=['POST'])\ndef create():\n sid = crypto_util.hash_codename(session['codename'])\n\n source = Source(sid, crypto_util.display_id())\n db_session.add(source)\n try:\n db_session.commit()\n except IntegrityError as e: \n app.logger.error(\"Attempt to create a source with duplicate codename: %s\" % (e,))\n else:\n os.mkdir(store.path(sid))\n\n session['logged_in'] = True\n return redirect(url_for('lookup'))\n\n\[email protected]('/lookup', methods=('GET',))\n@login_required\ndef lookup():\n replies = []\n for fn in os.listdir(g.loc):\n if fn.endswith('-reply.gpg'):\n try:\n msg = crypto_util.decrypt(g.codename,\n file(store.path(g.sid, fn)).read()).decode(\"utf-8\")\n except UnicodeDecodeError:\n app.logger.error(\"Could not decode reply %s\" % fn)\n else:\n date = datetime.fromtimestamp(os.stat(store.path(g.sid, fn)).st_mtime)\n replies.append(dict(id=fn, date=date, msg=msg))\n\n def async_genkey(sid, codename):\n with app.app_context():\n background.execute(lambda: crypto_util.genkeypair(sid, codename))\n\n # Generate a keypair to encrypt replies from the journalist\n # Only do this if the journalist has flagged the source as one\n # that they would like to reply to. (Issue #140.)\n if not crypto_util.getkey(g.sid) and g.source.flagged:\n async_genkey(g.sid, g.codename)\n\n # if this was a redirect from the login page, flash a message if there are\n # no replies to clarify \"check for replies\" flow (#393)\n if request.args.get('from_login') == '1' and len(replies) == 0:\n flash(\"There are no replies at this time. You can submit more documents from this code name below.\", \"notification\")\n\n return render_template('lookup.html', codename=g.codename, replies=replies,\n flagged=g.source.flagged, haskey=crypto_util.getkey(g.sid))\n\n\ndef normalize_timestamps(sid):\n \"\"\"\n Update the timestamps on all of the source's submissions to match that of\n the latest submission. This minimizes metadata that could be useful to\n investigators. See #301.\n \"\"\"\n sub_paths = [ store.path(sid, submission.filename)\n for submission in g.source.submissions ]\n if len(sub_paths) > 1:\n args = [\"touch\"]\n args.extend(sub_paths[:-1])\n rc = subprocess.call(args)\n if rc != 0:\n app.logger.warning(\"Couldn't normalize submission timestamps (touch exited with %d)\" % rc)\n\n\[email protected]('/submit', methods=('POST',))\n@login_required\ndef submit():\n msg = request.form['msg']\n fh = request.files['fh']\n\n fnames = []\n journalist_filename = g.source.journalist_filename()\n\n if msg:\n g.source.interaction_count += 1\n fnames.append(store.save_message_submission(g.sid, g.source.interaction_count,\n journalist_filename, msg))\n flash(\"{}. 
{}\".format(SUBMIT_MSG_NOTIFY_STR,\n SUBMIT_CODENAME_NOTIFY_STR), \"notification\")\n if fh:\n g.source.interaction_count += 1\n fnames.append(store.save_file_submission(g.sid, g.source.interaction_count,\n journalist_filename, fh.filename, fh.stream))\n flash(\"{} '{}'. {}\".format(SUBMIT_DOC_NOTIFY_STR,\n fh.filename or '[unnamed]',\n SUBMIT_CODENAME_NOTIFY_STR), \"notification\")\n for fname in fnames:\n submission = Submission(g.source, fname)\n db_session.add(submission)\n\n if g.source.pending:\n g.source.pending = False\n\n # Generate a keypair now, if there's enough entropy (issue #303)\n entropy_avail = int(open('/proc/sys/kernel/random/entropy_avail').read())\n if entropy_avail >= 2400:\n crypto_util.genkeypair(g.sid, g.codename)\n\n g.source.last_updated = datetime.utcnow()\n db_session.commit()\n normalize_timestamps(g.sid)\n\n return redirect(url_for('lookup'))\n\n\[email protected]('/delete', methods=('POST',))\n@login_required\ndef delete():\n msgid = request.form['msgid']\n assert '/' not in msgid\n potential_files = os.listdir(g.loc)\n if msgid not in potential_files:\n abort(404) # TODO are the checks necessary?\n store.secure_unlink(store.path(g.sid, msgid))\n flash(\"Reply deleted.\", \"notification\")\n\n return redirect(url_for('lookup'))\n\n\ndef valid_codename(codename):\n return os.path.exists(store.path(crypto_util.hash_codename(codename)))\n\[email protected]('/login', methods=('GET', 'POST'))\ndef login():\n if request.method == 'POST':\n codename = request.form['codename']\n try:\n valid = valid_codename(codename)\n except crypto_util.CryptoException:\n pass\n else:\n if valid:\n session.update(codename=codename, logged_in=True)\n return redirect(url_for('lookup', from_login='1'))\n flash(\"Sorry, that is not a recognized codename.\", \"error\")\n return render_template('login.html')\n\n\[email protected]('/howto-disable-js')\ndef howto_disable_js():\n return render_template(\"howto-disable-js.html\")\n\n\[email protected]('/tor2web-warning')\ndef tor2web_warning():\n return render_template(\"tor2web-warning.html\")\n\n\[email protected]('/journalist-key')\ndef download_journalist_pubkey():\n journalist_pubkey = crypto_util.gpg.export_keys(config.JOURNALIST_KEY)\n return send_file(StringIO(journalist_pubkey),\n mimetype=\"application/pgp-keys\",\n attachment_filename=config.JOURNALIST_KEY + \".asc\",\n as_attachment=True)\n\n\[email protected]('/why-journalist-key')\ndef why_download_journalist_pubkey():\n return render_template(\"why-journalist-key.html\")\n\n\[email protected](404)\ndef page_not_found(error):\n return render_template('notfound.html'), 404\n\[email protected](500)\ndef internal_error(error):\n return render_template('error.html'), 500\n\ndef write_pidfile():\n pid = str(os.getpid())\n with open(config.SOURCE_PIDFILE, 'w') as fp:\n fp.write(pid)\n\nif __name__ == \"__main__\":\n write_pidfile()\n # TODO make sure debug is not on in production\n app.run(debug=True, host='0.0.0.0', port=8080)\n\n", "path": "securedrop/source.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport os\nfrom datetime import datetime\nimport uuid\nfrom functools import wraps\nimport zipfile\nfrom cStringIO import StringIO\nimport subprocess\n\nimport logging\n# This module's logger is explicitly labeled so the correct logger is used,\n# even when this is run from the command line (e.g. 
during development)\nlog = logging.getLogger('source')\n\nfrom flask import (Flask, request, render_template, session, redirect, url_for,\n flash, abort, g, send_file)\nfrom flask_wtf.csrf import CsrfProtect\n\nfrom sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound\nfrom sqlalchemy.exc import IntegrityError\n\nimport config\nimport version\nimport crypto_util\nimport store\nimport background\nimport template_filters\nfrom db import db_session, Source, Submission\nfrom jinja2 import evalcontextfilter\n\napp = Flask(__name__, template_folder=config.SOURCE_TEMPLATES_DIR)\napp.config.from_object(config.SourceInterfaceFlaskConfig)\nCsrfProtect(app)\n\nSUBMIT_DOC_NOTIFY_STR = \"Thanks! We received your document\"\nSUBMIT_MSG_NOTIFY_STR = \"Thanks! We received your message\"\nSUBMIT_CODENAME_NOTIFY_STR = \"Please remember your codename: you can use it to log back into this site to read responses from us and to submit follow-up documents and messages.\"\n\napp.jinja_env.globals['version'] = version.__version__\nif getattr(config, 'CUSTOM_HEADER_IMAGE', None):\n app.jinja_env.globals['header_image'] = config.CUSTOM_HEADER_IMAGE\n app.jinja_env.globals['use_custom_header_image'] = True\nelse:\n app.jinja_env.globals['header_image'] = 'logo.png'\n app.jinja_env.globals['use_custom_header_image'] = False\n\napp.jinja_env.filters['datetimeformat'] = template_filters.datetimeformat\napp.jinja_env.filters['nl2br'] = evalcontextfilter(template_filters.nl2br)\n\[email protected]_appcontext\ndef shutdown_session(exception=None):\n \"\"\"Automatically remove database sessions at the end of the request, or\n when the application shuts down\"\"\"\n db_session.remove()\n\n\ndef logged_in():\n return 'logged_in' in session\n\n\ndef login_required(f):\n @wraps(f)\n def decorated_function(*args, **kwargs):\n if not logged_in():\n return redirect(url_for('login'))\n return f(*args, **kwargs)\n return decorated_function\n\n\ndef ignore_static(f):\n \"\"\"Only executes the wrapped function if we're not loading a static resource.\"\"\"\n @wraps(f)\n def decorated_function(*args, **kwargs):\n if request.path.startswith('/static'):\n return # don't execute the decorated function\n return f(*args, **kwargs)\n return decorated_function\n\n\[email protected]_request\n@ignore_static\ndef setup_g():\n \"\"\"Store commonly used values in Flask's special g object\"\"\"\n # ignore_static here because `crypto_util.hash_codename` is scrypt (very\n # time consuming), and we don't need to waste time running if we're just\n # serving a static resource that won't need to access these common values.\n if logged_in():\n g.codename = session['codename']\n g.sid = crypto_util.hash_codename(g.codename)\n try:\n g.source = Source.query.filter(Source.filesystem_id == g.sid).one()\n except MultipleResultsFound as e:\n app.logger.error(\"Found multiple Sources when one was expected: %s\" % (e,))\n abort(500)\n except NoResultFound as e:\n app.logger.error(\"Found no Sources when one was expected: %s\" % (e,))\n del session['logged_in']\n del session['codename']\n return redirect(url_for('index'))\n g.loc = store.path(g.sid)\n\n\[email protected]_request\n@ignore_static\ndef check_tor2web():\n # ignore_static here so we only flash a single message warning about Tor2Web,\n # corresponding to the intial page load.\n if 'X-tor2web' in request.headers:\n flash('<strong>WARNING:</strong> You appear to be using Tor2Web. '\n 'This <strong>does not</strong> provide anonymity. 
'\n '<a href=\"/tor2web-warning\">Why is this dangerous?</a>',\n \"banner-warning\")\n\n\[email protected]('/')\ndef index():\n return render_template('index.html')\n\n\ndef generate_unique_codename(num_words):\n \"\"\"Generate random codenames until we get an unused one\"\"\"\n while True:\n codename = crypto_util.genrandomid(num_words)\n sid = crypto_util.hash_codename(codename) # scrypt (slow)\n matching_sources = Source.query.filter(Source.filesystem_id == sid).all()\n if len(matching_sources) == 0:\n return codename\n\n\[email protected]('/generate', methods=('GET', 'POST'))\ndef generate():\n # Popping this key prevents errors when a logged in user returns to /generate.\n # TODO: is this the best experience? A logged in user will be automatically\n # logged out if they navigate to /generate by accident, which could be\n # confusing. It might be better to instead redirect them to the lookup\n # page, or inform them that they're logged in.\n session.pop('logged_in', None)\n\n number_words = 8\n if request.method == 'POST':\n number_words = int(request.form['number-words'])\n if number_words not in range(7, 11):\n abort(403)\n\n codename = generate_unique_codename(number_words)\n session['codename'] = codename\n return render_template('generate.html', codename=codename)\n\n\[email protected]('/create', methods=['POST'])\ndef create():\n sid = crypto_util.hash_codename(session['codename'])\n\n source = Source(sid, crypto_util.display_id())\n db_session.add(source)\n try:\n db_session.commit()\n except IntegrityError as e: \n app.logger.error(\"Attempt to create a source with duplicate codename: %s\" % (e,))\n else:\n os.mkdir(store.path(sid))\n\n session['logged_in'] = True\n return redirect(url_for('lookup'))\n\n\[email protected]('/lookup', methods=('GET',))\n@login_required\ndef lookup():\n replies = []\n for fn in os.listdir(g.loc):\n if fn.endswith('-reply.gpg'):\n try:\n msg = crypto_util.decrypt(g.codename,\n file(store.path(g.sid, fn)).read()).decode(\"utf-8\")\n except UnicodeDecodeError:\n app.logger.error(\"Could not decode reply %s\" % fn)\n else:\n date = datetime.fromtimestamp(os.stat(store.path(g.sid, fn)).st_mtime)\n replies.append(dict(id=fn, date=date, msg=msg))\n\n def async_genkey(sid, codename):\n with app.app_context():\n background.execute(lambda: crypto_util.genkeypair(sid, codename))\n\n # Generate a keypair to encrypt replies from the journalist\n # Only do this if the journalist has flagged the source as one\n # that they would like to reply to. (Issue #140.)\n if not crypto_util.getkey(g.sid) and g.source.flagged:\n async_genkey(g.sid, g.codename)\n\n # if this was a redirect from the login page, flash a message if there are\n # no replies to clarify \"check for replies\" flow (#393)\n if request.args.get('from_login') == '1' and len(replies) == 0:\n flash(\"There are no replies at this time. You can submit more documents from this codename below.\", \"notification\")\n\n return render_template('lookup.html', codename=g.codename, replies=replies,\n flagged=g.source.flagged, haskey=crypto_util.getkey(g.sid))\n\n\ndef normalize_timestamps(sid):\n \"\"\"\n Update the timestamps on all of the source's submissions to match that of\n the latest submission. This minimizes metadata that could be useful to\n investigators. 
See #301.\n \"\"\"\n sub_paths = [ store.path(sid, submission.filename)\n for submission in g.source.submissions ]\n if len(sub_paths) > 1:\n args = [\"touch\"]\n args.extend(sub_paths[:-1])\n rc = subprocess.call(args)\n if rc != 0:\n app.logger.warning(\"Couldn't normalize submission timestamps (touch exited with %d)\" % rc)\n\n\[email protected]('/submit', methods=('POST',))\n@login_required\ndef submit():\n msg = request.form['msg']\n fh = request.files['fh']\n\n fnames = []\n journalist_filename = g.source.journalist_filename()\n\n if msg:\n g.source.interaction_count += 1\n fnames.append(store.save_message_submission(g.sid, g.source.interaction_count,\n journalist_filename, msg))\n flash(\"{}. {}\".format(SUBMIT_MSG_NOTIFY_STR,\n SUBMIT_CODENAME_NOTIFY_STR), \"notification\")\n if fh:\n g.source.interaction_count += 1\n fnames.append(store.save_file_submission(g.sid, g.source.interaction_count,\n journalist_filename, fh.filename, fh.stream))\n flash(\"{} '{}'. {}\".format(SUBMIT_DOC_NOTIFY_STR,\n fh.filename or '[unnamed]',\n SUBMIT_CODENAME_NOTIFY_STR), \"notification\")\n for fname in fnames:\n submission = Submission(g.source, fname)\n db_session.add(submission)\n\n if g.source.pending:\n g.source.pending = False\n\n # Generate a keypair now, if there's enough entropy (issue #303)\n entropy_avail = int(open('/proc/sys/kernel/random/entropy_avail').read())\n if entropy_avail >= 2400:\n crypto_util.genkeypair(g.sid, g.codename)\n\n g.source.last_updated = datetime.utcnow()\n db_session.commit()\n normalize_timestamps(g.sid)\n\n return redirect(url_for('lookup'))\n\n\[email protected]('/delete', methods=('POST',))\n@login_required\ndef delete():\n msgid = request.form['msgid']\n assert '/' not in msgid\n potential_files = os.listdir(g.loc)\n if msgid not in potential_files:\n abort(404) # TODO are the checks necessary?\n store.secure_unlink(store.path(g.sid, msgid))\n flash(\"Reply deleted.\", \"notification\")\n\n return redirect(url_for('lookup'))\n\n\ndef valid_codename(codename):\n return os.path.exists(store.path(crypto_util.hash_codename(codename)))\n\[email protected]('/login', methods=('GET', 'POST'))\ndef login():\n if request.method == 'POST':\n codename = request.form['codename']\n try:\n valid = valid_codename(codename)\n except crypto_util.CryptoException:\n pass\n else:\n if valid:\n session.update(codename=codename, logged_in=True)\n return redirect(url_for('lookup', from_login='1'))\n flash(\"Sorry, that is not a recognized codename.\", \"error\")\n return render_template('login.html')\n\n\[email protected]('/howto-disable-js')\ndef howto_disable_js():\n return render_template(\"howto-disable-js.html\")\n\n\[email protected]('/tor2web-warning')\ndef tor2web_warning():\n return render_template(\"tor2web-warning.html\")\n\n\[email protected]('/journalist-key')\ndef download_journalist_pubkey():\n journalist_pubkey = crypto_util.gpg.export_keys(config.JOURNALIST_KEY)\n return send_file(StringIO(journalist_pubkey),\n mimetype=\"application/pgp-keys\",\n attachment_filename=config.JOURNALIST_KEY + \".asc\",\n as_attachment=True)\n\n\[email protected]('/why-journalist-key')\ndef why_download_journalist_pubkey():\n return render_template(\"why-journalist-key.html\")\n\n\[email protected](404)\ndef page_not_found(error):\n return render_template('notfound.html'), 404\n\[email protected](500)\ndef internal_error(error):\n return render_template('error.html'), 500\n\ndef write_pidfile():\n pid = str(os.getpid())\n with open(config.SOURCE_PIDFILE, 'w') as fp:\n 
fp.write(pid)\n\nif __name__ == \"__main__\":\n write_pidfile()\n # TODO make sure debug is not on in production\n app.run(debug=True, host='0.0.0.0', port=8080)\n\n", "path": "securedrop/source.py"}]} | 3,974 | 192 |
gh_patches_debug_18039 | rasdani/github-patches | git_diff | TOMToolkit__tom_base-99 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"Since Time" filter in MARS query
Using the MARS query, the results do not depend on the value chosen for the "Since Time" filter.
--- END ISSUE ---
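For context, one way to observe the reported behaviour is to hit the broker's endpoint directly with and without the offending parameter — a minimal sketch, assuming network access to the service and that MARS simply ignores query keys it does not recognise, which is what the symptom suggests (the endpoint shape and the `results` key are taken from the broker code below):

```python
# Hypothetical reproduction sketch, not taken from the report.
# Assumes mars.lco.global is reachable and ignores unknown query keys.
import requests

BASE = 'https://mars.lco.global/?page=1&format=json'

plain = requests.get(BASE).json()
filtered = requests.get(BASE + '&since__time=3600').json()  # the key the form currently sends

# Both calls return the same first page of alerts, i.e. the filter has no effect.
print(len(plain['results']), len(filtered['results']))
```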
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tom_alerts/brokers/mars.py`
Content:
```
1 import requests
2 import json
3 from requests.exceptions import HTTPError
4 from urllib.parse import urlencode
5 from dateutil.parser import parse
6 from django import forms
7 from crispy_forms.layout import Layout, Div, Fieldset, HTML
8 from astropy.time import Time, TimezoneInfo
9
10 from tom_alerts.alerts import GenericQueryForm, GenericAlert
11 from tom_targets.models import Target, TargetExtra
12 from tom_dataproducts.models import ReducedDatum
13
14 MARS_URL = 'https://mars.lco.global'
15 filters = {0: 'g', 1: 'r', 2: 'i'}
16
17
18 class MARSQueryForm(GenericQueryForm):
19 time__gt = forms.CharField(
20 required=False,
21 label='Time Lower',
22 widget=forms.TextInput(attrs={'type': 'date'})
23 )
24 time__lt = forms.CharField(
25 required=False,
26 label='Time Upper',
27 widget=forms.TextInput(attrs={'type': 'date'})
28 )
29 since__time = forms.IntegerField(
30 required=False,
31 label='Since Time',
32 help_text='Alerts younger than this number of seconds'
33 )
34 jd__gt = forms.FloatField(required=False, label='JD Lower')
35 jd__lt = forms.FloatField(required=False, label='JD Upper')
36 filter = forms.CharField(required=False)
37 cone = forms.CharField(
38 required=False,
39 label='Cone Search',
40 help_text='RA,Dec,radius in degrees'
41 )
42 objectcone = forms.CharField(
43 required=False,
44 label='Object Cone Search',
45 help_text='Object name,radius in degrees'
46 )
47 objectidps = forms.CharField(
48 required=False,
49 label='Nearby Objects',
50 help_text='Id from PS1 catalog'
51 )
52 ra__gt = forms.FloatField(required=False, label='RA Lower')
53 ra__lt = forms.FloatField(required=False, label='RA Upper')
54 dec__gt = forms.FloatField(required=False, label='Dec Lower')
55 dec__lt = forms.FloatField(required=False, label='Dec Upper')
56 l__gt = forms.FloatField(required=False, label='l Lower')
57 l__lt = forms.FloatField(required=False, label='l Upper')
58 b__gt = forms.FloatField(required=False, label='b Lower')
59 b__lt = forms.FloatField(required=False, label='b Upper')
60 magpsf__gte = forms.FloatField(required=False, label='Magpsf Lower')
61 magpsf__lte = forms.FloatField(required=False, label='Magpsf Upper')
62 sigmapsf__lte = forms.FloatField(required=False, label='Sigmapsf Upper')
63 magap__gte = forms.FloatField(required=False, label='Magap Lower')
64 magap__lte = forms.FloatField(required=False, label='Magap Upper')
65 distnr__gte = forms.FloatField(required=False, label='Distnr Lower')
66 distnr__lte = forms.FloatField(required=False, label='Distnr Upper')
67 deltamaglatest__gte = forms.FloatField(
68 required=False,
69 label='Delta Mag Lower'
70 )
71 deltamaglatest__lte = forms.FloatField(
72 required=False,
73 label='Delta Mag Upper'
74 )
75 deltamagref__gte = forms.FloatField(
76 required=False,
77 label='Delta Mag Ref Lower'
78 )
79 deltamagref__lte = forms.FloatField(
80 required=False,
81 label='Delta Mag Ref Upper'
82 )
83 rb__gte = forms.FloatField(required=False, label='Real/Bogus Lower')
84 classtar__gte = forms.FloatField(required=False, label='Classtar Lower')
85 classtar__lte = forms.FloatField(required=False, label='Classtar Upper')
86 fwhm__lte = forms.FloatField(required=False, label='FWHM Upper')
87
88 def __init__(self, *args, **kwargs):
89 super().__init__(*args, **kwargs)
90 self.helper.layout = Layout(
91 HTML('''
92 <p>
93 Please see <a href="https://mars.lco.global/help">MARS help</a>
94 for a detailed description of available filters.
95 </p>
96 '''),
97 self.common_layout,
98 Fieldset(
99 'Time based filters',
100 'since__time',
101 Div(
102 Div(
103 'time__gt',
104 'jd__gt',
105 css_class='col',
106 ),
107 Div(
108 'time__lt',
109 'jd__lt',
110 css_class='col',
111 ),
112 css_class="form-row",
113 )
114 ),
115 Fieldset(
116 'Location based filters',
117 'cone',
118 'objectcone',
119 'objectidps',
120 Div(
121 Div(
122 'ra__gt',
123 'dec__gt',
124 'l__gt',
125 'b__gt',
126 css_class='col',
127 ),
128 Div(
129 'ra__lt',
130 'dec__lt',
131 'l__lt',
132 'b__lt',
133 css_class='col',
134 ),
135 css_class="form-row",
136 )
137 ),
138 Fieldset(
139 'Other Filters',
140 Div(
141 Div(
142 'magpsf__gte',
143 'magap__gte',
144 'distnr__gte',
145 'deltamaglatest__gte',
146 'deltamagref__gte',
147 'classtar__gte',
148 css_class='col'
149 ),
150 Div(
151 'magpsf__lte',
152 'magap__lte',
153 'distnr__lte',
154 'deltamaglatest__lte',
155 'deltamagref__lte',
156 'classtar__lte',
157 css_class='col'
158 ),
159 css_class='form-row',
160 )
161 ),
162 'filter',
163 'sigmapsf__lte',
164 'rb__gte',
165 'fwhm__lte'
166 )
167
168
169 class MARSBroker(object):
170 name = 'MARS'
171 form = MARSQueryForm
172
173 def _clean_parameters(self, parameters):
174 return {k: v for k, v in parameters.items() if v and k != 'page'}
175
176 def fetch_alerts(self, parameters):
177 if not parameters.get('page'):
178 parameters['page'] = 1
179 args = urlencode(self._clean_parameters(parameters))
180 url = '{0}/?page={1}&format=json&{2}'.format(
181 MARS_URL,
182 parameters['page'],
183 args
184 )
185 alerts = []
186 response = requests.get(url)
187 response.raise_for_status()
188 parsed = response.json()
189 alerts = parsed['results']
190 if parsed['has_next'] and parameters['page'] < 10:
191 parameters['page'] += 1
192 alerts += self.fetch_alerts(parameters)
193 return alerts
194
195 def fetch_alert(self, id):
196 url = f'{MARS_URL}/{id}/?format=json'
197 response = requests.get(url)
198 response.raise_for_status()
199 parsed = response.json()
200 return parsed
201
202 def process_reduced_data(self, target, alert=None):
203 if not alert:
204 try:
205 target_datum = ReducedDatum.objects.filter(
206 target=target,
207 data_type='PHOTOMETRY',
208 source_name=self.name).first()
209 if not target_datum:
210 return
211 alert = self.fetch_alert(target_datum.source_location)
212 except HTTPError:
213 raise Exception('Unable to retrieve alert information from broker')
214 for prv_candidate in alert.get('prv_candidate'):
215 if all([key in prv_candidate['candidate'] for key in ['jd', 'magpsf', 'fid']]):
216 jd = Time(prv_candidate['candidate']['jd'], format='jd', scale='utc')
217 jd.to_datetime(timezone=TimezoneInfo())
218 value = {
219 'magnitude': prv_candidate['candidate']['magpsf'],
220 'filter': filters[prv_candidate['candidate']['fid']]
221 }
222 rd, created = ReducedDatum.objects.get_or_create(
223 timestamp=jd.to_datetime(timezone=TimezoneInfo()),
224 value=json.dumps(value),
225 source_name=self.name,
226 source_location=alert['lco_id'],
227 data_type='photometry',
228 target=target)
229 rd.save()
230
231 def to_target(self, alert):
232 alert_copy = alert.copy()
233 target = Target.objects.create(
234 identifier=alert_copy['objectId'],
235 name=alert_copy['objectId'],
236 type='SIDEREAL',
237 ra=alert_copy['candidate'].pop('ra'),
238 dec=alert_copy['candidate'].pop('dec'),
239 galactic_lng=alert_copy['candidate'].pop('l'),
240 galactic_lat=alert_copy['candidate'].pop('b'),
241 )
242 for k, v in alert_copy['candidate'].items():
243 if v is not None:
244 TargetExtra.objects.create(target=target, key=k, value=v)
245
246 return target
247
248 def to_generic_alert(self, alert):
249 timestamp = parse(alert['candidate']['wall_time'])
250 url = '{0}/{1}/'.format(MARS_URL, alert['lco_id'])
251
252 return GenericAlert(
253 timestamp=timestamp,
254 url=url,
255 id=alert['lco_id'],
256 name=alert['objectId'],
257 ra=alert['candidate']['ra'],
258 dec=alert['candidate']['dec'],
259 mag=alert['candidate']['magpsf'],
260 score=alert['candidate']['rb']
261 )
262
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tom_alerts/brokers/mars.py b/tom_alerts/brokers/mars.py
--- a/tom_alerts/brokers/mars.py
+++ b/tom_alerts/brokers/mars.py
@@ -26,9 +26,9 @@
label='Time Upper',
widget=forms.TextInput(attrs={'type': 'date'})
)
- since__time = forms.IntegerField(
+ time__since = forms.IntegerField(
required=False,
- label='Since Time',
+ label='Time Since',
help_text='Alerts younger than this number of seconds'
)
jd__gt = forms.FloatField(required=False, label='JD Lower')
@@ -97,7 +97,7 @@
self.common_layout,
Fieldset(
'Time based filters',
- 'since__time',
+ 'time__since',
Div(
Div(
'time__gt',
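The rename above matters because `MARSBroker.fetch_alerts` urlencodes whatever the form collected: a field called `since__time` therefore produced a query parameter the MARS service does not filter on, while `time__since` appears to be the name the API expects, which is why the patch switches to it. A minimal sketch of that encoding path (`build_mars_url` is a hypothetical helper mirroring `fetch_alerts`):

```python
# Minimal sketch of how form keys reach the MARS query string.
# build_mars_url is a hypothetical helper; the URL format and the cleaning
# rule are copied from MARSBroker.fetch_alerts / _clean_parameters above.
from urllib.parse import urlencode

MARS_URL = 'https://mars.lco.global'

def build_mars_url(parameters, page=1):
    # drop empty values and the pagination key, as _clean_parameters does
    cleaned = {k: v for k, v in parameters.items() if v and k != 'page'}
    return '{0}/?page={1}&format=json&{2}'.format(MARS_URL, page, urlencode(cleaned))

# Before the patch the form produced {'since__time': 3600}, a key the service
# does not filter on; after the patch it produces the recognised name below.
print(build_mars_url({'time__since': 3600, 'rb__gte': 0.8}))
```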
| {"golden_diff": "diff --git a/tom_alerts/brokers/mars.py b/tom_alerts/brokers/mars.py\n--- a/tom_alerts/brokers/mars.py\n+++ b/tom_alerts/brokers/mars.py\n@@ -26,9 +26,9 @@\n label='Time Upper',\n widget=forms.TextInput(attrs={'type': 'date'})\n )\n- since__time = forms.IntegerField(\n+ time__since = forms.IntegerField(\n required=False,\n- label='Since Time',\n+ label='Time Since',\n help_text='Alerts younger than this number of seconds'\n )\n jd__gt = forms.FloatField(required=False, label='JD Lower')\n@@ -97,7 +97,7 @@\n self.common_layout,\n Fieldset(\n 'Time based filters',\n- 'since__time',\n+ 'time__since',\n Div(\n Div(\n 'time__gt',\n", "issue": "\"Since Time\" filter in MARS query\nUsing the MARS query, the results do not depend on the value chosen for the \"Since Time\" filter.\n", "before_files": [{"content": "import requests\nimport json\nfrom requests.exceptions import HTTPError\nfrom urllib.parse import urlencode\nfrom dateutil.parser import parse\nfrom django import forms\nfrom crispy_forms.layout import Layout, Div, Fieldset, HTML\nfrom astropy.time import Time, TimezoneInfo\n\nfrom tom_alerts.alerts import GenericQueryForm, GenericAlert\nfrom tom_targets.models import Target, TargetExtra\nfrom tom_dataproducts.models import ReducedDatum\n\nMARS_URL = 'https://mars.lco.global'\nfilters = {0: 'g', 1: 'r', 2: 'i'}\n\n\nclass MARSQueryForm(GenericQueryForm):\n time__gt = forms.CharField(\n required=False,\n label='Time Lower',\n widget=forms.TextInput(attrs={'type': 'date'})\n )\n time__lt = forms.CharField(\n required=False,\n label='Time Upper',\n widget=forms.TextInput(attrs={'type': 'date'})\n )\n since__time = forms.IntegerField(\n required=False,\n label='Since Time',\n help_text='Alerts younger than this number of seconds'\n )\n jd__gt = forms.FloatField(required=False, label='JD Lower')\n jd__lt = forms.FloatField(required=False, label='JD Upper')\n filter = forms.CharField(required=False)\n cone = forms.CharField(\n required=False,\n label='Cone Search',\n help_text='RA,Dec,radius in degrees'\n )\n objectcone = forms.CharField(\n required=False,\n label='Object Cone Search',\n help_text='Object name,radius in degrees'\n )\n objectidps = forms.CharField(\n required=False,\n label='Nearby Objects',\n help_text='Id from PS1 catalog'\n )\n ra__gt = forms.FloatField(required=False, label='RA Lower')\n ra__lt = forms.FloatField(required=False, label='RA Upper')\n dec__gt = forms.FloatField(required=False, label='Dec Lower')\n dec__lt = forms.FloatField(required=False, label='Dec Upper')\n l__gt = forms.FloatField(required=False, label='l Lower')\n l__lt = forms.FloatField(required=False, label='l Upper')\n b__gt = forms.FloatField(required=False, label='b Lower')\n b__lt = forms.FloatField(required=False, label='b Upper')\n magpsf__gte = forms.FloatField(required=False, label='Magpsf Lower')\n magpsf__lte = forms.FloatField(required=False, label='Magpsf Upper')\n sigmapsf__lte = forms.FloatField(required=False, label='Sigmapsf Upper')\n magap__gte = forms.FloatField(required=False, label='Magap Lower')\n magap__lte = forms.FloatField(required=False, label='Magap Upper')\n distnr__gte = forms.FloatField(required=False, label='Distnr Lower')\n distnr__lte = forms.FloatField(required=False, label='Distnr Upper')\n deltamaglatest__gte = forms.FloatField(\n required=False,\n label='Delta Mag Lower'\n )\n deltamaglatest__lte = forms.FloatField(\n required=False,\n label='Delta Mag Upper'\n )\n deltamagref__gte = forms.FloatField(\n required=False,\n label='Delta Mag Ref 
Lower'\n )\n deltamagref__lte = forms.FloatField(\n required=False,\n label='Delta Mag Ref Upper'\n )\n rb__gte = forms.FloatField(required=False, label='Real/Bogus Lower')\n classtar__gte = forms.FloatField(required=False, label='Classtar Lower')\n classtar__lte = forms.FloatField(required=False, label='Classtar Upper')\n fwhm__lte = forms.FloatField(required=False, label='FWHM Upper')\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.helper.layout = Layout(\n HTML('''\n <p>\n Please see <a href=\"https://mars.lco.global/help\">MARS help</a>\n for a detailed description of available filters.\n </p>\n '''),\n self.common_layout,\n Fieldset(\n 'Time based filters',\n 'since__time',\n Div(\n Div(\n 'time__gt',\n 'jd__gt',\n css_class='col',\n ),\n Div(\n 'time__lt',\n 'jd__lt',\n css_class='col',\n ),\n css_class=\"form-row\",\n )\n ),\n Fieldset(\n 'Location based filters',\n 'cone',\n 'objectcone',\n 'objectidps',\n Div(\n Div(\n 'ra__gt',\n 'dec__gt',\n 'l__gt',\n 'b__gt',\n css_class='col',\n ),\n Div(\n 'ra__lt',\n 'dec__lt',\n 'l__lt',\n 'b__lt',\n css_class='col',\n ),\n css_class=\"form-row\",\n )\n ),\n Fieldset(\n 'Other Filters',\n Div(\n Div(\n 'magpsf__gte',\n 'magap__gte',\n 'distnr__gte',\n 'deltamaglatest__gte',\n 'deltamagref__gte',\n 'classtar__gte',\n css_class='col'\n ),\n Div(\n 'magpsf__lte',\n 'magap__lte',\n 'distnr__lte',\n 'deltamaglatest__lte',\n 'deltamagref__lte',\n 'classtar__lte',\n css_class='col'\n ),\n css_class='form-row',\n )\n ),\n 'filter',\n 'sigmapsf__lte',\n 'rb__gte',\n 'fwhm__lte'\n )\n\n\nclass MARSBroker(object):\n name = 'MARS'\n form = MARSQueryForm\n\n def _clean_parameters(self, parameters):\n return {k: v for k, v in parameters.items() if v and k != 'page'}\n\n def fetch_alerts(self, parameters):\n if not parameters.get('page'):\n parameters['page'] = 1\n args = urlencode(self._clean_parameters(parameters))\n url = '{0}/?page={1}&format=json&{2}'.format(\n MARS_URL,\n parameters['page'],\n args\n )\n alerts = []\n response = requests.get(url)\n response.raise_for_status()\n parsed = response.json()\n alerts = parsed['results']\n if parsed['has_next'] and parameters['page'] < 10:\n parameters['page'] += 1\n alerts += self.fetch_alerts(parameters)\n return alerts\n\n def fetch_alert(self, id):\n url = f'{MARS_URL}/{id}/?format=json'\n response = requests.get(url)\n response.raise_for_status()\n parsed = response.json()\n return parsed\n\n def process_reduced_data(self, target, alert=None):\n if not alert:\n try:\n target_datum = ReducedDatum.objects.filter(\n target=target,\n data_type='PHOTOMETRY',\n source_name=self.name).first()\n if not target_datum:\n return\n alert = self.fetch_alert(target_datum.source_location)\n except HTTPError:\n raise Exception('Unable to retrieve alert information from broker')\n for prv_candidate in alert.get('prv_candidate'):\n if all([key in prv_candidate['candidate'] for key in ['jd', 'magpsf', 'fid']]):\n jd = Time(prv_candidate['candidate']['jd'], format='jd', scale='utc')\n jd.to_datetime(timezone=TimezoneInfo())\n value = {\n 'magnitude': prv_candidate['candidate']['magpsf'],\n 'filter': filters[prv_candidate['candidate']['fid']]\n }\n rd, created = ReducedDatum.objects.get_or_create(\n timestamp=jd.to_datetime(timezone=TimezoneInfo()),\n value=json.dumps(value),\n source_name=self.name,\n source_location=alert['lco_id'],\n data_type='photometry',\n target=target)\n rd.save()\n\n def to_target(self, alert):\n alert_copy = alert.copy()\n target = Target.objects.create(\n 
identifier=alert_copy['objectId'],\n name=alert_copy['objectId'],\n type='SIDEREAL',\n ra=alert_copy['candidate'].pop('ra'),\n dec=alert_copy['candidate'].pop('dec'),\n galactic_lng=alert_copy['candidate'].pop('l'),\n galactic_lat=alert_copy['candidate'].pop('b'),\n )\n for k, v in alert_copy['candidate'].items():\n if v is not None:\n TargetExtra.objects.create(target=target, key=k, value=v)\n\n return target\n\n def to_generic_alert(self, alert):\n timestamp = parse(alert['candidate']['wall_time'])\n url = '{0}/{1}/'.format(MARS_URL, alert['lco_id'])\n\n return GenericAlert(\n timestamp=timestamp,\n url=url,\n id=alert['lco_id'],\n name=alert['objectId'],\n ra=alert['candidate']['ra'],\n dec=alert['candidate']['dec'],\n mag=alert['candidate']['magpsf'],\n score=alert['candidate']['rb']\n )\n", "path": "tom_alerts/brokers/mars.py"}], "after_files": [{"content": "import requests\nimport json\nfrom requests.exceptions import HTTPError\nfrom urllib.parse import urlencode\nfrom dateutil.parser import parse\nfrom django import forms\nfrom crispy_forms.layout import Layout, Div, Fieldset, HTML\nfrom astropy.time import Time, TimezoneInfo\n\nfrom tom_alerts.alerts import GenericQueryForm, GenericAlert\nfrom tom_targets.models import Target, TargetExtra\nfrom tom_dataproducts.models import ReducedDatum\n\nMARS_URL = 'https://mars.lco.global'\nfilters = {0: 'g', 1: 'r', 2: 'i'}\n\n\nclass MARSQueryForm(GenericQueryForm):\n time__gt = forms.CharField(\n required=False,\n label='Time Lower',\n widget=forms.TextInput(attrs={'type': 'date'})\n )\n time__lt = forms.CharField(\n required=False,\n label='Time Upper',\n widget=forms.TextInput(attrs={'type': 'date'})\n )\n time__since = forms.IntegerField(\n required=False,\n label='Time Since',\n help_text='Alerts younger than this number of seconds'\n )\n jd__gt = forms.FloatField(required=False, label='JD Lower')\n jd__lt = forms.FloatField(required=False, label='JD Upper')\n filter = forms.CharField(required=False)\n cone = forms.CharField(\n required=False,\n label='Cone Search',\n help_text='RA,Dec,radius in degrees'\n )\n objectcone = forms.CharField(\n required=False,\n label='Object Cone Search',\n help_text='Object name,radius in degrees'\n )\n objectidps = forms.CharField(\n required=False,\n label='Nearby Objects',\n help_text='Id from PS1 catalog'\n )\n ra__gt = forms.FloatField(required=False, label='RA Lower')\n ra__lt = forms.FloatField(required=False, label='RA Upper')\n dec__gt = forms.FloatField(required=False, label='Dec Lower')\n dec__lt = forms.FloatField(required=False, label='Dec Upper')\n l__gt = forms.FloatField(required=False, label='l Lower')\n l__lt = forms.FloatField(required=False, label='l Upper')\n b__gt = forms.FloatField(required=False, label='b Lower')\n b__lt = forms.FloatField(required=False, label='b Upper')\n magpsf__gte = forms.FloatField(required=False, label='Magpsf Lower')\n magpsf__lte = forms.FloatField(required=False, label='Magpsf Upper')\n sigmapsf__lte = forms.FloatField(required=False, label='Sigmapsf Upper')\n magap__gte = forms.FloatField(required=False, label='Magap Lower')\n magap__lte = forms.FloatField(required=False, label='Magap Upper')\n distnr__gte = forms.FloatField(required=False, label='Distnr Lower')\n distnr__lte = forms.FloatField(required=False, label='Distnr Upper')\n deltamaglatest__gte = forms.FloatField(\n required=False,\n label='Delta Mag Lower'\n )\n deltamaglatest__lte = forms.FloatField(\n required=False,\n label='Delta Mag Upper'\n )\n deltamagref__gte = forms.FloatField(\n 
required=False,\n label='Delta Mag Ref Lower'\n )\n deltamagref__lte = forms.FloatField(\n required=False,\n label='Delta Mag Ref Upper'\n )\n rb__gte = forms.FloatField(required=False, label='Real/Bogus Lower')\n classtar__gte = forms.FloatField(required=False, label='Classtar Lower')\n classtar__lte = forms.FloatField(required=False, label='Classtar Upper')\n fwhm__lte = forms.FloatField(required=False, label='FWHM Upper')\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.helper.layout = Layout(\n HTML('''\n <p>\n Please see <a href=\"https://mars.lco.global/help\">MARS help</a>\n for a detailed description of available filters.\n </p>\n '''),\n self.common_layout,\n Fieldset(\n 'Time based filters',\n 'time__since',\n Div(\n Div(\n 'time__gt',\n 'jd__gt',\n css_class='col',\n ),\n Div(\n 'time__lt',\n 'jd__lt',\n css_class='col',\n ),\n css_class=\"form-row\",\n )\n ),\n Fieldset(\n 'Location based filters',\n 'cone',\n 'objectcone',\n 'objectidps',\n Div(\n Div(\n 'ra__gt',\n 'dec__gt',\n 'l__gt',\n 'b__gt',\n css_class='col',\n ),\n Div(\n 'ra__lt',\n 'dec__lt',\n 'l__lt',\n 'b__lt',\n css_class='col',\n ),\n css_class=\"form-row\",\n )\n ),\n Fieldset(\n 'Other Filters',\n Div(\n Div(\n 'magpsf__gte',\n 'magap__gte',\n 'distnr__gte',\n 'deltamaglatest__gte',\n 'deltamagref__gte',\n 'classtar__gte',\n css_class='col'\n ),\n Div(\n 'magpsf__lte',\n 'magap__lte',\n 'distnr__lte',\n 'deltamaglatest__lte',\n 'deltamagref__lte',\n 'classtar__lte',\n css_class='col'\n ),\n css_class='form-row',\n )\n ),\n 'filter',\n 'sigmapsf__lte',\n 'rb__gte',\n 'fwhm__lte'\n )\n\n\nclass MARSBroker(object):\n name = 'MARS'\n form = MARSQueryForm\n\n def _clean_parameters(self, parameters):\n return {k: v for k, v in parameters.items() if v and k != 'page'}\n\n def fetch_alerts(self, parameters):\n if not parameters.get('page'):\n parameters['page'] = 1\n args = urlencode(self._clean_parameters(parameters))\n url = '{0}/?page={1}&format=json&{2}'.format(\n MARS_URL,\n parameters['page'],\n args\n )\n alerts = []\n response = requests.get(url)\n response.raise_for_status()\n parsed = response.json()\n alerts = parsed['results']\n if parsed['has_next'] and parameters['page'] < 10:\n parameters['page'] += 1\n alerts += self.fetch_alerts(parameters)\n return alerts\n\n def fetch_alert(self, id):\n url = f'{MARS_URL}/{id}/?format=json'\n response = requests.get(url)\n response.raise_for_status()\n parsed = response.json()\n return parsed\n\n def process_reduced_data(self, target, alert=None):\n if not alert:\n try:\n target_datum = ReducedDatum.objects.filter(\n target=target,\n data_type='PHOTOMETRY',\n source_name=self.name).first()\n if not target_datum:\n return\n alert = self.fetch_alert(target_datum.source_location)\n except HTTPError:\n raise Exception('Unable to retrieve alert information from broker')\n for prv_candidate in alert.get('prv_candidate'):\n if all([key in prv_candidate['candidate'] for key in ['jd', 'magpsf', 'fid']]):\n jd = Time(prv_candidate['candidate']['jd'], format='jd', scale='utc')\n jd.to_datetime(timezone=TimezoneInfo())\n value = {\n 'magnitude': prv_candidate['candidate']['magpsf'],\n 'filter': filters[prv_candidate['candidate']['fid']]\n }\n rd, created = ReducedDatum.objects.get_or_create(\n timestamp=jd.to_datetime(timezone=TimezoneInfo()),\n value=json.dumps(value),\n source_name=self.name,\n source_location=alert['lco_id'],\n data_type='photometry',\n target=target)\n rd.save()\n\n def to_target(self, alert):\n alert_copy = 
alert.copy()\n target = Target.objects.create(\n identifier=alert_copy['objectId'],\n name=alert_copy['objectId'],\n type='SIDEREAL',\n ra=alert_copy['candidate'].pop('ra'),\n dec=alert_copy['candidate'].pop('dec'),\n galactic_lng=alert_copy['candidate'].pop('l'),\n galactic_lat=alert_copy['candidate'].pop('b'),\n )\n for k, v in alert_copy['candidate'].items():\n if v is not None:\n TargetExtra.objects.create(target=target, key=k, value=v)\n\n return target\n\n def to_generic_alert(self, alert):\n timestamp = parse(alert['candidate']['wall_time'])\n url = '{0}/{1}/'.format(MARS_URL, alert['lco_id'])\n\n return GenericAlert(\n timestamp=timestamp,\n url=url,\n id=alert['lco_id'],\n name=alert['objectId'],\n ra=alert['candidate']['ra'],\n dec=alert['candidate']['dec'],\n mag=alert['candidate']['magpsf'],\n score=alert['candidate']['rb']\n )\n", "path": "tom_alerts/brokers/mars.py"}]} | 2,937 | 202 |
gh_patches_debug_16954 | rasdani/github-patches | git_diff | pypa__setuptools-597 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unfriendly error message when unicode is passed to package_dir or packages
Originally reported by: **jaraco (Bitbucket: [jaraco](http://bitbucket.org/jaraco), GitHub: [jaraco](http://github.com/jaraco))**
---
Adding `__future__.unicode_literals` causes setup.py scripts to fail. In the mailman project, here is an example error when running python2 setup.py build:
```
running build
running build_py
Traceback (most recent call last):
File "setup.py", line 112, in <module>
test_suite = 'nose2.collector.collector',
File "C:\Program Files\Python27\lib\distutils\core.py", line 152, in setup
dist.run_commands()
File "C:\Program Files\Python27\lib\distutils\dist.py", line 953, in run_commands
self.run_command(cmd)
File "C:\Program Files\Python27\lib\distutils\dist.py", line 972, in run_command
cmd_obj.run()
File "C:\Program Files\Python27\lib\distutils\command\build.py", line 127, in run
self.run_command(cmd_name)
File "C:\Program Files\Python27\lib\distutils\cmd.py", line 326, in run_command
self.distribution.run_command(command)
File "C:\Program Files\Python27\lib\distutils\dist.py", line 972, in run_command
cmd_obj.run()
File "build\bdist.win-amd64\egg\setuptools\command\build_py.py", line 42, in run
File "C:\Program Files\Python27\lib\distutils\command\build_py.py", line 372, in build_packages
self.build_module(module, module_file, package)
File "build\bdist.win-amd64\egg\setuptools\command\build_py.py", line 60, in build_module
File "C:\Program Files\Python27\lib\distutils\command\build_py.py", line 333, in build_module
"'package' must be a string (dot-separated), list, or tuple")
TypeError: 'package' must be a string (dot-separated), list, or tuple
```
A different error occurs when using 'develop':
```
running develop
error: 'egg_base' must be a directory name (got `src`)
```
Setuptools could make this error message nicer.
---
- Bitbucket: https://bitbucket.org/pypa/setuptools/issue/190
--- END ISSUE ---
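For context, the failing setup script described above looks roughly like the following sketch (hypothetical project layout and names, not taken from the mailman project); under Python 2 the `unicode_literals` import turns every literal into `unicode`, which the stock distutils commands then reject with the errors quoted in the issue:

```python
# Hypothetical minimal reproduction for Python 2: the package names and the
# 'src' directory become unicode objects, triggering the TypeError from
# build_py and the 'egg_base' error from the develop command.
from __future__ import unicode_literals
from setuptools import setup

setup(
    name='example',
    version='1.0',
    package_dir={'': 'src'},
    packages=['example', 'example.sub'],
)
```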
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setuptools/command/build_py.py`
Content:
```
1 from glob import glob
2 from distutils.util import convert_path
3 import distutils.command.build_py as orig
4 import os
5 import fnmatch
6 import textwrap
7 import io
8 import distutils.errors
9 import itertools
10
11 from setuptools.extern.six.moves import map, filter, filterfalse
12
13 try:
14 from setuptools.lib2to3_ex import Mixin2to3
15 except ImportError:
16 class Mixin2to3:
17 def run_2to3(self, files, doctests=True):
18 "do nothing"
19
20
21 class build_py(orig.build_py, Mixin2to3):
22 """Enhanced 'build_py' command that includes data files with packages
23
24 The data files are specified via a 'package_data' argument to 'setup()'.
25 See 'setuptools.dist.Distribution' for more details.
26
27 Also, this version of the 'build_py' command allows you to specify both
28 'py_modules' and 'packages' in the same setup operation.
29 """
30
31 def finalize_options(self):
32 orig.build_py.finalize_options(self)
33 self.package_data = self.distribution.package_data
34 self.exclude_package_data = (self.distribution.exclude_package_data or
35 {})
36 if 'data_files' in self.__dict__:
37 del self.__dict__['data_files']
38 self.__updated_files = []
39 self.__doctests_2to3 = []
40
41 def run(self):
42 """Build modules, packages, and copy data files to build directory"""
43 if not self.py_modules and not self.packages:
44 return
45
46 if self.py_modules:
47 self.build_modules()
48
49 if self.packages:
50 self.build_packages()
51 self.build_package_data()
52
53 self.run_2to3(self.__updated_files, False)
54 self.run_2to3(self.__updated_files, True)
55 self.run_2to3(self.__doctests_2to3, True)
56
57 # Only compile actual .py files, using our base class' idea of what our
58 # output files are.
59 self.byte_compile(orig.build_py.get_outputs(self, include_bytecode=0))
60
61 def __getattr__(self, attr):
62 "lazily compute data files"
63 if attr == 'data_files':
64 self.data_files = self._get_data_files()
65 return self.data_files
66 return orig.build_py.__getattr__(self, attr)
67
68 def build_module(self, module, module_file, package):
69 outfile, copied = orig.build_py.build_module(self, module, module_file,
70 package)
71 if copied:
72 self.__updated_files.append(outfile)
73 return outfile, copied
74
75 def _get_data_files(self):
76 """Generate list of '(package,src_dir,build_dir,filenames)' tuples"""
77 self.analyze_manifest()
78 return list(map(self._get_pkg_data_files, self.packages or ()))
79
80 def _get_pkg_data_files(self, package):
81 # Locate package source directory
82 src_dir = self.get_package_dir(package)
83
84 # Compute package build directory
85 build_dir = os.path.join(*([self.build_lib] + package.split('.')))
86
87 # Strip directory from globbed filenames
88 filenames = [
89 os.path.relpath(file, src_dir)
90 for file in self.find_data_files(package, src_dir)
91 ]
92 return package, src_dir, build_dir, filenames
93
94 def find_data_files(self, package, src_dir):
95 """Return filenames for package's data files in 'src_dir'"""
96 patterns = self._get_platform_patterns(
97 self.package_data,
98 package,
99 src_dir,
100 )
101 globs_expanded = map(glob, patterns)
102 # flatten the expanded globs into an iterable of matches
103 globs_matches = itertools.chain.from_iterable(globs_expanded)
104 glob_files = filter(os.path.isfile, globs_matches)
105 files = itertools.chain(
106 self.manifest_files.get(package, []),
107 glob_files,
108 )
109 return self.exclude_data_files(package, src_dir, files)
110
111 def build_package_data(self):
112 """Copy data files into build directory"""
113 for package, src_dir, build_dir, filenames in self.data_files:
114 for filename in filenames:
115 target = os.path.join(build_dir, filename)
116 self.mkpath(os.path.dirname(target))
117 srcfile = os.path.join(src_dir, filename)
118 outf, copied = self.copy_file(srcfile, target)
119 srcfile = os.path.abspath(srcfile)
120 if (copied and
121 srcfile in self.distribution.convert_2to3_doctests):
122 self.__doctests_2to3.append(outf)
123
124 def analyze_manifest(self):
125 self.manifest_files = mf = {}
126 if not self.distribution.include_package_data:
127 return
128 src_dirs = {}
129 for package in self.packages or ():
130 # Locate package source directory
131 src_dirs[assert_relative(self.get_package_dir(package))] = package
132
133 self.run_command('egg_info')
134 ei_cmd = self.get_finalized_command('egg_info')
135 for path in ei_cmd.filelist.files:
136 d, f = os.path.split(assert_relative(path))
137 prev = None
138 oldf = f
139 while d and d != prev and d not in src_dirs:
140 prev = d
141 d, df = os.path.split(d)
142 f = os.path.join(df, f)
143 if d in src_dirs:
144 if path.endswith('.py') and f == oldf:
145 continue # it's a module, not data
146 mf.setdefault(src_dirs[d], []).append(path)
147
148 def get_data_files(self):
149 pass # Lazily compute data files in _get_data_files() function.
150
151 def check_package(self, package, package_dir):
152 """Check namespace packages' __init__ for declare_namespace"""
153 try:
154 return self.packages_checked[package]
155 except KeyError:
156 pass
157
158 init_py = orig.build_py.check_package(self, package, package_dir)
159 self.packages_checked[package] = init_py
160
161 if not init_py or not self.distribution.namespace_packages:
162 return init_py
163
164 for pkg in self.distribution.namespace_packages:
165 if pkg == package or pkg.startswith(package + '.'):
166 break
167 else:
168 return init_py
169
170 with io.open(init_py, 'rb') as f:
171 contents = f.read()
172 if b'declare_namespace' not in contents:
173 raise distutils.errors.DistutilsError(
174 "Namespace package problem: %s is a namespace package, but "
175 "its\n__init__.py does not call declare_namespace()! Please "
176 'fix it.\n(See the setuptools manual under '
177 '"Namespace Packages" for details.)\n"' % (package,)
178 )
179 return init_py
180
181 def initialize_options(self):
182 self.packages_checked = {}
183 orig.build_py.initialize_options(self)
184
185 def get_package_dir(self, package):
186 res = orig.build_py.get_package_dir(self, package)
187 if self.distribution.src_root is not None:
188 return os.path.join(self.distribution.src_root, res)
189 return res
190
191 def exclude_data_files(self, package, src_dir, files):
192 """Filter filenames for package's data files in 'src_dir'"""
193 files = list(files)
194 patterns = self._get_platform_patterns(
195 self.exclude_package_data,
196 package,
197 src_dir,
198 )
199 match_groups = (
200 fnmatch.filter(files, pattern)
201 for pattern in patterns
202 )
203 # flatten the groups of matches into an iterable of matches
204 matches = itertools.chain.from_iterable(match_groups)
205 bad = set(matches)
206 keepers = (
207 fn
208 for fn in files
209 if fn not in bad
210 )
211 # ditch dupes
212 return list(_unique_everseen(keepers))
213
214 @staticmethod
215 def _get_platform_patterns(spec, package, src_dir):
216 """
217 yield platfrom-specific path patterns (suitable for glob
218 or fn_match) from a glob-based spec (such as
219 self.package_data or self.exclude_package_data)
220 matching package in src_dir.
221 """
222 raw_patterns = itertools.chain(
223 spec.get('', []),
224 spec.get(package, []),
225 )
226 return (
227 # Each pattern has to be converted to a platform-specific path
228 os.path.join(src_dir, convert_path(pattern))
229 for pattern in raw_patterns
230 )
231
232
233 # from Python docs
234 def _unique_everseen(iterable, key=None):
235 "List unique elements, preserving order. Remember all elements ever seen."
236 # unique_everseen('AAAABBBCCDAABBB') --> A B C D
237 # unique_everseen('ABBCcAD', str.lower) --> A B C D
238 seen = set()
239 seen_add = seen.add
240 if key is None:
241 for element in filterfalse(seen.__contains__, iterable):
242 seen_add(element)
243 yield element
244 else:
245 for element in iterable:
246 k = key(element)
247 if k not in seen:
248 seen_add(k)
249 yield element
250
251
252 def assert_relative(path):
253 if not os.path.isabs(path):
254 return path
255 from distutils.errors import DistutilsSetupError
256
257 msg = textwrap.dedent("""
258 Error: setup script specifies an absolute path:
259
260 %s
261
262 setup() arguments must *always* be /-separated paths relative to the
263 setup.py directory, *never* absolute paths.
264 """).lstrip() % path
265 raise DistutilsSetupError(msg)
266
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setuptools/command/build_py.py b/setuptools/command/build_py.py
--- a/setuptools/command/build_py.py
+++ b/setuptools/command/build_py.py
@@ -8,6 +8,7 @@
import distutils.errors
import itertools
+from setuptools.extern import six
from setuptools.extern.six.moves import map, filter, filterfalse
try:
@@ -66,6 +67,9 @@
return orig.build_py.__getattr__(self, attr)
def build_module(self, module, module_file, package):
+ if six.PY2 and isinstance(package, six.string_types):
+ # avoid errors on Python 2 when unicode is passed (#190)
+ package = package.split('.')
outfile, copied = orig.build_py.build_module(self, module, module_file,
package)
if copied:
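The patch works because `distutils.command.build_py.build_module` accepts a dot-separated `str`, a list, or a tuple, but a Python 2 `unicode` dotted name fails that type check — hence the TypeError quoted in the issue. Splitting the unicode name into its parts sidesteps the check while leaving Python 3 untouched. A standalone sketch of the same normalisation (`normalize_package` is a hypothetical name; plain `six` stands in for the vendored `setuptools.extern.six`):

```python
# Standalone sketch of the normalisation the patch adds; assumes six is installed.
import six

def normalize_package(package):
    if six.PY2 and isinstance(package, six.string_types):
        # u'mailman.commands' -> [u'mailman', u'commands'], a form distutils accepts
        return package.split('.')
    return package

print(normalize_package(u'mailman.commands'))  # split on Python 2, returned unchanged on Python 3
```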
| {"golden_diff": "diff --git a/setuptools/command/build_py.py b/setuptools/command/build_py.py\n--- a/setuptools/command/build_py.py\n+++ b/setuptools/command/build_py.py\n@@ -8,6 +8,7 @@\n import distutils.errors\n import itertools\n \n+from setuptools.extern import six\n from setuptools.extern.six.moves import map, filter, filterfalse\n \n try:\n@@ -66,6 +67,9 @@\n return orig.build_py.__getattr__(self, attr)\n \n def build_module(self, module, module_file, package):\n+ if six.PY2 and isinstance(package, six.string_types):\n+ # avoid errors on Python 2 when unicode is passed (#190)\n+ package = package.split('.')\n outfile, copied = orig.build_py.build_module(self, module, module_file,\n package)\n if copied:\n", "issue": "Unfriendly error message when unicode is passed to package_dir or packages\nOriginally reported by: **jaraco (Bitbucket: [jaraco](http://bitbucket.org/jaraco), GitHub: [jaraco](http://github.com/jaraco))**\n\n---\n\nAdding `__future__.unicode_literals` causes setup.py scripts to fail. In the mailman project, here is an example error when running python2 setup.py build:\n\n```\nrunning build\nrunning build_py\nTraceback (most recent call last):\n File \"setup.py\", line 112, in <module>\n test_suite = 'nose2.collector.collector',\n File \"C:\\Program Files\\Python27\\lib\\distutils\\core.py\", line 152, in setup\n dist.run_commands()\n File \"C:\\Program Files\\Python27\\lib\\distutils\\dist.py\", line 953, in run_commands\n self.run_command(cmd)\n File \"C:\\Program Files\\Python27\\lib\\distutils\\dist.py\", line 972, in run_command\n cmd_obj.run()\n File \"C:\\Program Files\\Python27\\lib\\distutils\\command\\build.py\", line 127, in run\n self.run_command(cmd_name)\n File \"C:\\Program Files\\Python27\\lib\\distutils\\cmd.py\", line 326, in run_command\n self.distribution.run_command(command)\n File \"C:\\Program Files\\Python27\\lib\\distutils\\dist.py\", line 972, in run_command\n cmd_obj.run()\n File \"build\\bdist.win-amd64\\egg\\setuptools\\command\\build_py.py\", line 42, in run\n File \"C:\\Program Files\\Python27\\lib\\distutils\\command\\build_py.py\", line 372, in build_packages\n self.build_module(module, module_file, package)\n File \"build\\bdist.win-amd64\\egg\\setuptools\\command\\build_py.py\", line 60, in build_module\n File \"C:\\Program Files\\Python27\\lib\\distutils\\command\\build_py.py\", line 333, in build_module\n \"'package' must be a string (dot-separated), list, or tuple\")\nTypeError: 'package' must be a string (dot-separated), list, or tuple\n```\n\nA different error occurs when using 'develop':\n\n```\nrunning develop\nerror: 'egg_base' must be a directory name (got `src`)\n```\n\nSetuptools could make this error message nicer.\n\n---\n- Bitbucket: https://bitbucket.org/pypa/setuptools/issue/190\n\n", "before_files": [{"content": "from glob import glob\nfrom distutils.util import convert_path\nimport distutils.command.build_py as orig\nimport os\nimport fnmatch\nimport textwrap\nimport io\nimport distutils.errors\nimport itertools\n\nfrom setuptools.extern.six.moves import map, filter, filterfalse\n\ntry:\n from setuptools.lib2to3_ex import Mixin2to3\nexcept ImportError:\n class Mixin2to3:\n def run_2to3(self, files, doctests=True):\n \"do nothing\"\n\n\nclass build_py(orig.build_py, Mixin2to3):\n \"\"\"Enhanced 'build_py' command that includes data files with packages\n\n The data files are specified via a 'package_data' argument to 'setup()'.\n See 'setuptools.dist.Distribution' for more details.\n\n Also, this version of the 
'build_py' command allows you to specify both\n 'py_modules' and 'packages' in the same setup operation.\n \"\"\"\n\n def finalize_options(self):\n orig.build_py.finalize_options(self)\n self.package_data = self.distribution.package_data\n self.exclude_package_data = (self.distribution.exclude_package_data or\n {})\n if 'data_files' in self.__dict__:\n del self.__dict__['data_files']\n self.__updated_files = []\n self.__doctests_2to3 = []\n\n def run(self):\n \"\"\"Build modules, packages, and copy data files to build directory\"\"\"\n if not self.py_modules and not self.packages:\n return\n\n if self.py_modules:\n self.build_modules()\n\n if self.packages:\n self.build_packages()\n self.build_package_data()\n\n self.run_2to3(self.__updated_files, False)\n self.run_2to3(self.__updated_files, True)\n self.run_2to3(self.__doctests_2to3, True)\n\n # Only compile actual .py files, using our base class' idea of what our\n # output files are.\n self.byte_compile(orig.build_py.get_outputs(self, include_bytecode=0))\n\n def __getattr__(self, attr):\n \"lazily compute data files\"\n if attr == 'data_files':\n self.data_files = self._get_data_files()\n return self.data_files\n return orig.build_py.__getattr__(self, attr)\n\n def build_module(self, module, module_file, package):\n outfile, copied = orig.build_py.build_module(self, module, module_file,\n package)\n if copied:\n self.__updated_files.append(outfile)\n return outfile, copied\n\n def _get_data_files(self):\n \"\"\"Generate list of '(package,src_dir,build_dir,filenames)' tuples\"\"\"\n self.analyze_manifest()\n return list(map(self._get_pkg_data_files, self.packages or ()))\n\n def _get_pkg_data_files(self, package):\n # Locate package source directory\n src_dir = self.get_package_dir(package)\n\n # Compute package build directory\n build_dir = os.path.join(*([self.build_lib] + package.split('.')))\n\n # Strip directory from globbed filenames\n filenames = [\n os.path.relpath(file, src_dir)\n for file in self.find_data_files(package, src_dir)\n ]\n return package, src_dir, build_dir, filenames\n\n def find_data_files(self, package, src_dir):\n \"\"\"Return filenames for package's data files in 'src_dir'\"\"\"\n patterns = self._get_platform_patterns(\n self.package_data,\n package,\n src_dir,\n )\n globs_expanded = map(glob, patterns)\n # flatten the expanded globs into an iterable of matches\n globs_matches = itertools.chain.from_iterable(globs_expanded)\n glob_files = filter(os.path.isfile, globs_matches)\n files = itertools.chain(\n self.manifest_files.get(package, []),\n glob_files,\n )\n return self.exclude_data_files(package, src_dir, files)\n\n def build_package_data(self):\n \"\"\"Copy data files into build directory\"\"\"\n for package, src_dir, build_dir, filenames in self.data_files:\n for filename in filenames:\n target = os.path.join(build_dir, filename)\n self.mkpath(os.path.dirname(target))\n srcfile = os.path.join(src_dir, filename)\n outf, copied = self.copy_file(srcfile, target)\n srcfile = os.path.abspath(srcfile)\n if (copied and\n srcfile in self.distribution.convert_2to3_doctests):\n self.__doctests_2to3.append(outf)\n\n def analyze_manifest(self):\n self.manifest_files = mf = {}\n if not self.distribution.include_package_data:\n return\n src_dirs = {}\n for package in self.packages or ():\n # Locate package source directory\n src_dirs[assert_relative(self.get_package_dir(package))] = package\n\n self.run_command('egg_info')\n ei_cmd = self.get_finalized_command('egg_info')\n for path in ei_cmd.filelist.files:\n d, f 
= os.path.split(assert_relative(path))\n prev = None\n oldf = f\n while d and d != prev and d not in src_dirs:\n prev = d\n d, df = os.path.split(d)\n f = os.path.join(df, f)\n if d in src_dirs:\n if path.endswith('.py') and f == oldf:\n continue # it's a module, not data\n mf.setdefault(src_dirs[d], []).append(path)\n\n def get_data_files(self):\n pass # Lazily compute data files in _get_data_files() function.\n\n def check_package(self, package, package_dir):\n \"\"\"Check namespace packages' __init__ for declare_namespace\"\"\"\n try:\n return self.packages_checked[package]\n except KeyError:\n pass\n\n init_py = orig.build_py.check_package(self, package, package_dir)\n self.packages_checked[package] = init_py\n\n if not init_py or not self.distribution.namespace_packages:\n return init_py\n\n for pkg in self.distribution.namespace_packages:\n if pkg == package or pkg.startswith(package + '.'):\n break\n else:\n return init_py\n\n with io.open(init_py, 'rb') as f:\n contents = f.read()\n if b'declare_namespace' not in contents:\n raise distutils.errors.DistutilsError(\n \"Namespace package problem: %s is a namespace package, but \"\n \"its\\n__init__.py does not call declare_namespace()! Please \"\n 'fix it.\\n(See the setuptools manual under '\n '\"Namespace Packages\" for details.)\\n\"' % (package,)\n )\n return init_py\n\n def initialize_options(self):\n self.packages_checked = {}\n orig.build_py.initialize_options(self)\n\n def get_package_dir(self, package):\n res = orig.build_py.get_package_dir(self, package)\n if self.distribution.src_root is not None:\n return os.path.join(self.distribution.src_root, res)\n return res\n\n def exclude_data_files(self, package, src_dir, files):\n \"\"\"Filter filenames for package's data files in 'src_dir'\"\"\"\n files = list(files)\n patterns = self._get_platform_patterns(\n self.exclude_package_data,\n package,\n src_dir,\n )\n match_groups = (\n fnmatch.filter(files, pattern)\n for pattern in patterns\n )\n # flatten the groups of matches into an iterable of matches\n matches = itertools.chain.from_iterable(match_groups)\n bad = set(matches)\n keepers = (\n fn\n for fn in files\n if fn not in bad\n )\n # ditch dupes\n return list(_unique_everseen(keepers))\n\n @staticmethod\n def _get_platform_patterns(spec, package, src_dir):\n \"\"\"\n yield platfrom-specific path patterns (suitable for glob\n or fn_match) from a glob-based spec (such as\n self.package_data or self.exclude_package_data)\n matching package in src_dir.\n \"\"\"\n raw_patterns = itertools.chain(\n spec.get('', []),\n spec.get(package, []),\n )\n return (\n # Each pattern has to be converted to a platform-specific path\n os.path.join(src_dir, convert_path(pattern))\n for pattern in raw_patterns\n )\n\n\n# from Python docs\ndef _unique_everseen(iterable, key=None):\n \"List unique elements, preserving order. 
Remember all elements ever seen.\"\n # unique_everseen('AAAABBBCCDAABBB') --> A B C D\n # unique_everseen('ABBCcAD', str.lower) --> A B C D\n seen = set()\n seen_add = seen.add\n if key is None:\n for element in filterfalse(seen.__contains__, iterable):\n seen_add(element)\n yield element\n else:\n for element in iterable:\n k = key(element)\n if k not in seen:\n seen_add(k)\n yield element\n\n\ndef assert_relative(path):\n if not os.path.isabs(path):\n return path\n from distutils.errors import DistutilsSetupError\n\n msg = textwrap.dedent(\"\"\"\n Error: setup script specifies an absolute path:\n\n %s\n\n setup() arguments must *always* be /-separated paths relative to the\n setup.py directory, *never* absolute paths.\n \"\"\").lstrip() % path\n raise DistutilsSetupError(msg)\n", "path": "setuptools/command/build_py.py"}], "after_files": [{"content": "from glob import glob\nfrom distutils.util import convert_path\nimport distutils.command.build_py as orig\nimport os\nimport fnmatch\nimport textwrap\nimport io\nimport distutils.errors\nimport itertools\n\nfrom setuptools.extern import six\nfrom setuptools.extern.six.moves import map, filter, filterfalse\n\ntry:\n from setuptools.lib2to3_ex import Mixin2to3\nexcept ImportError:\n class Mixin2to3:\n def run_2to3(self, files, doctests=True):\n \"do nothing\"\n\n\nclass build_py(orig.build_py, Mixin2to3):\n \"\"\"Enhanced 'build_py' command that includes data files with packages\n\n The data files are specified via a 'package_data' argument to 'setup()'.\n See 'setuptools.dist.Distribution' for more details.\n\n Also, this version of the 'build_py' command allows you to specify both\n 'py_modules' and 'packages' in the same setup operation.\n \"\"\"\n\n def finalize_options(self):\n orig.build_py.finalize_options(self)\n self.package_data = self.distribution.package_data\n self.exclude_package_data = (self.distribution.exclude_package_data or\n {})\n if 'data_files' in self.__dict__:\n del self.__dict__['data_files']\n self.__updated_files = []\n self.__doctests_2to3 = []\n\n def run(self):\n \"\"\"Build modules, packages, and copy data files to build directory\"\"\"\n if not self.py_modules and not self.packages:\n return\n\n if self.py_modules:\n self.build_modules()\n\n if self.packages:\n self.build_packages()\n self.build_package_data()\n\n self.run_2to3(self.__updated_files, False)\n self.run_2to3(self.__updated_files, True)\n self.run_2to3(self.__doctests_2to3, True)\n\n # Only compile actual .py files, using our base class' idea of what our\n # output files are.\n self.byte_compile(orig.build_py.get_outputs(self, include_bytecode=0))\n\n def __getattr__(self, attr):\n \"lazily compute data files\"\n if attr == 'data_files':\n self.data_files = self._get_data_files()\n return self.data_files\n return orig.build_py.__getattr__(self, attr)\n\n def build_module(self, module, module_file, package):\n if six.PY2 and isinstance(package, six.string_types):\n # avoid errors on Python 2 when unicode is passed (#190)\n package = package.split('.')\n outfile, copied = orig.build_py.build_module(self, module, module_file,\n package)\n if copied:\n self.__updated_files.append(outfile)\n return outfile, copied\n\n def _get_data_files(self):\n \"\"\"Generate list of '(package,src_dir,build_dir,filenames)' tuples\"\"\"\n self.analyze_manifest()\n return list(map(self._get_pkg_data_files, self.packages or ()))\n\n def _get_pkg_data_files(self, package):\n # Locate package source directory\n src_dir = self.get_package_dir(package)\n\n # Compute package 
build directory\n build_dir = os.path.join(*([self.build_lib] + package.split('.')))\n\n # Strip directory from globbed filenames\n filenames = [\n os.path.relpath(file, src_dir)\n for file in self.find_data_files(package, src_dir)\n ]\n return package, src_dir, build_dir, filenames\n\n def find_data_files(self, package, src_dir):\n \"\"\"Return filenames for package's data files in 'src_dir'\"\"\"\n patterns = self._get_platform_patterns(\n self.package_data,\n package,\n src_dir,\n )\n globs_expanded = map(glob, patterns)\n # flatten the expanded globs into an iterable of matches\n globs_matches = itertools.chain.from_iterable(globs_expanded)\n glob_files = filter(os.path.isfile, globs_matches)\n files = itertools.chain(\n self.manifest_files.get(package, []),\n glob_files,\n )\n return self.exclude_data_files(package, src_dir, files)\n\n def build_package_data(self):\n \"\"\"Copy data files into build directory\"\"\"\n for package, src_dir, build_dir, filenames in self.data_files:\n for filename in filenames:\n target = os.path.join(build_dir, filename)\n self.mkpath(os.path.dirname(target))\n srcfile = os.path.join(src_dir, filename)\n outf, copied = self.copy_file(srcfile, target)\n srcfile = os.path.abspath(srcfile)\n if (copied and\n srcfile in self.distribution.convert_2to3_doctests):\n self.__doctests_2to3.append(outf)\n\n def analyze_manifest(self):\n self.manifest_files = mf = {}\n if not self.distribution.include_package_data:\n return\n src_dirs = {}\n for package in self.packages or ():\n # Locate package source directory\n src_dirs[assert_relative(self.get_package_dir(package))] = package\n\n self.run_command('egg_info')\n ei_cmd = self.get_finalized_command('egg_info')\n for path in ei_cmd.filelist.files:\n d, f = os.path.split(assert_relative(path))\n prev = None\n oldf = f\n while d and d != prev and d not in src_dirs:\n prev = d\n d, df = os.path.split(d)\n f = os.path.join(df, f)\n if d in src_dirs:\n if path.endswith('.py') and f == oldf:\n continue # it's a module, not data\n mf.setdefault(src_dirs[d], []).append(path)\n\n def get_data_files(self):\n pass # Lazily compute data files in _get_data_files() function.\n\n def check_package(self, package, package_dir):\n \"\"\"Check namespace packages' __init__ for declare_namespace\"\"\"\n try:\n return self.packages_checked[package]\n except KeyError:\n pass\n\n init_py = orig.build_py.check_package(self, package, package_dir)\n self.packages_checked[package] = init_py\n\n if not init_py or not self.distribution.namespace_packages:\n return init_py\n\n for pkg in self.distribution.namespace_packages:\n if pkg == package or pkg.startswith(package + '.'):\n break\n else:\n return init_py\n\n with io.open(init_py, 'rb') as f:\n contents = f.read()\n if b'declare_namespace' not in contents:\n raise distutils.errors.DistutilsError(\n \"Namespace package problem: %s is a namespace package, but \"\n \"its\\n__init__.py does not call declare_namespace()! 
Please \"\n 'fix it.\\n(See the setuptools manual under '\n '\"Namespace Packages\" for details.)\\n\"' % (package,)\n )\n return init_py\n\n def initialize_options(self):\n self.packages_checked = {}\n orig.build_py.initialize_options(self)\n\n def get_package_dir(self, package):\n res = orig.build_py.get_package_dir(self, package)\n if self.distribution.src_root is not None:\n return os.path.join(self.distribution.src_root, res)\n return res\n\n def exclude_data_files(self, package, src_dir, files):\n \"\"\"Filter filenames for package's data files in 'src_dir'\"\"\"\n files = list(files)\n patterns = self._get_platform_patterns(\n self.exclude_package_data,\n package,\n src_dir,\n )\n match_groups = (\n fnmatch.filter(files, pattern)\n for pattern in patterns\n )\n # flatten the groups of matches into an iterable of matches\n matches = itertools.chain.from_iterable(match_groups)\n bad = set(matches)\n keepers = (\n fn\n for fn in files\n if fn not in bad\n )\n # ditch dupes\n return list(_unique_everseen(keepers))\n\n @staticmethod\n def _get_platform_patterns(spec, package, src_dir):\n \"\"\"\n yield platfrom-specific path patterns (suitable for glob\n or fn_match) from a glob-based spec (such as\n self.package_data or self.exclude_package_data)\n matching package in src_dir.\n \"\"\"\n raw_patterns = itertools.chain(\n spec.get('', []),\n spec.get(package, []),\n )\n return (\n # Each pattern has to be converted to a platform-specific path\n os.path.join(src_dir, convert_path(pattern))\n for pattern in raw_patterns\n )\n\n\n# from Python docs\ndef _unique_everseen(iterable, key=None):\n \"List unique elements, preserving order. Remember all elements ever seen.\"\n # unique_everseen('AAAABBBCCDAABBB') --> A B C D\n # unique_everseen('ABBCcAD', str.lower) --> A B C D\n seen = set()\n seen_add = seen.add\n if key is None:\n for element in filterfalse(seen.__contains__, iterable):\n seen_add(element)\n yield element\n else:\n for element in iterable:\n k = key(element)\n if k not in seen:\n seen_add(k)\n yield element\n\n\ndef assert_relative(path):\n if not os.path.isabs(path):\n return path\n from distutils.errors import DistutilsSetupError\n\n msg = textwrap.dedent(\"\"\"\n Error: setup script specifies an absolute path:\n\n %s\n\n setup() arguments must *always* be /-separated paths relative to the\n setup.py directory, *never* absolute paths.\n \"\"\").lstrip() % path\n raise DistutilsSetupError(msg)\n", "path": "setuptools/command/build_py.py"}]} | 3,585 | 179 |
gh_patches_debug_36503 | rasdani/github-patches | git_diff | bokeh__bokeh-9870 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Minor typos in validation error output for FactorRange and CDSView
**Software version info**
bokeh 2.0.0
Mozilla Firefox 74.0
macOS Catalina 10.15.3
**Expected behavior:**
Validation errors will have an output with correct spelling.
Example one:
"FactorRange must specify a unique list of categorical factors for an axis"
Example two:
"CDSView filters are not compatible with glyphs with connected topology such as Line or Patch"
**Observed behavior:**
Validation errors have an output with typos (see attached screenshots).
Example one:
"FactorRange must **_specicy_** a unique list of categorical factors for an axis"
Example two:
"CDSView filters are not compatible with glyphs with connected topology **_suchs_** as Line **_and_** Patch"
**Complete, minimal, self-contained example code that reproduces the issue**
**Example one:**
```py
from bokeh.io import output_notebook, show
from bokeh.models import ColumnDataSource, FactorRange
from bokeh.plotting import figure
output_notebook()
fruits = ['Apples', 'Apples']
years = ['2015', '2016']
data = {'fruits' : fruits,
'2015' : [2, 1],
'2016' : [5, 3]}
x = [ (fruit, year) for fruit in fruits for year in years ]
counts = sum(zip(data['2015'], data['2016']), ()) # like an hstack
source = ColumnDataSource(data=dict(x=x, counts=counts))
p = figure(x_range=FactorRange(*x), plot_height=250, title="Fruit Counts by Year",
toolbar_location=None, tools="")
p.vbar(x='x', top='counts', width=0.9, source=source)
show(p)
```
**Example two:**
```py
from bokeh.layouts import row
from bokeh.plotting import output_notebook, figure, show
from bokeh.sampledata.autompg import autompg
from bokeh.models import CDSView, GroupFilter
from bokeh.models.sources import ColumnDataSource
output_notebook()
autompg = autompg.assign(efficient=(autompg.mpg > 20).astype(str))
autompg = autompg.groupby(['yr', 'efficient']).mpg.mean().reset_index()
autompg = autompg.sort_values(['efficient', 'yr'])
list_p = []
source=ColumnDataSource(autompg)
for eff in ['True', 'False']:
filter_ = GroupFilter(column_name='efficient', group=eff)
view = CDSView(source=source, filters=[filter_])
list_p.append(figure(title=eff))
list_p[-1].line(x='yr', y='mpg', source=source, view=view)
show(row(list_p))
```
**Screenshots or screencasts of the bug in action**
Example one:
<img width="1052" alt="Screen Shot 2020-04-01 at 11 12 20 PM" src="https://user-images.githubusercontent.com/18173173/78207887-f42d7380-7470-11ea-853a-f7aa905f91d2.png">
Example two:
<img width="1036" alt="Screen Shot 2020-04-01 at 11 24 33 PM" src="https://user-images.githubusercontent.com/18173173/78207888-f68fcd80-7470-11ea-9370-c048914aa6d8.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/core/validation/errors.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 #-----------------------------------------------------------------------------
7 ''' These define the standard error codes and messages for Bokeh
8 validation checks.
9
10 1001 *(BAD_COLUMN_NAME)*
11 A glyph has a property set to a field name that does not correspond to any
12 column in the |GlyphRenderer|'s data source.
13
14 1002 *(MISSING_GLYPH)*
15 A |GlyphRenderer| has no glyph configured.
16
17 1003 *(NO_SOURCE_FOR_GLYPH)*
18 A |GlyphRenderer| has no data source configured.
19
20 1004 *(REQUIRED_RANGE)*
21 A |Plot| is missing one or more required default ranges (will result in
22 blank plot).
23
24 1005 *(MISSING_GOOGLE_API_KEY)*
25 Google Maps API now requires an API key for all use. See
26 https://developers.google.com/maps/documentation/javascript/get-api-key
27 for more information on how to obtain your own, to use for the
28 ``api_key`` property of your Google Map plot .
29
30 1006 *(NON_MATCHING_DATA_SOURCES_ON_LEGEND_ITEM_RENDERERS)*
31 All data_sources on ``LegendItem.renderers`` must match when LegendItem.label
32 is type field.
33
34 1007 *(MISSING_MERCATOR_DIMENSION)*
35 ``MercatorTicker`` and ``MercatorTickFormatter``models must have their
36 ``dimension`` property set to ``'lat'`` or ``'lon'``.
37
38 1008 *(REQUIRED_SCALE)*
39 A |Scale| on is missing one or more required default scales (will result in
40 blank plot).
41
42 1009 *(INCOMPATIBLE_SCALE_AND_RANGE)*
43 A |Scale| type is incompatible with one or more ranges on the same plot
44 dimension (will result in blank plot).
45
46 1010 *(CDSVIEW_SOURCE_DOESNT_MATCH)*
47 A |GlyphRenderer| has a ``CDSView`` whose source doesn't match the ``GlyphRenderer``'s
48 data source.
49
50 1011 *(MALFORMED_GRAPH_SOURCE)*
51 The ``GraphSource`` is incorrectly configured.
52
53 1012 *(INCOMPATIBLE_MAP_RANGE_TYPE)*
54 Map plots can only support ``Range1d`` types, not data ranges.
55
56 1013 *(INCOMPATIBLE_POINT_DRAW_RENDERER)*
57 The ``PointDrawTool`` renderers may only reference ``XYGlyph`` models.
58
59 1014 *(INCOMPATIBLE_BOX_EDIT_RENDERER)*
60 The ``BoxEditTool`` renderers may only reference ``Rect`` glyph models.
61
62 1015 *(INCOMPATIBLE_POLY_DRAW_RENDERER)*
63 The ``PolyDrawTool`` renderers may only reference ``MultiLine`` and ``Patches`` glyph models.
64
65 1016 *(INCOMPATIBLE_POLY_EDIT_RENDERER)*
66 The ``PolyEditTool`` renderers may only reference ``MultiLine`` and ``Patches`` glyph models.
67
68 1017 *(INCOMPATIBLE_POLY_EDIT_VERTEX_RENDERER)*
69 The ``PolyEditTool`` vertex_renderer may only reference ``XYGlyph`` models.
70
71 1018 *(NO_RANGE_TOOL_RANGES)*
72 The ``RangeTool`` must have at least one of ``x_range`` or ``y_range`` configured
73
74 1019 *(DUPLICATE_FACTORS)*
75 ``FactorRange`` must specify a unique list of categorical factors for an axis.
76
77 1020 *(BAD_EXTRA_RANGE_NAME)*
78 An extra range name is configured with a name that does not correspond to any range.
79
80 1021 *(EQUAL_SLIDER_START_END)*
81 ``noUiSlider`` most have a nonequal start and end.
82
83 1022 *(MIN_PREFERRED_MAX_WIDTH)*
84 Expected min_width <= width <= max_width
85
86 1023 *(MIN_PREFERRED_MAX_HEIGHT)*
87 Expected min_height <= height <= max_height
88
89 1024 *(CDSVIEW_FILTERS_WITH_CONNECTED)*
90 ``CDSView`` filters are not compatible with glyphs with connected topology suchs as Line and Patch.
91
92 9999 *(EXT)*
93 Indicates that a custom error check has failed.
94
95 '''
96
97 #-----------------------------------------------------------------------------
98 # Boilerplate
99 #-----------------------------------------------------------------------------
100 import logging # isort:skip
101 log = logging.getLogger(__name__)
102
103 #-----------------------------------------------------------------------------
104 # Imports
105 #-----------------------------------------------------------------------------
106
107 #-----------------------------------------------------------------------------
108 # Globals and constants
109 #-----------------------------------------------------------------------------
110
111 codes = {
112 1001: ("BAD_COLUMN_NAME", "Glyph refers to nonexistent column name. This could either be due to a misspelling or typo, or due to an expected column being missing. "), # NOQA
113 1002: ("MISSING_GLYPH", "Glyph renderer has no glyph set"),
114 1003: ("NO_SOURCE_FOR_GLYPH", "Glyph renderer has no data source"),
115 1004: ("REQUIRED_RANGE", "A required Range object is missing"),
116 1005: ("MISSING_GOOGLE_API_KEY", "Google now requires API keys for all Google Maps usage"),
117 1006: ("NON_MATCHING_DATA_SOURCES_ON_LEGEND_ITEM_RENDERERS", "LegendItem.label is a field, but renderer data sources don't match"),
118 1007: ("MISSING_MERCATOR_DIMENSION", "Mercator Tickers and Formatters must have their dimension property set to 'lat' or 'lon'"),
119 1008: ("REQUIRED_SCALE", "A required Scale object is missing"),
120 1009: ("INCOMPATIBLE_SCALE_AND_RANGE", "A Scale is incompatible with one or more ranges on the same plot dimension"),
121 1010: ("CDSVIEW_SOURCE_DOESNT_MATCH", "CDSView used by Glyph renderer must have a source that matches the Glyph renderer's data source"),
122 1011: ("MALFORMED_GRAPH_SOURCE", "The GraphSource is incorrectly configured"),
123 1012: ("INCOMPATIBLE_MAP_RANGE_TYPE", "Map plots can only support Range1d types, not data ranges"),
124 1013: ("INCOMPATIBLE_POINT_DRAW_RENDERER", "PointDrawTool renderers may only reference XYGlyph models"),
125 1014: ("INCOMPATIBLE_BOX_EDIT_RENDERER", "BoxEditTool renderers may only reference Rect glyph models"),
126 1015: ("INCOMPATIBLE_POLY_DRAW_RENDERER", "PolyDrawTool renderers may only reference MultiLine and Patches glyph models"),
127 1016: ("INCOMPATIBLE_POLY_EDIT_RENDERER", "PolyEditTool renderers may only reference MultiLine and Patches glyph models"),
128 1017: ("INCOMPATIBLE_POLY_EDIT_VERTEX_RENDERER", "PolyEditTool vertex_renderer may only reference XYGlyph models"),
129 1018: ("NO_RANGE_TOOL_RANGES", "RangeTool must have at least one of x_range or y_range configured"),
130 1019: ("DUPLICATE_FACTORS", "FactorRange must specicy a unique list of categorical factors for an axis"),
131 1020: ("BAD_EXTRA_RANGE_NAME", "An extra range name is configued with a name that does not correspond to any range"),
132 1021: ("EQUAL_SLIDER_START_END", "Slider 'start' and 'end' cannot be equal"),
133 1022: ("MIN_PREFERRED_MAX_WIDTH", "Expected min_width <= width <= max_width"),
134 1023: ("MIN_PREFERRED_MAX_HEIGHT", "Expected min_height <= height <= max_height"),
135 1024: ("CDSVIEW_FILTERS_WITH_CONNECTED", "CDSView filters are not compatible with glyphs with connected topology suchs as Line and Patch"),
136 9999: ("EXT", "Custom extension reports error"),
137 }
138
139 __all__ = ()
140
141 #-----------------------------------------------------------------------------
142 # General API
143 #-----------------------------------------------------------------------------
144
145 #-----------------------------------------------------------------------------
146 # Dev API
147 #-----------------------------------------------------------------------------
148
149 #-----------------------------------------------------------------------------
150 # Private API
151 #-----------------------------------------------------------------------------
152
153 #-----------------------------------------------------------------------------
154 # Code
155 #-----------------------------------------------------------------------------
156
157 for code in codes:
158 exec("%s = %d" % (codes[code][0], code))
159
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bokeh/core/validation/errors.py b/bokeh/core/validation/errors.py
--- a/bokeh/core/validation/errors.py
+++ b/bokeh/core/validation/errors.py
@@ -87,7 +87,7 @@
Expected min_height <= height <= max_height
1024 *(CDSVIEW_FILTERS_WITH_CONNECTED)*
- ``CDSView`` filters are not compatible with glyphs with connected topology suchs as Line and Patch.
+ ``CDSView`` filters are not compatible with glyphs with connected topology such as Line or Patch.
9999 *(EXT)*
Indicates that a custom error check has failed.
@@ -127,12 +127,12 @@
1016: ("INCOMPATIBLE_POLY_EDIT_RENDERER", "PolyEditTool renderers may only reference MultiLine and Patches glyph models"),
1017: ("INCOMPATIBLE_POLY_EDIT_VERTEX_RENDERER", "PolyEditTool vertex_renderer may only reference XYGlyph models"),
1018: ("NO_RANGE_TOOL_RANGES", "RangeTool must have at least one of x_range or y_range configured"),
- 1019: ("DUPLICATE_FACTORS", "FactorRange must specicy a unique list of categorical factors for an axis"),
+ 1019: ("DUPLICATE_FACTORS", "FactorRange must specify a unique list of categorical factors for an axis"),
1020: ("BAD_EXTRA_RANGE_NAME", "An extra range name is configued with a name that does not correspond to any range"),
1021: ("EQUAL_SLIDER_START_END", "Slider 'start' and 'end' cannot be equal"),
1022: ("MIN_PREFERRED_MAX_WIDTH", "Expected min_width <= width <= max_width"),
1023: ("MIN_PREFERRED_MAX_HEIGHT", "Expected min_height <= height <= max_height"),
- 1024: ("CDSVIEW_FILTERS_WITH_CONNECTED", "CDSView filters are not compatible with glyphs with connected topology suchs as Line and Patch"),
+ 1024: ("CDSVIEW_FILTERS_WITH_CONNECTED", "CDSView filters are not compatible with glyphs with connected topology such as Line or Patch"),
9999: ("EXT", "Custom extension reports error"),
}
| {"golden_diff": "diff --git a/bokeh/core/validation/errors.py b/bokeh/core/validation/errors.py\n--- a/bokeh/core/validation/errors.py\n+++ b/bokeh/core/validation/errors.py\n@@ -87,7 +87,7 @@\n Expected min_height <= height <= max_height\n \n 1024 *(CDSVIEW_FILTERS_WITH_CONNECTED)*\n- ``CDSView`` filters are not compatible with glyphs with connected topology suchs as Line and Patch.\n+ ``CDSView`` filters are not compatible with glyphs with connected topology such as Line or Patch.\n \n 9999 *(EXT)*\n Indicates that a custom error check has failed.\n@@ -127,12 +127,12 @@\n 1016: (\"INCOMPATIBLE_POLY_EDIT_RENDERER\", \"PolyEditTool renderers may only reference MultiLine and Patches glyph models\"),\n 1017: (\"INCOMPATIBLE_POLY_EDIT_VERTEX_RENDERER\", \"PolyEditTool vertex_renderer may only reference XYGlyph models\"),\n 1018: (\"NO_RANGE_TOOL_RANGES\", \"RangeTool must have at least one of x_range or y_range configured\"),\n- 1019: (\"DUPLICATE_FACTORS\", \"FactorRange must specicy a unique list of categorical factors for an axis\"),\n+ 1019: (\"DUPLICATE_FACTORS\", \"FactorRange must specify a unique list of categorical factors for an axis\"),\n 1020: (\"BAD_EXTRA_RANGE_NAME\", \"An extra range name is configued with a name that does not correspond to any range\"),\n 1021: (\"EQUAL_SLIDER_START_END\", \"Slider 'start' and 'end' cannot be equal\"),\n 1022: (\"MIN_PREFERRED_MAX_WIDTH\", \"Expected min_width <= width <= max_width\"),\n 1023: (\"MIN_PREFERRED_MAX_HEIGHT\", \"Expected min_height <= height <= max_height\"),\n- 1024: (\"CDSVIEW_FILTERS_WITH_CONNECTED\", \"CDSView filters are not compatible with glyphs with connected topology suchs as Line and Patch\"),\n+ 1024: (\"CDSVIEW_FILTERS_WITH_CONNECTED\", \"CDSView filters are not compatible with glyphs with connected topology such as Line or Patch\"),\n 9999: (\"EXT\", \"Custom extension reports error\"),\n }\n", "issue": "[BUG] Minor typos in validation error output for FactorRange and CDSView\n**Software version info**\r\nbokeh 2.0.0\r\nMozilla Firefox 74.0\r\nmacOS Catalina 10.15.3\r\n\r\n**Expected behavior:**\r\nValidation errors will have an output with correct spelling.\r\n\r\nExample one:\r\n\"FactorRange must specify a unique list of categorical factors for an axis\"\r\nExample two:\r\n\"CDSView filters are not compatible with glyphs with connected topology such as Line or Patch\"\r\n\r\n**Observed behavior:**\r\nValidation errors have an output with typos (see attached screenshot).\r\n\r\nExample one:\r\n\"FactorRange must **_specicy_** a unique list of categorical factors for an axis\"\r\nExample two:\r\n\"CDSView filters are not compatible with glyphs with connected topology **_suchs_** as Line **_and_** Patch\"\r\n\r\n**Complete, minimal, self-contained example code that reproduces the issue**\r\n\r\n**Example one:**\r\n```py\r\nfrom bokeh.io import output_notebook, show\r\nfrom bokeh.models import ColumnDataSource, FactorRange\r\nfrom bokeh.plotting import figure\r\n\r\noutput_notebook()\r\n\r\nfruits = ['Apples', 'Apples']\r\nyears = ['2015', '2016']\r\n\r\ndata = {'fruits' : fruits,\r\n '2015' : [2, 1],\r\n '2016' : [5, 3]}\r\n\r\nx = [ (fruit, year) for fruit in fruits for year in years ]\r\ncounts = sum(zip(data['2015'], data['2016']), ()) # like an hstack\r\n\r\nsource = ColumnDataSource(data=dict(x=x, counts=counts))\r\n\r\np = figure(x_range=FactorRange(*x), plot_height=250, title=\"Fruit Counts by Year\",\r\n toolbar_location=None, tools=\"\")\r\n\r\np.vbar(x='x', top='counts', width=0.9, 
source=source)\r\n\r\nshow(p)\r\n\r\n```\r\n\r\n**Example two**\r\n```py\r\nfrom bokeh.layouts import row\r\nfrom bokeh.plotting import output_notebook, figure, show\r\nfrom bokeh.sampledata.autompg import autompg\r\nfrom bokeh.models import CDSView, GroupFilter\r\nfrom bokeh.models.sources import ColumnDataSource\r\noutput_notebook()\r\n\r\nautompg = autompg.assign(efficient=(autompg.mpg > 20).astype(str))\r\nautompg = autompg.groupby(['yr', 'efficient']).mpg.mean().reset_index()\r\nautompg = autompg.sort_values(['efficient', 'yr'])\r\nlist_p = []\r\nsource=ColumnDataSource(autompg)\r\nfor eff in ['True', 'False']:\r\n filter_ = GroupFilter(column_name='efficient', group=eff)\r\n view = CDSView(source=source, filters=[filter_])\r\n\r\n list_p.append(figure(title=eff))\r\n list_p[-1].line(x='yr', y='mpg', source=source, view=view)\r\n\r\nshow(row(list_p))\r\n```\r\n\r\n**Screenshots or screencasts of the bug in action**\r\nExample one:\r\n<img width=\"1052\" alt=\"Screen Shot 2020-04-01 at 11 12 20 PM\" src=\"https://user-images.githubusercontent.com/18173173/78207887-f42d7380-7470-11ea-853a-f7aa905f91d2.png\">\r\n\r\nExample two:\r\n<img width=\"1036\" alt=\"Screen Shot 2020-04-01 at 11 24 33 PM\" src=\"https://user-images.githubusercontent.com/18173173/78207888-f68fcd80-7470-11ea-9370-c048914aa6d8.png\">\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' These define the standard error codes and messages for Bokeh\nvalidation checks.\n\n1001 *(BAD_COLUMN_NAME)*\n A glyph has a property set to a field name that does not correspond to any\n column in the |GlyphRenderer|'s data source.\n\n1002 *(MISSING_GLYPH)*\n A |GlyphRenderer| has no glyph configured.\n\n1003 *(NO_SOURCE_FOR_GLYPH)*\n A |GlyphRenderer| has no data source configured.\n\n1004 *(REQUIRED_RANGE)*\n A |Plot| is missing one or more required default ranges (will result in\n blank plot).\n\n1005 *(MISSING_GOOGLE_API_KEY)*\n Google Maps API now requires an API key for all use. 
See\n https://developers.google.com/maps/documentation/javascript/get-api-key\n for more information on how to obtain your own, to use for the\n ``api_key`` property of your Google Map plot .\n\n1006 *(NON_MATCHING_DATA_SOURCES_ON_LEGEND_ITEM_RENDERERS)*\n All data_sources on ``LegendItem.renderers`` must match when LegendItem.label\n is type field.\n\n1007 *(MISSING_MERCATOR_DIMENSION)*\n ``MercatorTicker`` and ``MercatorTickFormatter``models must have their\n ``dimension`` property set to ``'lat'`` or ``'lon'``.\n\n1008 *(REQUIRED_SCALE)*\n A |Scale| on is missing one or more required default scales (will result in\n blank plot).\n\n1009 *(INCOMPATIBLE_SCALE_AND_RANGE)*\n A |Scale| type is incompatible with one or more ranges on the same plot\n dimension (will result in blank plot).\n\n1010 *(CDSVIEW_SOURCE_DOESNT_MATCH)*\n A |GlyphRenderer| has a ``CDSView`` whose source doesn't match the ``GlyphRenderer``'s\n data source.\n\n1011 *(MALFORMED_GRAPH_SOURCE)*\n The ``GraphSource`` is incorrectly configured.\n\n1012 *(INCOMPATIBLE_MAP_RANGE_TYPE)*\n Map plots can only support ``Range1d`` types, not data ranges.\n\n1013 *(INCOMPATIBLE_POINT_DRAW_RENDERER)*\n The ``PointDrawTool`` renderers may only reference ``XYGlyph`` models.\n\n1014 *(INCOMPATIBLE_BOX_EDIT_RENDERER)*\n The ``BoxEditTool`` renderers may only reference ``Rect`` glyph models.\n\n1015 *(INCOMPATIBLE_POLY_DRAW_RENDERER)*\n The ``PolyDrawTool`` renderers may only reference ``MultiLine`` and ``Patches`` glyph models.\n\n1016 *(INCOMPATIBLE_POLY_EDIT_RENDERER)*\n The ``PolyEditTool`` renderers may only reference ``MultiLine`` and ``Patches`` glyph models.\n\n1017 *(INCOMPATIBLE_POLY_EDIT_VERTEX_RENDERER)*\n The ``PolyEditTool`` vertex_renderer may only reference ``XYGlyph`` models.\n\n1018 *(NO_RANGE_TOOL_RANGES)*\n The ``RangeTool`` must have at least one of ``x_range`` or ``y_range`` configured\n\n1019 *(DUPLICATE_FACTORS)*\n ``FactorRange`` must specify a unique list of categorical factors for an axis.\n\n1020 *(BAD_EXTRA_RANGE_NAME)*\n An extra range name is configured with a name that does not correspond to any range.\n\n1021 *(EQUAL_SLIDER_START_END)*\n ``noUiSlider`` most have a nonequal start and end.\n\n1022 *(MIN_PREFERRED_MAX_WIDTH)*\n Expected min_width <= width <= max_width\n\n1023 *(MIN_PREFERRED_MAX_HEIGHT)*\n Expected min_height <= height <= max_height\n\n1024 *(CDSVIEW_FILTERS_WITH_CONNECTED)*\n ``CDSView`` filters are not compatible with glyphs with connected topology suchs as Line and Patch.\n\n9999 *(EXT)*\n Indicates that a custom error check has failed.\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nimport logging # isort:skip\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\ncodes = {\n 1001: (\"BAD_COLUMN_NAME\", \"Glyph refers to nonexistent column name. This could either be due to a misspelling or typo, or due to an expected column being missing. 
\"), # NOQA\n 1002: (\"MISSING_GLYPH\", \"Glyph renderer has no glyph set\"),\n 1003: (\"NO_SOURCE_FOR_GLYPH\", \"Glyph renderer has no data source\"),\n 1004: (\"REQUIRED_RANGE\", \"A required Range object is missing\"),\n 1005: (\"MISSING_GOOGLE_API_KEY\", \"Google now requires API keys for all Google Maps usage\"),\n 1006: (\"NON_MATCHING_DATA_SOURCES_ON_LEGEND_ITEM_RENDERERS\", \"LegendItem.label is a field, but renderer data sources don't match\"),\n 1007: (\"MISSING_MERCATOR_DIMENSION\", \"Mercator Tickers and Formatters must have their dimension property set to 'lat' or 'lon'\"),\n 1008: (\"REQUIRED_SCALE\", \"A required Scale object is missing\"),\n 1009: (\"INCOMPATIBLE_SCALE_AND_RANGE\", \"A Scale is incompatible with one or more ranges on the same plot dimension\"),\n 1010: (\"CDSVIEW_SOURCE_DOESNT_MATCH\", \"CDSView used by Glyph renderer must have a source that matches the Glyph renderer's data source\"),\n 1011: (\"MALFORMED_GRAPH_SOURCE\", \"The GraphSource is incorrectly configured\"),\n 1012: (\"INCOMPATIBLE_MAP_RANGE_TYPE\", \"Map plots can only support Range1d types, not data ranges\"),\n 1013: (\"INCOMPATIBLE_POINT_DRAW_RENDERER\", \"PointDrawTool renderers may only reference XYGlyph models\"),\n 1014: (\"INCOMPATIBLE_BOX_EDIT_RENDERER\", \"BoxEditTool renderers may only reference Rect glyph models\"),\n 1015: (\"INCOMPATIBLE_POLY_DRAW_RENDERER\", \"PolyDrawTool renderers may only reference MultiLine and Patches glyph models\"),\n 1016: (\"INCOMPATIBLE_POLY_EDIT_RENDERER\", \"PolyEditTool renderers may only reference MultiLine and Patches glyph models\"),\n 1017: (\"INCOMPATIBLE_POLY_EDIT_VERTEX_RENDERER\", \"PolyEditTool vertex_renderer may only reference XYGlyph models\"),\n 1018: (\"NO_RANGE_TOOL_RANGES\", \"RangeTool must have at least one of x_range or y_range configured\"),\n 1019: (\"DUPLICATE_FACTORS\", \"FactorRange must specicy a unique list of categorical factors for an axis\"),\n 1020: (\"BAD_EXTRA_RANGE_NAME\", \"An extra range name is configued with a name that does not correspond to any range\"),\n 1021: (\"EQUAL_SLIDER_START_END\", \"Slider 'start' and 'end' cannot be equal\"),\n 1022: (\"MIN_PREFERRED_MAX_WIDTH\", \"Expected min_width <= width <= max_width\"),\n 1023: (\"MIN_PREFERRED_MAX_HEIGHT\", \"Expected min_height <= height <= max_height\"),\n 1024: (\"CDSVIEW_FILTERS_WITH_CONNECTED\", \"CDSView filters are not compatible with glyphs with connected topology suchs as Line and Patch\"),\n 9999: (\"EXT\", \"Custom extension reports error\"),\n}\n\n__all__ = ()\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n\nfor code in codes:\n exec(\"%s = %d\" % (codes[code][0], code))\n", "path": "bokeh/core/validation/errors.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.\n# All rights 
reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' These define the standard error codes and messages for Bokeh\nvalidation checks.\n\n1001 *(BAD_COLUMN_NAME)*\n A glyph has a property set to a field name that does not correspond to any\n column in the |GlyphRenderer|'s data source.\n\n1002 *(MISSING_GLYPH)*\n A |GlyphRenderer| has no glyph configured.\n\n1003 *(NO_SOURCE_FOR_GLYPH)*\n A |GlyphRenderer| has no data source configured.\n\n1004 *(REQUIRED_RANGE)*\n A |Plot| is missing one or more required default ranges (will result in\n blank plot).\n\n1005 *(MISSING_GOOGLE_API_KEY)*\n Google Maps API now requires an API key for all use. See\n https://developers.google.com/maps/documentation/javascript/get-api-key\n for more information on how to obtain your own, to use for the\n ``api_key`` property of your Google Map plot .\n\n1006 *(NON_MATCHING_DATA_SOURCES_ON_LEGEND_ITEM_RENDERERS)*\n All data_sources on ``LegendItem.renderers`` must match when LegendItem.label\n is type field.\n\n1007 *(MISSING_MERCATOR_DIMENSION)*\n ``MercatorTicker`` and ``MercatorTickFormatter``models must have their\n ``dimension`` property set to ``'lat'`` or ``'lon'``.\n\n1008 *(REQUIRED_SCALE)*\n A |Scale| on is missing one or more required default scales (will result in\n blank plot).\n\n1009 *(INCOMPATIBLE_SCALE_AND_RANGE)*\n A |Scale| type is incompatible with one or more ranges on the same plot\n dimension (will result in blank plot).\n\n1010 *(CDSVIEW_SOURCE_DOESNT_MATCH)*\n A |GlyphRenderer| has a ``CDSView`` whose source doesn't match the ``GlyphRenderer``'s\n data source.\n\n1011 *(MALFORMED_GRAPH_SOURCE)*\n The ``GraphSource`` is incorrectly configured.\n\n1012 *(INCOMPATIBLE_MAP_RANGE_TYPE)*\n Map plots can only support ``Range1d`` types, not data ranges.\n\n1013 *(INCOMPATIBLE_POINT_DRAW_RENDERER)*\n The ``PointDrawTool`` renderers may only reference ``XYGlyph`` models.\n\n1014 *(INCOMPATIBLE_BOX_EDIT_RENDERER)*\n The ``BoxEditTool`` renderers may only reference ``Rect`` glyph models.\n\n1015 *(INCOMPATIBLE_POLY_DRAW_RENDERER)*\n The ``PolyDrawTool`` renderers may only reference ``MultiLine`` and ``Patches`` glyph models.\n\n1016 *(INCOMPATIBLE_POLY_EDIT_RENDERER)*\n The ``PolyEditTool`` renderers may only reference ``MultiLine`` and ``Patches`` glyph models.\n\n1017 *(INCOMPATIBLE_POLY_EDIT_VERTEX_RENDERER)*\n The ``PolyEditTool`` vertex_renderer may only reference ``XYGlyph`` models.\n\n1018 *(NO_RANGE_TOOL_RANGES)*\n The ``RangeTool`` must have at least one of ``x_range`` or ``y_range`` configured\n\n1019 *(DUPLICATE_FACTORS)*\n ``FactorRange`` must specify a unique list of categorical factors for an axis.\n\n1020 *(BAD_EXTRA_RANGE_NAME)*\n An extra range name is configured with a name that does not correspond to any range.\n\n1021 *(EQUAL_SLIDER_START_END)*\n ``noUiSlider`` most have a nonequal start and end.\n\n1022 *(MIN_PREFERRED_MAX_WIDTH)*\n Expected min_width <= width <= max_width\n\n1023 *(MIN_PREFERRED_MAX_HEIGHT)*\n Expected min_height <= height <= max_height\n\n1024 *(CDSVIEW_FILTERS_WITH_CONNECTED)*\n ``CDSView`` filters are not compatible with glyphs with connected topology such as Line or Patch.\n\n9999 *(EXT)*\n Indicates that a custom error check has failed.\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nimport logging # 
isort:skip\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\ncodes = {\n 1001: (\"BAD_COLUMN_NAME\", \"Glyph refers to nonexistent column name. This could either be due to a misspelling or typo, or due to an expected column being missing. \"), # NOQA\n 1002: (\"MISSING_GLYPH\", \"Glyph renderer has no glyph set\"),\n 1003: (\"NO_SOURCE_FOR_GLYPH\", \"Glyph renderer has no data source\"),\n 1004: (\"REQUIRED_RANGE\", \"A required Range object is missing\"),\n 1005: (\"MISSING_GOOGLE_API_KEY\", \"Google now requires API keys for all Google Maps usage\"),\n 1006: (\"NON_MATCHING_DATA_SOURCES_ON_LEGEND_ITEM_RENDERERS\", \"LegendItem.label is a field, but renderer data sources don't match\"),\n 1007: (\"MISSING_MERCATOR_DIMENSION\", \"Mercator Tickers and Formatters must have their dimension property set to 'lat' or 'lon'\"),\n 1008: (\"REQUIRED_SCALE\", \"A required Scale object is missing\"),\n 1009: (\"INCOMPATIBLE_SCALE_AND_RANGE\", \"A Scale is incompatible with one or more ranges on the same plot dimension\"),\n 1010: (\"CDSVIEW_SOURCE_DOESNT_MATCH\", \"CDSView used by Glyph renderer must have a source that matches the Glyph renderer's data source\"),\n 1011: (\"MALFORMED_GRAPH_SOURCE\", \"The GraphSource is incorrectly configured\"),\n 1012: (\"INCOMPATIBLE_MAP_RANGE_TYPE\", \"Map plots can only support Range1d types, not data ranges\"),\n 1013: (\"INCOMPATIBLE_POINT_DRAW_RENDERER\", \"PointDrawTool renderers may only reference XYGlyph models\"),\n 1014: (\"INCOMPATIBLE_BOX_EDIT_RENDERER\", \"BoxEditTool renderers may only reference Rect glyph models\"),\n 1015: (\"INCOMPATIBLE_POLY_DRAW_RENDERER\", \"PolyDrawTool renderers may only reference MultiLine and Patches glyph models\"),\n 1016: (\"INCOMPATIBLE_POLY_EDIT_RENDERER\", \"PolyEditTool renderers may only reference MultiLine and Patches glyph models\"),\n 1017: (\"INCOMPATIBLE_POLY_EDIT_VERTEX_RENDERER\", \"PolyEditTool vertex_renderer may only reference XYGlyph models\"),\n 1018: (\"NO_RANGE_TOOL_RANGES\", \"RangeTool must have at least one of x_range or y_range configured\"),\n 1019: (\"DUPLICATE_FACTORS\", \"FactorRange must specify a unique list of categorical factors for an axis\"),\n 1020: (\"BAD_EXTRA_RANGE_NAME\", \"An extra range name is configued with a name that does not correspond to any range\"),\n 1021: (\"EQUAL_SLIDER_START_END\", \"Slider 'start' and 'end' cannot be equal\"),\n 1022: (\"MIN_PREFERRED_MAX_WIDTH\", \"Expected min_width <= width <= max_width\"),\n 1023: (\"MIN_PREFERRED_MAX_HEIGHT\", \"Expected min_height <= height <= max_height\"),\n 1024: (\"CDSVIEW_FILTERS_WITH_CONNECTED\", \"CDSView filters are not compatible with glyphs with connected topology such as Line or Patch\"),\n 9999: (\"EXT\", \"Custom extension reports error\"),\n}\n\n__all__ = ()\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev 
API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n\nfor code in codes:\n exec(\"%s = %d\" % (codes[code][0], code))\n", "path": "bokeh/core/validation/errors.py"}]} | 3,319 | 522 |
gh_patches_debug_25356 | rasdani/github-patches | git_diff | python-discord__bot-448 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The off-topic channel name updating task fails on non-success API response
The background task we create to update off-topic names at midnight UTC fails if it receives a non-success API response. The reason is that our `bot.api_client` will raise the `bot.api.ResponseCodeError` exception on non-success response status codes. This means that the off-topic channel names won't be updated again until either the bot is restarted or the task is started manually again by an admin.
The relevant lines of code:
https://github.com/python-discord/bot/blob/e70c96248bd7b548412811a4f1ffe88bed41f815/bot/cogs/off_topic_names.py#L59-L61
To handle it, we could simply include a `try-except` block and log the exception in the `except` block. I'm not sure if we want to log the entire exception, since the exception text could be a [massive HTML response generated by Cloudflare](https://paste.pythondiscord.com/ohibicedif). Logging the failure with the response code should generally give us enough to determine the cause of the failure.
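
As a rough illustration only (not a final implementation), the relevant part of `update_names` could look something like the sketch below. `Bot`, `log` and the rest of the loop are the existing names from the cog, and it assumes the raised error exposes the response via `e.response`:

```py
# Illustrative sketch only -- reuses the existing names from the cog.
from bot.api import ResponseCodeError


async def update_names(bot: Bot) -> None:
    """Background updater task that performs the daily channel name update."""
    while True:
        ...  # existing sleep-until-midnight logic, unchanged

        try:
            channel_0_name, channel_1_name, channel_2_name = await bot.api_client.get(
                'bot/off-topic-channel-names', params={'random_items': 3}
            )
        except ResponseCodeError as e:
            # Log only the status code, not the (potentially huge) response body.
            log.error(f"Failed to get new off-topic channel names: code {e.response.status}")
            continue

        ...  # existing channel.edit() calls, unchanged
```

Using `continue` means a failed fetch just skips that day's rename while the task stays alive for the next midnight.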
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/cogs/off_topic_names.py`
Content:
```
1 import asyncio
2 import difflib
3 import logging
4 from datetime import datetime, timedelta
5
6 from discord import Colour, Embed
7 from discord.ext.commands import BadArgument, Bot, Cog, Context, Converter, group
8
9 from bot.constants import Channels, MODERATION_ROLES
10 from bot.decorators import with_role
11 from bot.pagination import LinePaginator
12
13
14 CHANNELS = (Channels.off_topic_0, Channels.off_topic_1, Channels.off_topic_2)
15 log = logging.getLogger(__name__)
16
17
18 class OffTopicName(Converter):
19 """A converter that ensures an added off-topic name is valid."""
20
21 @staticmethod
22 async def convert(ctx: Context, argument: str) -> str:
23 """Attempt to replace any invalid characters with their approximate Unicode equivalent."""
24 allowed_characters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ!?'`-"
25
26 if not (2 <= len(argument) <= 96):
27 raise BadArgument("Channel name must be between 2 and 96 chars long")
28
29 elif not all(c.isalnum() or c in allowed_characters for c in argument):
30 raise BadArgument(
31 "Channel name must only consist of "
32 "alphanumeric characters, minus signs or apostrophes."
33 )
34
35 # Replace invalid characters with unicode alternatives.
36 table = str.maketrans(
37 allowed_characters, '𝖠𝖡𝖢𝖣𝖤𝖥𝖦𝖧𝖨𝖩𝖪𝖫𝖬𝖭𝖮𝖯𝖰𝖱𝖲𝖳𝖴𝖵𝖶𝖷𝖸𝖹ǃ?’’-'
38 )
39 return argument.translate(table)
40
41
42 async def update_names(bot: Bot) -> None:
43 """Background updater task that performs the daily channel name update."""
44 while True:
45 # Since we truncate the compute timedelta to seconds, we add one second to ensure
46 # we go past midnight in the `seconds_to_sleep` set below.
47 today_at_midnight = datetime.utcnow().replace(microsecond=0, second=0, minute=0, hour=0)
48 next_midnight = today_at_midnight + timedelta(days=1)
49 seconds_to_sleep = (next_midnight - datetime.utcnow()).seconds + 1
50 await asyncio.sleep(seconds_to_sleep)
51
52 channel_0_name, channel_1_name, channel_2_name = await bot.api_client.get(
53 'bot/off-topic-channel-names', params={'random_items': 3}
54 )
55 channel_0, channel_1, channel_2 = (bot.get_channel(channel_id) for channel_id in CHANNELS)
56
57 await channel_0.edit(name=f'ot0-{channel_0_name}')
58 await channel_1.edit(name=f'ot1-{channel_1_name}')
59 await channel_2.edit(name=f'ot2-{channel_2_name}')
60 log.debug(
61 "Updated off-topic channel names to"
62 f" {channel_0_name}, {channel_1_name} and {channel_2_name}"
63 )
64
65
66 class OffTopicNames(Cog):
67 """Commands related to managing the off-topic category channel names."""
68
69 def __init__(self, bot: Bot):
70 self.bot = bot
71 self.updater_task = None
72
73 def cog_unload(self) -> None:
74 """Cancel any running updater tasks on cog unload."""
75 if self.updater_task is not None:
76 self.updater_task.cancel()
77
78 @Cog.listener()
79 async def on_ready(self) -> None:
80 """Start off-topic channel updating event loop if it hasn't already started."""
81 if self.updater_task is None:
82 coro = update_names(self.bot)
83 self.updater_task = self.bot.loop.create_task(coro)
84
85 @group(name='otname', aliases=('otnames', 'otn'), invoke_without_command=True)
86 @with_role(*MODERATION_ROLES)
87 async def otname_group(self, ctx: Context) -> None:
88 """Add or list items from the off-topic channel name rotation."""
89 await ctx.invoke(self.bot.get_command("help"), "otname")
90
91 @otname_group.command(name='add', aliases=('a',))
92 @with_role(*MODERATION_ROLES)
93 async def add_command(self, ctx: Context, *names: OffTopicName) -> None:
94 """Adds a new off-topic name to the rotation."""
95 # Chain multiple words to a single one
96 name = "-".join(names)
97
98 await self.bot.api_client.post(f'bot/off-topic-channel-names', params={'name': name})
99 log.info(
100 f"{ctx.author.name}#{ctx.author.discriminator}"
101 f" added the off-topic channel name '{name}"
102 )
103 await ctx.send(f":ok_hand: Added `{name}` to the names list.")
104
105 @otname_group.command(name='delete', aliases=('remove', 'rm', 'del', 'd'))
106 @with_role(*MODERATION_ROLES)
107 async def delete_command(self, ctx: Context, *names: OffTopicName) -> None:
108 """Removes a off-topic name from the rotation."""
109 # Chain multiple words to a single one
110 name = "-".join(names)
111
112 await self.bot.api_client.delete(f'bot/off-topic-channel-names/{name}')
113 log.info(
114 f"{ctx.author.name}#{ctx.author.discriminator}"
115 f" deleted the off-topic channel name '{name}"
116 )
117 await ctx.send(f":ok_hand: Removed `{name}` from the names list.")
118
119 @otname_group.command(name='list', aliases=('l',))
120 @with_role(*MODERATION_ROLES)
121 async def list_command(self, ctx: Context) -> None:
122 """
123 Lists all currently known off-topic channel names in a paginator.
124
125 Restricted to Moderator and above to not spoil the surprise.
126 """
127 result = await self.bot.api_client.get('bot/off-topic-channel-names')
128 lines = sorted(f"• {name}" for name in result)
129 embed = Embed(
130 title=f"Known off-topic names (`{len(result)}` total)",
131 colour=Colour.blue()
132 )
133 if result:
134 await LinePaginator.paginate(lines, ctx, embed, max_size=400, empty=False)
135 else:
136 embed.description = "Hmmm, seems like there's nothing here yet."
137 await ctx.send(embed=embed)
138
139 @otname_group.command(name='search', aliases=('s',))
140 @with_role(*MODERATION_ROLES)
141 async def search_command(self, ctx: Context, *, query: OffTopicName) -> None:
142 """Search for an off-topic name."""
143 result = await self.bot.api_client.get('bot/off-topic-channel-names')
144 in_matches = {name for name in result if query in name}
145 close_matches = difflib.get_close_matches(query, result, n=10, cutoff=0.70)
146 lines = sorted(f"• {name}" for name in in_matches.union(close_matches))
147 embed = Embed(
148 title=f"Query results",
149 colour=Colour.blue()
150 )
151
152 if lines:
153 await LinePaginator.paginate(lines, ctx, embed, max_size=400, empty=False)
154 else:
155 embed.description = "Nothing found."
156 await ctx.send(embed=embed)
157
158
159 def setup(bot: Bot) -> None:
160 """Off topic names cog load."""
161 bot.add_cog(OffTopicNames(bot))
162 log.info("Cog loaded: OffTopicNames")
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bot/cogs/off_topic_names.py b/bot/cogs/off_topic_names.py
--- a/bot/cogs/off_topic_names.py
+++ b/bot/cogs/off_topic_names.py
@@ -6,6 +6,7 @@
from discord import Colour, Embed
from discord.ext.commands import BadArgument, Bot, Cog, Context, Converter, group
+from bot.api import ResponseCodeError
from bot.constants import Channels, MODERATION_ROLES
from bot.decorators import with_role
from bot.pagination import LinePaginator
@@ -49,9 +50,13 @@
seconds_to_sleep = (next_midnight - datetime.utcnow()).seconds + 1
await asyncio.sleep(seconds_to_sleep)
- channel_0_name, channel_1_name, channel_2_name = await bot.api_client.get(
- 'bot/off-topic-channel-names', params={'random_items': 3}
- )
+ try:
+ channel_0_name, channel_1_name, channel_2_name = await bot.api_client.get(
+ 'bot/off-topic-channel-names', params={'random_items': 3}
+ )
+ except ResponseCodeError as e:
+ log.error(f"Failed to get new off topic channel names: code {e.response.status}")
+ continue
channel_0, channel_1, channel_2 = (bot.get_channel(channel_id) for channel_id in CHANNELS)
await channel_0.edit(name=f'ot0-{channel_0_name}')
| {"golden_diff": "diff --git a/bot/cogs/off_topic_names.py b/bot/cogs/off_topic_names.py\n--- a/bot/cogs/off_topic_names.py\n+++ b/bot/cogs/off_topic_names.py\n@@ -6,6 +6,7 @@\n from discord import Colour, Embed\n from discord.ext.commands import BadArgument, Bot, Cog, Context, Converter, group\n \n+from bot.api import ResponseCodeError\n from bot.constants import Channels, MODERATION_ROLES\n from bot.decorators import with_role\n from bot.pagination import LinePaginator\n@@ -49,9 +50,13 @@\n seconds_to_sleep = (next_midnight - datetime.utcnow()).seconds + 1\n await asyncio.sleep(seconds_to_sleep)\n \n- channel_0_name, channel_1_name, channel_2_name = await bot.api_client.get(\n- 'bot/off-topic-channel-names', params={'random_items': 3}\n- )\n+ try:\n+ channel_0_name, channel_1_name, channel_2_name = await bot.api_client.get(\n+ 'bot/off-topic-channel-names', params={'random_items': 3}\n+ )\n+ except ResponseCodeError as e:\n+ log.error(f\"Failed to get new off topic channel names: code {e.response.status}\")\n+ continue\n channel_0, channel_1, channel_2 = (bot.get_channel(channel_id) for channel_id in CHANNELS)\n \n await channel_0.edit(name=f'ot0-{channel_0_name}')\n", "issue": "The off-topic channel name updating task fails on non-success API response\nThe background task we create to update off-topic names at midnight UTC fails if it receives a non-success API response. The reason is that our `bot.api_client` will raise the `bot.api.ResponseCodeError` exception on non-success response status codes. This means that the off-topic channel names won't be updated again until either the bot is restarted or the task is started manually again by an admin.\r\n\r\nThe relevant lines of code:\r\nhttps://github.com/python-discord/bot/blob/e70c96248bd7b548412811a4f1ffe88bed41f815/bot/cogs/off_topic_names.py#L59-L61\r\n\r\nTo handle it, we could simply include a `try-except` block and log the exception in the `except` block. I'm not sure if we want to log the entire exception, since the exception text could be a [massive HTML-response generated by cloudflare](https://paste.pythondiscord.com/ohibicedif). 
Logging the failure with the response code should generally give us enough to determine the cause of the failure.\n", "before_files": [{"content": "import asyncio\nimport difflib\nimport logging\nfrom datetime import datetime, timedelta\n\nfrom discord import Colour, Embed\nfrom discord.ext.commands import BadArgument, Bot, Cog, Context, Converter, group\n\nfrom bot.constants import Channels, MODERATION_ROLES\nfrom bot.decorators import with_role\nfrom bot.pagination import LinePaginator\n\n\nCHANNELS = (Channels.off_topic_0, Channels.off_topic_1, Channels.off_topic_2)\nlog = logging.getLogger(__name__)\n\n\nclass OffTopicName(Converter):\n \"\"\"A converter that ensures an added off-topic name is valid.\"\"\"\n\n @staticmethod\n async def convert(ctx: Context, argument: str) -> str:\n \"\"\"Attempt to replace any invalid characters with their approximate Unicode equivalent.\"\"\"\n allowed_characters = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ!?'`-\"\n\n if not (2 <= len(argument) <= 96):\n raise BadArgument(\"Channel name must be between 2 and 96 chars long\")\n\n elif not all(c.isalnum() or c in allowed_characters for c in argument):\n raise BadArgument(\n \"Channel name must only consist of \"\n \"alphanumeric characters, minus signs or apostrophes.\"\n )\n\n # Replace invalid characters with unicode alternatives.\n table = str.maketrans(\n allowed_characters, '\ud835\udda0\ud835\udda1\ud835\udda2\ud835\udda3\ud835\udda4\ud835\udda5\ud835\udda6\ud835\udda7\ud835\udda8\ud835\udda9\ud835\uddaa\ud835\uddab\ud835\uddac\ud835\uddad\ud835\uddae\ud835\uddaf\ud835\uddb0\ud835\uddb1\ud835\uddb2\ud835\uddb3\ud835\uddb4\ud835\uddb5\ud835\uddb6\ud835\uddb7\ud835\uddb8\ud835\uddb9\u01c3\uff1f\u2019\u2019-'\n )\n return argument.translate(table)\n\n\nasync def update_names(bot: Bot) -> None:\n \"\"\"Background updater task that performs the daily channel name update.\"\"\"\n while True:\n # Since we truncate the compute timedelta to seconds, we add one second to ensure\n # we go past midnight in the `seconds_to_sleep` set below.\n today_at_midnight = datetime.utcnow().replace(microsecond=0, second=0, minute=0, hour=0)\n next_midnight = today_at_midnight + timedelta(days=1)\n seconds_to_sleep = (next_midnight - datetime.utcnow()).seconds + 1\n await asyncio.sleep(seconds_to_sleep)\n\n channel_0_name, channel_1_name, channel_2_name = await bot.api_client.get(\n 'bot/off-topic-channel-names', params={'random_items': 3}\n )\n channel_0, channel_1, channel_2 = (bot.get_channel(channel_id) for channel_id in CHANNELS)\n\n await channel_0.edit(name=f'ot0-{channel_0_name}')\n await channel_1.edit(name=f'ot1-{channel_1_name}')\n await channel_2.edit(name=f'ot2-{channel_2_name}')\n log.debug(\n \"Updated off-topic channel names to\"\n f\" {channel_0_name}, {channel_1_name} and {channel_2_name}\"\n )\n\n\nclass OffTopicNames(Cog):\n \"\"\"Commands related to managing the off-topic category channel names.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self.updater_task = None\n\n def cog_unload(self) -> None:\n \"\"\"Cancel any running updater tasks on cog unload.\"\"\"\n if self.updater_task is not None:\n self.updater_task.cancel()\n\n @Cog.listener()\n async def on_ready(self) -> None:\n \"\"\"Start off-topic channel updating event loop if it hasn't already started.\"\"\"\n if self.updater_task is None:\n coro = update_names(self.bot)\n self.updater_task = self.bot.loop.create_task(coro)\n\n @group(name='otname', aliases=('otnames', 'otn'), invoke_without_command=True)\n @with_role(*MODERATION_ROLES)\n async 
def otname_group(self, ctx: Context) -> None:\n \"\"\"Add or list items from the off-topic channel name rotation.\"\"\"\n await ctx.invoke(self.bot.get_command(\"help\"), \"otname\")\n\n @otname_group.command(name='add', aliases=('a',))\n @with_role(*MODERATION_ROLES)\n async def add_command(self, ctx: Context, *names: OffTopicName) -> None:\n \"\"\"Adds a new off-topic name to the rotation.\"\"\"\n # Chain multiple words to a single one\n name = \"-\".join(names)\n\n await self.bot.api_client.post(f'bot/off-topic-channel-names', params={'name': name})\n log.info(\n f\"{ctx.author.name}#{ctx.author.discriminator}\"\n f\" added the off-topic channel name '{name}\"\n )\n await ctx.send(f\":ok_hand: Added `{name}` to the names list.\")\n\n @otname_group.command(name='delete', aliases=('remove', 'rm', 'del', 'd'))\n @with_role(*MODERATION_ROLES)\n async def delete_command(self, ctx: Context, *names: OffTopicName) -> None:\n \"\"\"Removes a off-topic name from the rotation.\"\"\"\n # Chain multiple words to a single one\n name = \"-\".join(names)\n\n await self.bot.api_client.delete(f'bot/off-topic-channel-names/{name}')\n log.info(\n f\"{ctx.author.name}#{ctx.author.discriminator}\"\n f\" deleted the off-topic channel name '{name}\"\n )\n await ctx.send(f\":ok_hand: Removed `{name}` from the names list.\")\n\n @otname_group.command(name='list', aliases=('l',))\n @with_role(*MODERATION_ROLES)\n async def list_command(self, ctx: Context) -> None:\n \"\"\"\n Lists all currently known off-topic channel names in a paginator.\n\n Restricted to Moderator and above to not spoil the surprise.\n \"\"\"\n result = await self.bot.api_client.get('bot/off-topic-channel-names')\n lines = sorted(f\"\u2022 {name}\" for name in result)\n embed = Embed(\n title=f\"Known off-topic names (`{len(result)}` total)\",\n colour=Colour.blue()\n )\n if result:\n await LinePaginator.paginate(lines, ctx, embed, max_size=400, empty=False)\n else:\n embed.description = \"Hmmm, seems like there's nothing here yet.\"\n await ctx.send(embed=embed)\n\n @otname_group.command(name='search', aliases=('s',))\n @with_role(*MODERATION_ROLES)\n async def search_command(self, ctx: Context, *, query: OffTopicName) -> None:\n \"\"\"Search for an off-topic name.\"\"\"\n result = await self.bot.api_client.get('bot/off-topic-channel-names')\n in_matches = {name for name in result if query in name}\n close_matches = difflib.get_close_matches(query, result, n=10, cutoff=0.70)\n lines = sorted(f\"\u2022 {name}\" for name in in_matches.union(close_matches))\n embed = Embed(\n title=f\"Query results\",\n colour=Colour.blue()\n )\n\n if lines:\n await LinePaginator.paginate(lines, ctx, embed, max_size=400, empty=False)\n else:\n embed.description = \"Nothing found.\"\n await ctx.send(embed=embed)\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Off topic names cog load.\"\"\"\n bot.add_cog(OffTopicNames(bot))\n log.info(\"Cog loaded: OffTopicNames\")\n", "path": "bot/cogs/off_topic_names.py"}], "after_files": [{"content": "import asyncio\nimport difflib\nimport logging\nfrom datetime import datetime, timedelta\n\nfrom discord import Colour, Embed\nfrom discord.ext.commands import BadArgument, Bot, Cog, Context, Converter, group\n\nfrom bot.api import ResponseCodeError\nfrom bot.constants import Channels, MODERATION_ROLES\nfrom bot.decorators import with_role\nfrom bot.pagination import LinePaginator\n\n\nCHANNELS = (Channels.off_topic_0, Channels.off_topic_1, Channels.off_topic_2)\nlog = logging.getLogger(__name__)\n\n\nclass OffTopicName(Converter):\n 
\"\"\"A converter that ensures an added off-topic name is valid.\"\"\"\n\n @staticmethod\n async def convert(ctx: Context, argument: str) -> str:\n \"\"\"Attempt to replace any invalid characters with their approximate Unicode equivalent.\"\"\"\n allowed_characters = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ!?'`-\"\n\n if not (2 <= len(argument) <= 96):\n raise BadArgument(\"Channel name must be between 2 and 96 chars long\")\n\n elif not all(c.isalnum() or c in allowed_characters for c in argument):\n raise BadArgument(\n \"Channel name must only consist of \"\n \"alphanumeric characters, minus signs or apostrophes.\"\n )\n\n # Replace invalid characters with unicode alternatives.\n table = str.maketrans(\n allowed_characters, '\ud835\udda0\ud835\udda1\ud835\udda2\ud835\udda3\ud835\udda4\ud835\udda5\ud835\udda6\ud835\udda7\ud835\udda8\ud835\udda9\ud835\uddaa\ud835\uddab\ud835\uddac\ud835\uddad\ud835\uddae\ud835\uddaf\ud835\uddb0\ud835\uddb1\ud835\uddb2\ud835\uddb3\ud835\uddb4\ud835\uddb5\ud835\uddb6\ud835\uddb7\ud835\uddb8\ud835\uddb9\u01c3\uff1f\u2019\u2019-'\n )\n return argument.translate(table)\n\n\nasync def update_names(bot: Bot) -> None:\n \"\"\"Background updater task that performs the daily channel name update.\"\"\"\n while True:\n # Since we truncate the compute timedelta to seconds, we add one second to ensure\n # we go past midnight in the `seconds_to_sleep` set below.\n today_at_midnight = datetime.utcnow().replace(microsecond=0, second=0, minute=0, hour=0)\n next_midnight = today_at_midnight + timedelta(days=1)\n seconds_to_sleep = (next_midnight - datetime.utcnow()).seconds + 1\n await asyncio.sleep(seconds_to_sleep)\n\n try:\n channel_0_name, channel_1_name, channel_2_name = await bot.api_client.get(\n 'bot/off-topic-channel-names', params={'random_items': 3}\n )\n except ResponseCodeError as e:\n log.error(f\"Failed to get new off topic channel names: code {e.response.status}\")\n continue\n channel_0, channel_1, channel_2 = (bot.get_channel(channel_id) for channel_id in CHANNELS)\n\n await channel_0.edit(name=f'ot0-{channel_0_name}')\n await channel_1.edit(name=f'ot1-{channel_1_name}')\n await channel_2.edit(name=f'ot2-{channel_2_name}')\n log.debug(\n \"Updated off-topic channel names to\"\n f\" {channel_0_name}, {channel_1_name} and {channel_2_name}\"\n )\n\n\nclass OffTopicNames(Cog):\n \"\"\"Commands related to managing the off-topic category channel names.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self.updater_task = None\n\n def cog_unload(self) -> None:\n \"\"\"Cancel any running updater tasks on cog unload.\"\"\"\n if self.updater_task is not None:\n self.updater_task.cancel()\n\n @Cog.listener()\n async def on_ready(self) -> None:\n \"\"\"Start off-topic channel updating event loop if it hasn't already started.\"\"\"\n if self.updater_task is None:\n coro = update_names(self.bot)\n self.updater_task = self.bot.loop.create_task(coro)\n\n @group(name='otname', aliases=('otnames', 'otn'), invoke_without_command=True)\n @with_role(*MODERATION_ROLES)\n async def otname_group(self, ctx: Context) -> None:\n \"\"\"Add or list items from the off-topic channel name rotation.\"\"\"\n await ctx.invoke(self.bot.get_command(\"help\"), \"otname\")\n\n @otname_group.command(name='add', aliases=('a',))\n @with_role(*MODERATION_ROLES)\n async def add_command(self, ctx: Context, *names: OffTopicName) -> None:\n \"\"\"Adds a new off-topic name to the rotation.\"\"\"\n # Chain multiple words to a single one\n name = \"-\".join(names)\n\n await 
self.bot.api_client.post(f'bot/off-topic-channel-names', params={'name': name})\n log.info(\n f\"{ctx.author.name}#{ctx.author.discriminator}\"\n f\" added the off-topic channel name '{name}\"\n )\n await ctx.send(f\":ok_hand: Added `{name}` to the names list.\")\n\n @otname_group.command(name='delete', aliases=('remove', 'rm', 'del', 'd'))\n @with_role(*MODERATION_ROLES)\n async def delete_command(self, ctx: Context, *names: OffTopicName) -> None:\n \"\"\"Removes a off-topic name from the rotation.\"\"\"\n # Chain multiple words to a single one\n name = \"-\".join(names)\n\n await self.bot.api_client.delete(f'bot/off-topic-channel-names/{name}')\n log.info(\n f\"{ctx.author.name}#{ctx.author.discriminator}\"\n f\" deleted the off-topic channel name '{name}\"\n )\n await ctx.send(f\":ok_hand: Removed `{name}` from the names list.\")\n\n @otname_group.command(name='list', aliases=('l',))\n @with_role(*MODERATION_ROLES)\n async def list_command(self, ctx: Context) -> None:\n \"\"\"\n Lists all currently known off-topic channel names in a paginator.\n\n Restricted to Moderator and above to not spoil the surprise.\n \"\"\"\n result = await self.bot.api_client.get('bot/off-topic-channel-names')\n lines = sorted(f\"\u2022 {name}\" for name in result)\n embed = Embed(\n title=f\"Known off-topic names (`{len(result)}` total)\",\n colour=Colour.blue()\n )\n if result:\n await LinePaginator.paginate(lines, ctx, embed, max_size=400, empty=False)\n else:\n embed.description = \"Hmmm, seems like there's nothing here yet.\"\n await ctx.send(embed=embed)\n\n @otname_group.command(name='search', aliases=('s',))\n @with_role(*MODERATION_ROLES)\n async def search_command(self, ctx: Context, *, query: OffTopicName) -> None:\n \"\"\"Search for an off-topic name.\"\"\"\n result = await self.bot.api_client.get('bot/off-topic-channel-names')\n in_matches = {name for name in result if query in name}\n close_matches = difflib.get_close_matches(query, result, n=10, cutoff=0.70)\n lines = sorted(f\"\u2022 {name}\" for name in in_matches.union(close_matches))\n embed = Embed(\n title=f\"Query results\",\n colour=Colour.blue()\n )\n\n if lines:\n await LinePaginator.paginate(lines, ctx, embed, max_size=400, empty=False)\n else:\n embed.description = \"Nothing found.\"\n await ctx.send(embed=embed)\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Off topic names cog load.\"\"\"\n bot.add_cog(OffTopicNames(bot))\n log.info(\"Cog loaded: OffTopicNames\")\n", "path": "bot/cogs/off_topic_names.py"}]} | 2,485 | 324 |
gh_patches_debug_15891 | rasdani/github-patches | git_diff | openshift__openshift-ansible-7849 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Any chance of getting this playbook to work without Cinder?
#### Description
Provide a brief description of your issue here. For example:
We have access to an OpenStack deployment without Cinder. Is there any chance of getting this playbook to work without Cinder?
##### Version
Please put the following version information in the code block
indicated below.
* Your ansible version per `ansible --version`
If you're operating from a **git clone**:
* The output of `git describe`
If you're running from playbooks installed via RPM or
`atomic-openshift-utils`
* The output of `rpm -q atomic-openshift-utils openshift-ansible`
Place the output between the code block below:
```
$ ansible --version
ansible 2.5.0
config file = /home/arthur/local/openshift/ansible.cfg
configured module search path = [u'/home/arthur/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/arthur/.virtualenvs/ansible/local/lib/python2.7/site-packages/ansible
executable location = /home/arthur/.virtualenvs/ansible/bin/ansible
python version = 2.7.14 (default, Sep 23 2017, 22:06:14) [GCC 7.2.0]
$ git describe
openshift-ansible-3.10.0-0.15.0-9-gf28aba492
```
##### Steps To Reproduce
When I saw the following comment in all.yml
```
# If you want to use the VM storage instead of Cinder volumes, set this to `true`.
```
I thought this might be possible, but I get the following error when executing the Ansible playbook:
```
$ ansible-playbook [snip]
[snip]
in endpoint_data_for raise
exceptions.EndpointNotFound(msg) keystoneauth1.exceptions.catalog.EndpointNotFound: public
endpoint for volumev2 service in RegionOne region not found
[snip]
$ openstack volume list
public endpoint for volumev2 service in RegionOne region not found
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `playbooks/openstack/inventory.py`
Content:
```
1 #!/usr/bin/env python
2 """
3 This is an Ansible dynamic inventory for OpenStack.
4
5 It requires your OpenStack credentials to be set in clouds.yaml or your shell
6 environment.
7
8 """
9
10 from __future__ import print_function
11
12 from collections import Mapping
13 import json
14 import os
15
16 import shade
17
18
19 def base_openshift_inventory(cluster_hosts):
20 '''Set the base openshift inventory.'''
21 inventory = {}
22
23 masters = [server.name for server in cluster_hosts
24 if server.metadata['host-type'] == 'master']
25
26 etcd = [server.name for server in cluster_hosts
27 if server.metadata['host-type'] == 'etcd']
28 if not etcd:
29 etcd = masters
30
31 infra_hosts = [server.name for server in cluster_hosts
32 if server.metadata['host-type'] == 'node' and
33 server.metadata['sub-host-type'] == 'infra']
34
35 app = [server.name for server in cluster_hosts
36 if server.metadata['host-type'] == 'node' and
37 server.metadata['sub-host-type'] == 'app']
38
39 cns = [server.name for server in cluster_hosts
40 if server.metadata['host-type'] == 'cns']
41
42 nodes = list(set(masters + infra_hosts + app + cns))
43
44 dns = [server.name for server in cluster_hosts
45 if server.metadata['host-type'] == 'dns']
46
47 load_balancers = [server.name for server in cluster_hosts
48 if server.metadata['host-type'] == 'lb']
49
50 osev3 = list(set(nodes + etcd + load_balancers))
51
52 inventory['cluster_hosts'] = {'hosts': [s.name for s in cluster_hosts]}
53 inventory['OSEv3'] = {'hosts': osev3}
54 inventory['masters'] = {'hosts': masters}
55 inventory['etcd'] = {'hosts': etcd}
56 inventory['nodes'] = {'hosts': nodes}
57 inventory['infra_hosts'] = {'hosts': infra_hosts}
58 inventory['app'] = {'hosts': app}
59 inventory['glusterfs'] = {'hosts': cns}
60 inventory['dns'] = {'hosts': dns}
61 inventory['lb'] = {'hosts': load_balancers}
62 inventory['localhost'] = {'ansible_connection': 'local'}
63
64 return inventory
65
66
67 def get_docker_storage_mountpoints(volumes):
68 '''Check volumes to see if they're being used for docker storage'''
69 docker_storage_mountpoints = {}
70 for volume in volumes:
71 if volume.metadata.get('purpose') == "openshift_docker_storage":
72 for attachment in volume.attachments:
73 if attachment.server_id in docker_storage_mountpoints:
74 docker_storage_mountpoints[attachment.server_id].append(attachment.device)
75 else:
76 docker_storage_mountpoints[attachment.server_id] = [attachment.device]
77 return docker_storage_mountpoints
78
79
80 def _get_hostvars(server, docker_storage_mountpoints):
81 ssh_ip_address = server.public_v4 or server.private_v4
82 hostvars = {
83 'ansible_host': ssh_ip_address
84 }
85
86 public_v4 = server.public_v4 or server.private_v4
87 if public_v4:
88 hostvars['public_v4'] = server.public_v4
89 hostvars['openshift_public_ip'] = server.public_v4
90 # TODO(shadower): what about multiple networks?
91 if server.private_v4:
92 hostvars['private_v4'] = server.private_v4
93 hostvars['openshift_ip'] = server.private_v4
94
95 # NOTE(shadower): Yes, we set both hostname and IP to the private
96 # IP address for each node. OpenStack doesn't resolve nodes by
97 # name at all, so using a hostname here would require an internal
98 # DNS which would complicate the setup and potentially introduce
99 # performance issues.
100 hostvars['openshift_hostname'] = server.metadata.get(
101 'openshift_hostname', server.private_v4)
102 hostvars['openshift_public_hostname'] = server.name
103
104 if server.metadata['host-type'] == 'cns':
105 hostvars['glusterfs_devices'] = ['/dev/nvme0n1']
106
107 node_labels = server.metadata.get('node_labels')
108 # NOTE(shadower): the node_labels value must be a dict not string
109 if not isinstance(node_labels, Mapping):
110 node_labels = json.loads(node_labels)
111
112 if node_labels:
113 hostvars['openshift_node_labels'] = node_labels
114
115 # check for attached docker storage volumes
116 if 'os-extended-volumes:volumes_attached' in server:
117 if server.id in docker_storage_mountpoints:
118 hostvars['docker_storage_mountpoints'] = ' '.join(
119 docker_storage_mountpoints[server.id])
120 return hostvars
121
122
123 def build_inventory():
124 '''Build the dynamic inventory.'''
125 cloud = shade.openstack_cloud()
126
127 # TODO(shadower): filter the servers based on the `OPENSHIFT_CLUSTER`
128 # environment variable.
129 cluster_hosts = [
130 server for server in cloud.list_servers()
131 if 'metadata' in server and 'clusterid' in server.metadata]
132
133 inventory = base_openshift_inventory(cluster_hosts)
134
135 for server in cluster_hosts:
136 if 'group' in server.metadata:
137 group = server.metadata.get('group')
138 if group not in inventory:
139 inventory[group] = {'hosts': []}
140 inventory[group]['hosts'].append(server.name)
141
142 inventory['_meta'] = {'hostvars': {}}
143
144 # cinder volumes used for docker storage
145 docker_storage_mountpoints = get_docker_storage_mountpoints(
146 cloud.list_volumes())
147 for server in cluster_hosts:
148 inventory['_meta']['hostvars'][server.name] = _get_hostvars(
149 server,
150 docker_storage_mountpoints)
151
152 stout = _get_stack_outputs(cloud)
153 if stout is not None:
154 try:
155 inventory['localhost'].update({
156 'openshift_openstack_api_lb_provider':
157 stout['api_lb_provider'],
158 'openshift_openstack_api_lb_port_id':
159 stout['api_lb_vip_port_id'],
160 'openshift_openstack_api_lb_sg_id':
161 stout['api_lb_sg_id']})
162 except KeyError:
163 pass # Not an API load balanced deployment
164
165 try:
166 inventory['OSEv3']['vars'] = _get_kuryr_vars(cloud, stout)
167 except KeyError:
168 pass # Not a kuryr deployment
169 return inventory
170
171
172 def _get_stack_outputs(cloud_client):
173 """Returns a dictionary with the stack outputs"""
174 cluster_name = os.getenv('OPENSHIFT_CLUSTER', 'openshift-cluster')
175
176 stack = cloud_client.get_stack(cluster_name)
177 if stack is None or stack['stack_status'] not in (
178 'CREATE_COMPLETE', 'UPDATE_COMPLETE'):
179 return None
180
181 data = {}
182 for output in stack['outputs']:
183 data[output['output_key']] = output['output_value']
184 return data
185
186
187 def _get_kuryr_vars(cloud_client, data):
188 """Returns a dictionary of Kuryr variables resulting of heat stacking"""
189 settings = {}
190 settings['kuryr_openstack_pod_subnet_id'] = data['pod_subnet']
191 settings['kuryr_openstack_worker_nodes_subnet_id'] = data['vm_subnet']
192 settings['kuryr_openstack_service_subnet_id'] = data['service_subnet']
193 settings['kuryr_openstack_pod_sg_id'] = data['pod_access_sg_id']
194 settings['kuryr_openstack_pod_project_id'] = (
195 cloud_client.current_project_id)
196
197 settings['kuryr_openstack_auth_url'] = cloud_client.auth['auth_url']
198 settings['kuryr_openstack_username'] = cloud_client.auth['username']
199 settings['kuryr_openstack_password'] = cloud_client.auth['password']
200 if 'user_domain_id' in cloud_client.auth:
201 settings['kuryr_openstack_user_domain_name'] = (
202 cloud_client.auth['user_domain_id'])
203 else:
204 settings['kuryr_openstack_user_domain_name'] = (
205 cloud_client.auth['user_domain_name'])
206 # FIXME(apuimedo): consolidate kuryr controller credentials into the same
207 # vars the openstack playbook uses.
208 settings['kuryr_openstack_project_id'] = cloud_client.current_project_id
209 if 'project_domain_id' in cloud_client.auth:
210 settings['kuryr_openstack_project_domain_name'] = (
211 cloud_client.auth['project_domain_id'])
212 else:
213 settings['kuryr_openstack_project_domain_name'] = (
214 cloud_client.auth['project_domain_name'])
215 return settings
216
217
218 if __name__ == '__main__':
219 print(json.dumps(build_inventory(), indent=4, sort_keys=True))
220
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/playbooks/openstack/inventory.py b/playbooks/openstack/inventory.py
--- a/playbooks/openstack/inventory.py
+++ b/playbooks/openstack/inventory.py
@@ -13,6 +13,7 @@
import json
import os
+from keystoneauth1.exceptions.catalog import EndpointNotFound
import shade
@@ -141,9 +142,14 @@
inventory['_meta'] = {'hostvars': {}}
+ # Some clouds don't have Cinder. That's okay:
+ try:
+ volumes = cloud.list_volumes()
+ except EndpointNotFound:
+ volumes = []
+
# cinder volumes used for docker storage
- docker_storage_mountpoints = get_docker_storage_mountpoints(
- cloud.list_volumes())
+ docker_storage_mountpoints = get_docker_storage_mountpoints(volumes)
for server in cluster_hosts:
inventory['_meta']['hostvars'][server.name] = _get_hostvars(
server,
| {"golden_diff": "diff --git a/playbooks/openstack/inventory.py b/playbooks/openstack/inventory.py\n--- a/playbooks/openstack/inventory.py\n+++ b/playbooks/openstack/inventory.py\n@@ -13,6 +13,7 @@\n import json\n import os\n \n+from keystoneauth1.exceptions.catalog import EndpointNotFound\n import shade\n \n \n@@ -141,9 +142,14 @@\n \n inventory['_meta'] = {'hostvars': {}}\n \n+ # Some clouds don't have Cinder. That's okay:\n+ try:\n+ volumes = cloud.list_volumes()\n+ except EndpointNotFound:\n+ volumes = []\n+\n # cinder volumes used for docker storage\n- docker_storage_mountpoints = get_docker_storage_mountpoints(\n- cloud.list_volumes())\n+ docker_storage_mountpoints = get_docker_storage_mountpoints(volumes)\n for server in cluster_hosts:\n inventory['_meta']['hostvars'][server.name] = _get_hostvars(\n server,\n", "issue": "Any chance of getting this playbook to work without cinder ? \n#### Description\r\n\r\nProvide a brief description of your issue here. For example:\r\n\r\nWe have access to an openstack without cinder. Any chance of getting this playbook to work without cinder ? \r\n\r\n\r\n\r\n##### Version\r\n\r\nPlease put the following version information in the code block\r\nindicated below.\r\n\r\n* Your ansible version per `ansible --version`\r\n\r\nIf you're operating from a **git clone**:\r\n\r\n* The output of `git describe`\r\n\r\n\r\nIf you're running from playbooks installed via RPM or\r\n`atomic-openshift-utils`\r\n\r\n* The output of `rpm -q atomic-openshift-utils openshift-ansible`\r\n\r\nPlace the output between the code block below:\r\n\r\n```\r\n$ ansible --version\r\nansible 2.5.0\r\n config file = /home/arthur/local/openshift/ansible.cfg\r\n configured module search path = [u'/home/arthur/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/arthur/.virtualenvs/ansible/local/lib/python2.7/site-packages/ansible\r\n executable location = /home/arthur/.virtualenvs/ansible/bin/ansible\r\n python version = 2.7.14 (default, Sep 23 2017, 22:06:14) [GCC 7.2.0]\r\n\r\n$ git describe\r\nopenshift-ansible-3.10.0-0.15.0-9-gf28aba492\r\n```\r\n\r\n##### Steps To Reproduce\r\n\r\nWhen I saw the following comment in all.yml \r\n```\r\n# If you want to use the VM storage instead of Cinder volumes, set this to `true`.\r\n```\r\nI thought this might be possible but I get the following error when executing the ansible playbook : \r\n\r\n```\r\n$ ansible-playbook [snip]\r\n[snip]\r\n in endpoint_data_for raise\r\nexceptions.EndpointNotFound(msg) keystoneauth1.exceptions.catalog.EndpointNotFound: public\r\nendpoint for volumev2 service in RegionOne region not found\r\n[snip]\r\n$ openstack volume list\r\npublic endpoint for volumev2 service in RegionOne region not found\r\n\r\n```\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nThis is an Ansible dynamic inventory for OpenStack.\n\nIt requires your OpenStack credentials to be set in clouds.yaml or your shell\nenvironment.\n\n\"\"\"\n\nfrom __future__ import print_function\n\nfrom collections import Mapping\nimport json\nimport os\n\nimport shade\n\n\ndef base_openshift_inventory(cluster_hosts):\n '''Set the base openshift inventory.'''\n inventory = {}\n\n masters = [server.name for server in cluster_hosts\n if server.metadata['host-type'] == 'master']\n\n etcd = [server.name for server in cluster_hosts\n if server.metadata['host-type'] == 'etcd']\n if not etcd:\n etcd = masters\n\n infra_hosts = [server.name for server in cluster_hosts\n if 
server.metadata['host-type'] == 'node' and\n server.metadata['sub-host-type'] == 'infra']\n\n app = [server.name for server in cluster_hosts\n if server.metadata['host-type'] == 'node' and\n server.metadata['sub-host-type'] == 'app']\n\n cns = [server.name for server in cluster_hosts\n if server.metadata['host-type'] == 'cns']\n\n nodes = list(set(masters + infra_hosts + app + cns))\n\n dns = [server.name for server in cluster_hosts\n if server.metadata['host-type'] == 'dns']\n\n load_balancers = [server.name for server in cluster_hosts\n if server.metadata['host-type'] == 'lb']\n\n osev3 = list(set(nodes + etcd + load_balancers))\n\n inventory['cluster_hosts'] = {'hosts': [s.name for s in cluster_hosts]}\n inventory['OSEv3'] = {'hosts': osev3}\n inventory['masters'] = {'hosts': masters}\n inventory['etcd'] = {'hosts': etcd}\n inventory['nodes'] = {'hosts': nodes}\n inventory['infra_hosts'] = {'hosts': infra_hosts}\n inventory['app'] = {'hosts': app}\n inventory['glusterfs'] = {'hosts': cns}\n inventory['dns'] = {'hosts': dns}\n inventory['lb'] = {'hosts': load_balancers}\n inventory['localhost'] = {'ansible_connection': 'local'}\n\n return inventory\n\n\ndef get_docker_storage_mountpoints(volumes):\n '''Check volumes to see if they're being used for docker storage'''\n docker_storage_mountpoints = {}\n for volume in volumes:\n if volume.metadata.get('purpose') == \"openshift_docker_storage\":\n for attachment in volume.attachments:\n if attachment.server_id in docker_storage_mountpoints:\n docker_storage_mountpoints[attachment.server_id].append(attachment.device)\n else:\n docker_storage_mountpoints[attachment.server_id] = [attachment.device]\n return docker_storage_mountpoints\n\n\ndef _get_hostvars(server, docker_storage_mountpoints):\n ssh_ip_address = server.public_v4 or server.private_v4\n hostvars = {\n 'ansible_host': ssh_ip_address\n }\n\n public_v4 = server.public_v4 or server.private_v4\n if public_v4:\n hostvars['public_v4'] = server.public_v4\n hostvars['openshift_public_ip'] = server.public_v4\n # TODO(shadower): what about multiple networks?\n if server.private_v4:\n hostvars['private_v4'] = server.private_v4\n hostvars['openshift_ip'] = server.private_v4\n\n # NOTE(shadower): Yes, we set both hostname and IP to the private\n # IP address for each node. 
OpenStack doesn't resolve nodes by\n # name at all, so using a hostname here would require an internal\n # DNS which would complicate the setup and potentially introduce\n # performance issues.\n hostvars['openshift_hostname'] = server.metadata.get(\n 'openshift_hostname', server.private_v4)\n hostvars['openshift_public_hostname'] = server.name\n\n if server.metadata['host-type'] == 'cns':\n hostvars['glusterfs_devices'] = ['/dev/nvme0n1']\n\n node_labels = server.metadata.get('node_labels')\n # NOTE(shadower): the node_labels value must be a dict not string\n if not isinstance(node_labels, Mapping):\n node_labels = json.loads(node_labels)\n\n if node_labels:\n hostvars['openshift_node_labels'] = node_labels\n\n # check for attached docker storage volumes\n if 'os-extended-volumes:volumes_attached' in server:\n if server.id in docker_storage_mountpoints:\n hostvars['docker_storage_mountpoints'] = ' '.join(\n docker_storage_mountpoints[server.id])\n return hostvars\n\n\ndef build_inventory():\n '''Build the dynamic inventory.'''\n cloud = shade.openstack_cloud()\n\n # TODO(shadower): filter the servers based on the `OPENSHIFT_CLUSTER`\n # environment variable.\n cluster_hosts = [\n server for server in cloud.list_servers()\n if 'metadata' in server and 'clusterid' in server.metadata]\n\n inventory = base_openshift_inventory(cluster_hosts)\n\n for server in cluster_hosts:\n if 'group' in server.metadata:\n group = server.metadata.get('group')\n if group not in inventory:\n inventory[group] = {'hosts': []}\n inventory[group]['hosts'].append(server.name)\n\n inventory['_meta'] = {'hostvars': {}}\n\n # cinder volumes used for docker storage\n docker_storage_mountpoints = get_docker_storage_mountpoints(\n cloud.list_volumes())\n for server in cluster_hosts:\n inventory['_meta']['hostvars'][server.name] = _get_hostvars(\n server,\n docker_storage_mountpoints)\n\n stout = _get_stack_outputs(cloud)\n if stout is not None:\n try:\n inventory['localhost'].update({\n 'openshift_openstack_api_lb_provider':\n stout['api_lb_provider'],\n 'openshift_openstack_api_lb_port_id':\n stout['api_lb_vip_port_id'],\n 'openshift_openstack_api_lb_sg_id':\n stout['api_lb_sg_id']})\n except KeyError:\n pass # Not an API load balanced deployment\n\n try:\n inventory['OSEv3']['vars'] = _get_kuryr_vars(cloud, stout)\n except KeyError:\n pass # Not a kuryr deployment\n return inventory\n\n\ndef _get_stack_outputs(cloud_client):\n \"\"\"Returns a dictionary with the stack outputs\"\"\"\n cluster_name = os.getenv('OPENSHIFT_CLUSTER', 'openshift-cluster')\n\n stack = cloud_client.get_stack(cluster_name)\n if stack is None or stack['stack_status'] not in (\n 'CREATE_COMPLETE', 'UPDATE_COMPLETE'):\n return None\n\n data = {}\n for output in stack['outputs']:\n data[output['output_key']] = output['output_value']\n return data\n\n\ndef _get_kuryr_vars(cloud_client, data):\n \"\"\"Returns a dictionary of Kuryr variables resulting of heat stacking\"\"\"\n settings = {}\n settings['kuryr_openstack_pod_subnet_id'] = data['pod_subnet']\n settings['kuryr_openstack_worker_nodes_subnet_id'] = data['vm_subnet']\n settings['kuryr_openstack_service_subnet_id'] = data['service_subnet']\n settings['kuryr_openstack_pod_sg_id'] = data['pod_access_sg_id']\n settings['kuryr_openstack_pod_project_id'] = (\n cloud_client.current_project_id)\n\n settings['kuryr_openstack_auth_url'] = cloud_client.auth['auth_url']\n settings['kuryr_openstack_username'] = cloud_client.auth['username']\n settings['kuryr_openstack_password'] = 
cloud_client.auth['password']\n if 'user_domain_id' in cloud_client.auth:\n settings['kuryr_openstack_user_domain_name'] = (\n cloud_client.auth['user_domain_id'])\n else:\n settings['kuryr_openstack_user_domain_name'] = (\n cloud_client.auth['user_domain_name'])\n # FIXME(apuimedo): consolidate kuryr controller credentials into the same\n # vars the openstack playbook uses.\n settings['kuryr_openstack_project_id'] = cloud_client.current_project_id\n if 'project_domain_id' in cloud_client.auth:\n settings['kuryr_openstack_project_domain_name'] = (\n cloud_client.auth['project_domain_id'])\n else:\n settings['kuryr_openstack_project_domain_name'] = (\n cloud_client.auth['project_domain_name'])\n return settings\n\n\nif __name__ == '__main__':\n print(json.dumps(build_inventory(), indent=4, sort_keys=True))\n", "path": "playbooks/openstack/inventory.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nThis is an Ansible dynamic inventory for OpenStack.\n\nIt requires your OpenStack credentials to be set in clouds.yaml or your shell\nenvironment.\n\n\"\"\"\n\nfrom __future__ import print_function\n\nfrom collections import Mapping\nimport json\nimport os\n\nfrom keystoneauth1.exceptions.catalog import EndpointNotFound\nimport shade\n\n\ndef base_openshift_inventory(cluster_hosts):\n '''Set the base openshift inventory.'''\n inventory = {}\n\n masters = [server.name for server in cluster_hosts\n if server.metadata['host-type'] == 'master']\n\n etcd = [server.name for server in cluster_hosts\n if server.metadata['host-type'] == 'etcd']\n if not etcd:\n etcd = masters\n\n infra_hosts = [server.name for server in cluster_hosts\n if server.metadata['host-type'] == 'node' and\n server.metadata['sub-host-type'] == 'infra']\n\n app = [server.name for server in cluster_hosts\n if server.metadata['host-type'] == 'node' and\n server.metadata['sub-host-type'] == 'app']\n\n cns = [server.name for server in cluster_hosts\n if server.metadata['host-type'] == 'cns']\n\n nodes = list(set(masters + infra_hosts + app + cns))\n\n dns = [server.name for server in cluster_hosts\n if server.metadata['host-type'] == 'dns']\n\n load_balancers = [server.name for server in cluster_hosts\n if server.metadata['host-type'] == 'lb']\n\n osev3 = list(set(nodes + etcd + load_balancers))\n\n inventory['cluster_hosts'] = {'hosts': [s.name for s in cluster_hosts]}\n inventory['OSEv3'] = {'hosts': osev3}\n inventory['masters'] = {'hosts': masters}\n inventory['etcd'] = {'hosts': etcd}\n inventory['nodes'] = {'hosts': nodes}\n inventory['infra_hosts'] = {'hosts': infra_hosts}\n inventory['app'] = {'hosts': app}\n inventory['glusterfs'] = {'hosts': cns}\n inventory['dns'] = {'hosts': dns}\n inventory['lb'] = {'hosts': load_balancers}\n inventory['localhost'] = {'ansible_connection': 'local'}\n\n return inventory\n\n\ndef get_docker_storage_mountpoints(volumes):\n '''Check volumes to see if they're being used for docker storage'''\n docker_storage_mountpoints = {}\n for volume in volumes:\n if volume.metadata.get('purpose') == \"openshift_docker_storage\":\n for attachment in volume.attachments:\n if attachment.server_id in docker_storage_mountpoints:\n docker_storage_mountpoints[attachment.server_id].append(attachment.device)\n else:\n docker_storage_mountpoints[attachment.server_id] = [attachment.device]\n return docker_storage_mountpoints\n\n\ndef _get_hostvars(server, docker_storage_mountpoints):\n ssh_ip_address = server.public_v4 or server.private_v4\n hostvars = {\n 'ansible_host': ssh_ip_address\n }\n\n 
public_v4 = server.public_v4 or server.private_v4\n if public_v4:\n hostvars['public_v4'] = server.public_v4\n hostvars['openshift_public_ip'] = server.public_v4\n # TODO(shadower): what about multiple networks?\n if server.private_v4:\n hostvars['private_v4'] = server.private_v4\n hostvars['openshift_ip'] = server.private_v4\n\n # NOTE(shadower): Yes, we set both hostname and IP to the private\n # IP address for each node. OpenStack doesn't resolve nodes by\n # name at all, so using a hostname here would require an internal\n # DNS which would complicate the setup and potentially introduce\n # performance issues.\n hostvars['openshift_hostname'] = server.metadata.get(\n 'openshift_hostname', server.private_v4)\n hostvars['openshift_public_hostname'] = server.name\n\n if server.metadata['host-type'] == 'cns':\n hostvars['glusterfs_devices'] = ['/dev/nvme0n1']\n\n node_labels = server.metadata.get('node_labels')\n # NOTE(shadower): the node_labels value must be a dict not string\n if not isinstance(node_labels, Mapping):\n node_labels = json.loads(node_labels)\n\n if node_labels:\n hostvars['openshift_node_labels'] = node_labels\n\n # check for attached docker storage volumes\n if 'os-extended-volumes:volumes_attached' in server:\n if server.id in docker_storage_mountpoints:\n hostvars['docker_storage_mountpoints'] = ' '.join(\n docker_storage_mountpoints[server.id])\n return hostvars\n\n\ndef build_inventory():\n '''Build the dynamic inventory.'''\n cloud = shade.openstack_cloud()\n\n # TODO(shadower): filter the servers based on the `OPENSHIFT_CLUSTER`\n # environment variable.\n cluster_hosts = [\n server for server in cloud.list_servers()\n if 'metadata' in server and 'clusterid' in server.metadata]\n\n inventory = base_openshift_inventory(cluster_hosts)\n\n for server in cluster_hosts:\n if 'group' in server.metadata:\n group = server.metadata.get('group')\n if group not in inventory:\n inventory[group] = {'hosts': []}\n inventory[group]['hosts'].append(server.name)\n\n inventory['_meta'] = {'hostvars': {}}\n\n # Some clouds don't have Cinder. 
That's okay:\n try:\n volumes = cloud.list_volumes()\n except EndpointNotFound:\n volumes = []\n\n # cinder volumes used for docker storage\n docker_storage_mountpoints = get_docker_storage_mountpoints(volumes)\n for server in cluster_hosts:\n inventory['_meta']['hostvars'][server.name] = _get_hostvars(\n server,\n docker_storage_mountpoints)\n\n stout = _get_stack_outputs(cloud)\n if stout is not None:\n try:\n inventory['localhost'].update({\n 'openshift_openstack_api_lb_provider':\n stout['api_lb_provider'],\n 'openshift_openstack_api_lb_port_id':\n stout['api_lb_vip_port_id'],\n 'openshift_openstack_api_lb_sg_id':\n stout['api_lb_sg_id']})\n except KeyError:\n pass # Not an API load balanced deployment\n\n try:\n inventory['OSEv3']['vars'] = _get_kuryr_vars(cloud, stout)\n except KeyError:\n pass # Not a kuryr deployment\n return inventory\n\n\ndef _get_stack_outputs(cloud_client):\n \"\"\"Returns a dictionary with the stack outputs\"\"\"\n cluster_name = os.getenv('OPENSHIFT_CLUSTER', 'openshift-cluster')\n\n stack = cloud_client.get_stack(cluster_name)\n if stack is None or stack['stack_status'] not in (\n 'CREATE_COMPLETE', 'UPDATE_COMPLETE'):\n return None\n\n data = {}\n for output in stack['outputs']:\n data[output['output_key']] = output['output_value']\n return data\n\n\ndef _get_kuryr_vars(cloud_client, data):\n \"\"\"Returns a dictionary of Kuryr variables resulting of heat stacking\"\"\"\n settings = {}\n settings['kuryr_openstack_pod_subnet_id'] = data['pod_subnet']\n settings['kuryr_openstack_worker_nodes_subnet_id'] = data['vm_subnet']\n settings['kuryr_openstack_service_subnet_id'] = data['service_subnet']\n settings['kuryr_openstack_pod_sg_id'] = data['pod_access_sg_id']\n settings['kuryr_openstack_pod_project_id'] = (\n cloud_client.current_project_id)\n\n settings['kuryr_openstack_auth_url'] = cloud_client.auth['auth_url']\n settings['kuryr_openstack_username'] = cloud_client.auth['username']\n settings['kuryr_openstack_password'] = cloud_client.auth['password']\n if 'user_domain_id' in cloud_client.auth:\n settings['kuryr_openstack_user_domain_name'] = (\n cloud_client.auth['user_domain_id'])\n else:\n settings['kuryr_openstack_user_domain_name'] = (\n cloud_client.auth['user_domain_name'])\n # FIXME(apuimedo): consolidate kuryr controller credentials into the same\n # vars the openstack playbook uses.\n settings['kuryr_openstack_project_id'] = cloud_client.current_project_id\n if 'project_domain_id' in cloud_client.auth:\n settings['kuryr_openstack_project_domain_name'] = (\n cloud_client.auth['project_domain_id'])\n else:\n settings['kuryr_openstack_project_domain_name'] = (\n cloud_client.auth['project_domain_name'])\n return settings\n\n\nif __name__ == '__main__':\n print(json.dumps(build_inventory(), indent=4, sort_keys=True))\n", "path": "playbooks/openstack/inventory.py"}]} | 3,154 | 216 |
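A minimal, self-contained sketch of the guard used in the diff above, assuming `shade` and `keystoneauth1` are installed and OpenStack credentials are available via `clouds.yaml` or the shell environment:

```python
# Sketch only: treat a cloud with no Cinder (volumev2) endpoint as having
# no volumes, instead of letting the dynamic inventory crash.
from keystoneauth1.exceptions.catalog import EndpointNotFound
import shade


def safe_list_volumes(cloud):
    """Return Cinder volumes, or an empty list when the service is absent."""
    try:
        return cloud.list_volumes()
    except EndpointNotFound:
        return []


if __name__ == '__main__':
    cloud = shade.openstack_cloud()
    print(len(safe_list_volumes(cloud)))
```

Falling back to an empty list keeps the rest of the inventory unchanged, since the volumes are only used to locate docker-storage mountpoints.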
gh_patches_debug_12407 | rasdani/github-patches | git_diff | zestedesavoir__zds-site-6079 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
It is possible to choose an invalid username
**Bug description**
It is possible to choose a somewhat far-fetched username such as `https://viki53.eu`, which is invalid in some cases: the function `reverse_lazy('tutorial:find-tutorial', args=(profile.user.username,))`, which resolves the URL `'tutoriels/voir/(?P<username>[^/]+)/$'`, raises a `NoReverseMatch` error.
**How to reproduce?**
The list of steps to reproduce the bug:
1. Rename yourself to `https://viki53.eu`
2. Go to your profile and observe the internal error
**Expected behavior**
No internal error.
**Possible solution**
A small check could be added when the username is changed, to reject invalid usernames:
```py
try:
reverse_lazy('tutorial:find-tutorial', args=(profile.user.username,))
except NoReverseMatch:
    # Reject the username
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zds/member/validators.py`
Content:
```
1 from django.contrib.auth.models import User
2 from django.core.exceptions import ValidationError
3 from django.core.validators import EmailValidator
4 from django.utils.encoding import force_str
5 from django.utils.translation import gettext_lazy as _
6
7 from zds.utils.misc import contains_utf8mb4
8 from zds.member.models import BannedEmailProvider, Profile
9
10
11 def validate_not_empty(value):
12 """
13 Fields cannot be empty or only contain spaces.
14
15 :param value: value to validate (str or None)
16 :return:
17 """
18 if value is None or not value.strip():
19 raise ValidationError(_("Le champs ne peut être vide"))
20
21
22 class ZdSEmailValidator(EmailValidator):
23 """
24 Based on https://docs.djangoproject.com/en/1.8/_modules/django/core/validators/#EmailValidator
25 Changed :
26 - check if provider is not if blacklisted
27 - check if email is not used by another user
28 - remove whitelist check
29 - add custom errors and translate them into French
30 """
31
32 message = _("Utilisez une adresse de courriel valide.")
33
34 def __call__(self, value, check_username_available=True):
35 value = force_str(value)
36
37 if not value or "@" not in value:
38 raise ValidationError(self.message, code=self.code)
39
40 user_part, domain_part = value.rsplit("@", 1)
41
42 if not self.user_regex.match(user_part) or contains_utf8mb4(user_part):
43 raise ValidationError(self.message, code=self.code)
44
45 # check if provider is blacklisted
46 blacklist = BannedEmailProvider.objects.values_list("provider", flat=True)
47 for provider in blacklist:
48 if f"@{provider}" in value.lower():
49 raise ValidationError(_("Ce fournisseur ne peut pas être utilisé."), code=self.code)
50
51 # check if email is used by another user
52 user_count = User.objects.filter(email=value).count()
53 if check_username_available and user_count > 0:
54 raise ValidationError(_("Cette adresse courriel est déjà utilisée"), code=self.code)
55 # check if email exists in database
56 elif not check_username_available and user_count == 0:
57 raise ValidationError(_("Cette adresse courriel n'existe pas"), code=self.code)
58
59 if domain_part and not self.validate_domain_part(domain_part):
60 # Try for possible IDN domain-part
61 try:
62 domain_part = domain_part.encode("idna").decode("ascii")
63 if self.validate_domain_part(domain_part):
64 return
65 except UnicodeError:
66 pass
67 raise ValidationError(self.message, code=self.code)
68
69
70 validate_zds_email = ZdSEmailValidator()
71
72
73 def validate_zds_username(value, check_username_available=True):
74 """
75 Check if username is used by another user
76
77 :param value: value to validate (str or None)
78 :return:
79 """
80 msg = None
81 user_count = User.objects.filter(username=value).count()
82 skeleton_user_count = Profile.objects.filter(username_skeleton=Profile.find_username_skeleton(value)).count()
83 if "," in value:
84 msg = _("Le nom d'utilisateur ne peut contenir de virgules")
85 elif contains_utf8mb4(value):
86 msg = _("Le nom d'utilisateur ne peut pas contenir des caractères utf8mb4")
87 elif check_username_available and user_count > 0:
88 msg = _("Ce nom d'utilisateur est déjà utilisé")
89 elif check_username_available and skeleton_user_count > 0:
90 msg = _("Un nom d'utilisateur visuellement proche du votre existe déjà")
91 elif not check_username_available and user_count == 0:
92 msg = _("Ce nom d'utilisateur n'existe pas")
93 if msg is not None:
94 raise ValidationError(msg)
95
96
97 def validate_raw_zds_username(data):
98 """
99 Check if raw username hasn't space on left or right
100 """
101 msg = None
102 username = data.get("username", None)
103 if username is None:
104 msg = _("Le nom d'utilisateur n'est pas fourni")
105 elif username != username.strip():
106 msg = _("Le nom d'utilisateur ne peut commencer ou finir par des espaces")
107
108 if msg is not None:
109 raise ValidationError(msg)
110
111
112 def validate_zds_password(value):
113 """
114
115 :param value:
116 :return:
117 """
118 if contains_utf8mb4(value):
119 raise ValidationError(_("Le mot de passe ne peut pas contenir des caractères utf8mb4"))
120
121
122 def validate_passwords(
123 cleaned_data, password_label="password", password_confirm_label="password_confirm", username=None
124 ):
125 """
126 Chek if cleaned_data['password'] == cleaned_data['password_confirm'] and password is not username.
127 :param cleaned_data:
128 :param password_label:
129 :param password_confirm_label:
130 :return:
131 """
132
133 password = cleaned_data.get(password_label)
134 password_confirm = cleaned_data.get(password_confirm_label)
135 msg = None
136
137 if username is None:
138 username = cleaned_data.get("username")
139
140 if not password_confirm == password:
141 msg = _("Les mots de passe sont différents")
142
143 if password_label in cleaned_data:
144 del cleaned_data[password_label]
145
146 if password_confirm_label in cleaned_data:
147 del cleaned_data[password_confirm_label]
148
149 if username is not None:
150 # Check that password != username
151 if password == username:
152 msg = _("Le mot de passe doit être différent du pseudo")
153 if password_label in cleaned_data:
154 del cleaned_data[password_label]
155 if password_confirm_label in cleaned_data:
156 del cleaned_data[password_confirm_label]
157
158 if msg is not None:
159 raise ValidationError(msg)
160
161 return cleaned_data
162
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zds/member/validators.py b/zds/member/validators.py
--- a/zds/member/validators.py
+++ b/zds/member/validators.py
@@ -82,6 +82,8 @@
skeleton_user_count = Profile.objects.filter(username_skeleton=Profile.find_username_skeleton(value)).count()
if "," in value:
msg = _("Le nom d'utilisateur ne peut contenir de virgules")
+ if "/" in value:
+ msg = _("Le nom d'utilisateur ne peut contenir de barres obliques")
elif contains_utf8mb4(value):
msg = _("Le nom d'utilisateur ne peut pas contenir des caractères utf8mb4")
elif check_username_available and user_count > 0:
| {"golden_diff": "diff --git a/zds/member/validators.py b/zds/member/validators.py\n--- a/zds/member/validators.py\n+++ b/zds/member/validators.py\n@@ -82,6 +82,8 @@\n skeleton_user_count = Profile.objects.filter(username_skeleton=Profile.find_username_skeleton(value)).count()\n if \",\" in value:\n msg = _(\"Le nom d'utilisateur ne peut contenir de virgules\")\n+ if \"/\" in value:\n+ msg = _(\"Le nom d'utilisateur ne peut contenir de barres obliques\")\n elif contains_utf8mb4(value):\n msg = _(\"Le nom d'utilisateur ne peut pas contenir des caract\u00e8res utf8mb4\")\n elif check_username_available and user_count > 0:\n", "issue": "Il est possible de choisir un pseudo invalide\n**Description du bug**\r\n\r\nIl est possible de choisir un pseudo un peu farfelu comme par exemple `https://viki53.eu` qui est dans certains cas invalide : la fonction `reverse_lazy('tutorial:find-tutorial', args=(profile.user.username,))` qui permet de retrouver l'URL `'tutoriels/voir/(?P<username>[^/]+)/$'` retourne une erreur `NoReverseMatch`.\r\n\r\n**Comment reproduire ?**\r\n\r\nLa liste des \u00e9tapes qui permet de reproduire le bug :\r\n\r\n1. Se renommer en `https://viki53.eu`\r\n2. Aller sur son profil et constater l'erreur interne\r\n\r\n**Comportement attendu**\r\n\r\nAucune erreur interne.\r\n\r\n**Solution possible**\r\n\r\nIl serait possible d'ajouter une petite v\u00e9rification lors du changement de pseudo pour refuser les pseudos invalides : \r\n\r\n```py\r\ntry:\r\n reverse_lazy('tutorial:find-tutorial', args=(profile.user.username,))\r\nexcept NoReverseMatch:\r\n # Refuser le pseudo\r\n```\n", "before_files": [{"content": "from django.contrib.auth.models import User\nfrom django.core.exceptions import ValidationError\nfrom django.core.validators import EmailValidator\nfrom django.utils.encoding import force_str\nfrom django.utils.translation import gettext_lazy as _\n\nfrom zds.utils.misc import contains_utf8mb4\nfrom zds.member.models import BannedEmailProvider, Profile\n\n\ndef validate_not_empty(value):\n \"\"\"\n Fields cannot be empty or only contain spaces.\n\n :param value: value to validate (str or None)\n :return:\n \"\"\"\n if value is None or not value.strip():\n raise ValidationError(_(\"Le champs ne peut \u00eatre vide\"))\n\n\nclass ZdSEmailValidator(EmailValidator):\n \"\"\"\n Based on https://docs.djangoproject.com/en/1.8/_modules/django/core/validators/#EmailValidator\n Changed :\n - check if provider is not if blacklisted\n - check if email is not used by another user\n - remove whitelist check\n - add custom errors and translate them into French\n \"\"\"\n\n message = _(\"Utilisez une adresse de courriel valide.\")\n\n def __call__(self, value, check_username_available=True):\n value = force_str(value)\n\n if not value or \"@\" not in value:\n raise ValidationError(self.message, code=self.code)\n\n user_part, domain_part = value.rsplit(\"@\", 1)\n\n if not self.user_regex.match(user_part) or contains_utf8mb4(user_part):\n raise ValidationError(self.message, code=self.code)\n\n # check if provider is blacklisted\n blacklist = BannedEmailProvider.objects.values_list(\"provider\", flat=True)\n for provider in blacklist:\n if f\"@{provider}\" in value.lower():\n raise ValidationError(_(\"Ce fournisseur ne peut pas \u00eatre utilis\u00e9.\"), code=self.code)\n\n # check if email is used by another user\n user_count = User.objects.filter(email=value).count()\n if check_username_available and user_count > 0:\n raise ValidationError(_(\"Cette adresse courriel est d\u00e9j\u00e0 
utilis\u00e9e\"), code=self.code)\n # check if email exists in database\n elif not check_username_available and user_count == 0:\n raise ValidationError(_(\"Cette adresse courriel n'existe pas\"), code=self.code)\n\n if domain_part and not self.validate_domain_part(domain_part):\n # Try for possible IDN domain-part\n try:\n domain_part = domain_part.encode(\"idna\").decode(\"ascii\")\n if self.validate_domain_part(domain_part):\n return\n except UnicodeError:\n pass\n raise ValidationError(self.message, code=self.code)\n\n\nvalidate_zds_email = ZdSEmailValidator()\n\n\ndef validate_zds_username(value, check_username_available=True):\n \"\"\"\n Check if username is used by another user\n\n :param value: value to validate (str or None)\n :return:\n \"\"\"\n msg = None\n user_count = User.objects.filter(username=value).count()\n skeleton_user_count = Profile.objects.filter(username_skeleton=Profile.find_username_skeleton(value)).count()\n if \",\" in value:\n msg = _(\"Le nom d'utilisateur ne peut contenir de virgules\")\n elif contains_utf8mb4(value):\n msg = _(\"Le nom d'utilisateur ne peut pas contenir des caract\u00e8res utf8mb4\")\n elif check_username_available and user_count > 0:\n msg = _(\"Ce nom d'utilisateur est d\u00e9j\u00e0 utilis\u00e9\")\n elif check_username_available and skeleton_user_count > 0:\n msg = _(\"Un nom d'utilisateur visuellement proche du votre existe d\u00e9j\u00e0\")\n elif not check_username_available and user_count == 0:\n msg = _(\"Ce nom d'utilisateur n'existe pas\")\n if msg is not None:\n raise ValidationError(msg)\n\n\ndef validate_raw_zds_username(data):\n \"\"\"\n Check if raw username hasn't space on left or right\n \"\"\"\n msg = None\n username = data.get(\"username\", None)\n if username is None:\n msg = _(\"Le nom d'utilisateur n'est pas fourni\")\n elif username != username.strip():\n msg = _(\"Le nom d'utilisateur ne peut commencer ou finir par des espaces\")\n\n if msg is not None:\n raise ValidationError(msg)\n\n\ndef validate_zds_password(value):\n \"\"\"\n\n :param value:\n :return:\n \"\"\"\n if contains_utf8mb4(value):\n raise ValidationError(_(\"Le mot de passe ne peut pas contenir des caract\u00e8res utf8mb4\"))\n\n\ndef validate_passwords(\n cleaned_data, password_label=\"password\", password_confirm_label=\"password_confirm\", username=None\n):\n \"\"\"\n Chek if cleaned_data['password'] == cleaned_data['password_confirm'] and password is not username.\n :param cleaned_data:\n :param password_label:\n :param password_confirm_label:\n :return:\n \"\"\"\n\n password = cleaned_data.get(password_label)\n password_confirm = cleaned_data.get(password_confirm_label)\n msg = None\n\n if username is None:\n username = cleaned_data.get(\"username\")\n\n if not password_confirm == password:\n msg = _(\"Les mots de passe sont diff\u00e9rents\")\n\n if password_label in cleaned_data:\n del cleaned_data[password_label]\n\n if password_confirm_label in cleaned_data:\n del cleaned_data[password_confirm_label]\n\n if username is not None:\n # Check that password != username\n if password == username:\n msg = _(\"Le mot de passe doit \u00eatre diff\u00e9rent du pseudo\")\n if password_label in cleaned_data:\n del cleaned_data[password_label]\n if password_confirm_label in cleaned_data:\n del cleaned_data[password_confirm_label]\n\n if msg is not None:\n raise ValidationError(msg)\n\n return cleaned_data\n", "path": "zds/member/validators.py"}], "after_files": [{"content": "from django.contrib.auth.models import User\nfrom django.core.exceptions import 
ValidationError\nfrom django.core.validators import EmailValidator\nfrom django.utils.encoding import force_str\nfrom django.utils.translation import gettext_lazy as _\n\nfrom zds.utils.misc import contains_utf8mb4\nfrom zds.member.models import BannedEmailProvider, Profile\n\n\ndef validate_not_empty(value):\n \"\"\"\n Fields cannot be empty or only contain spaces.\n\n :param value: value to validate (str or None)\n :return:\n \"\"\"\n if value is None or not value.strip():\n raise ValidationError(_(\"Le champs ne peut \u00eatre vide\"))\n\n\nclass ZdSEmailValidator(EmailValidator):\n \"\"\"\n Based on https://docs.djangoproject.com/en/1.8/_modules/django/core/validators/#EmailValidator\n Changed :\n - check if provider is not if blacklisted\n - check if email is not used by another user\n - remove whitelist check\n - add custom errors and translate them into French\n \"\"\"\n\n message = _(\"Utilisez une adresse de courriel valide.\")\n\n def __call__(self, value, check_username_available=True):\n value = force_str(value)\n\n if not value or \"@\" not in value:\n raise ValidationError(self.message, code=self.code)\n\n user_part, domain_part = value.rsplit(\"@\", 1)\n\n if not self.user_regex.match(user_part) or contains_utf8mb4(user_part):\n raise ValidationError(self.message, code=self.code)\n\n # check if provider is blacklisted\n blacklist = BannedEmailProvider.objects.values_list(\"provider\", flat=True)\n for provider in blacklist:\n if f\"@{provider}\" in value.lower():\n raise ValidationError(_(\"Ce fournisseur ne peut pas \u00eatre utilis\u00e9.\"), code=self.code)\n\n # check if email is used by another user\n user_count = User.objects.filter(email=value).count()\n if check_username_available and user_count > 0:\n raise ValidationError(_(\"Cette adresse courriel est d\u00e9j\u00e0 utilis\u00e9e\"), code=self.code)\n # check if email exists in database\n elif not check_username_available and user_count == 0:\n raise ValidationError(_(\"Cette adresse courriel n'existe pas\"), code=self.code)\n\n if domain_part and not self.validate_domain_part(domain_part):\n # Try for possible IDN domain-part\n try:\n domain_part = domain_part.encode(\"idna\").decode(\"ascii\")\n if self.validate_domain_part(domain_part):\n return\n except UnicodeError:\n pass\n raise ValidationError(self.message, code=self.code)\n\n\nvalidate_zds_email = ZdSEmailValidator()\n\n\ndef validate_zds_username(value, check_username_available=True):\n \"\"\"\n Check if username is used by another user\n\n :param value: value to validate (str or None)\n :return:\n \"\"\"\n msg = None\n user_count = User.objects.filter(username=value).count()\n skeleton_user_count = Profile.objects.filter(username_skeleton=Profile.find_username_skeleton(value)).count()\n if \",\" in value:\n msg = _(\"Le nom d'utilisateur ne peut contenir de virgules\")\n if \"/\" in value:\n msg = _(\"Le nom d'utilisateur ne peut contenir de barres obliques\")\n elif contains_utf8mb4(value):\n msg = _(\"Le nom d'utilisateur ne peut pas contenir des caract\u00e8res utf8mb4\")\n elif check_username_available and user_count > 0:\n msg = _(\"Ce nom d'utilisateur est d\u00e9j\u00e0 utilis\u00e9\")\n elif check_username_available and skeleton_user_count > 0:\n msg = _(\"Un nom d'utilisateur visuellement proche du votre existe d\u00e9j\u00e0\")\n elif not check_username_available and user_count == 0:\n msg = _(\"Ce nom d'utilisateur n'existe pas\")\n if msg is not None:\n raise ValidationError(msg)\n\n\ndef validate_raw_zds_username(data):\n \"\"\"\n Check if 
raw username hasn't space on left or right\n \"\"\"\n msg = None\n username = data.get(\"username\", None)\n if username is None:\n msg = _(\"Le nom d'utilisateur n'est pas fourni\")\n elif username != username.strip():\n msg = _(\"Le nom d'utilisateur ne peut commencer ou finir par des espaces\")\n\n if msg is not None:\n raise ValidationError(msg)\n\n\ndef validate_zds_password(value):\n \"\"\"\n\n :param value:\n :return:\n \"\"\"\n if contains_utf8mb4(value):\n raise ValidationError(_(\"Le mot de passe ne peut pas contenir des caract\u00e8res utf8mb4\"))\n\n\ndef validate_passwords(\n cleaned_data, password_label=\"password\", password_confirm_label=\"password_confirm\", username=None\n):\n \"\"\"\n Chek if cleaned_data['password'] == cleaned_data['password_confirm'] and password is not username.\n :param cleaned_data:\n :param password_label:\n :param password_confirm_label:\n :return:\n \"\"\"\n\n password = cleaned_data.get(password_label)\n password_confirm = cleaned_data.get(password_confirm_label)\n msg = None\n\n if username is None:\n username = cleaned_data.get(\"username\")\n\n if not password_confirm == password:\n msg = _(\"Les mots de passe sont diff\u00e9rents\")\n\n if password_label in cleaned_data:\n del cleaned_data[password_label]\n\n if password_confirm_label in cleaned_data:\n del cleaned_data[password_confirm_label]\n\n if username is not None:\n # Check that password != username\n if password == username:\n msg = _(\"Le mot de passe doit \u00eatre diff\u00e9rent du pseudo\")\n if password_label in cleaned_data:\n del cleaned_data[password_label]\n if password_confirm_label in cleaned_data:\n del cleaned_data[password_confirm_label]\n\n if msg is not None:\n raise ValidationError(msg)\n\n return cleaned_data\n", "path": "zds/member/validators.py"}]} | 2,076 | 161 |
gh_patches_debug_30552 | rasdani/github-patches | git_diff | CTFd__CTFd-1330 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Email cannot be sent if CTF name has an accent
<!--
If this is a bug report please fill out the template below.
If this is a feature request please describe the behavior that you'd like to see.
-->
**Environment**:
- CTFd Version/Commit: 2.3.2
- Operating System: Debian 10.3, Python 3.7.3
- Web Browser and Version: Firefox Nightly, 76.0a1 (2020-04-06)
**What happened?**
After a fresh install of CTFd we set the competition name to "Inté CTF". We configure the email server to use our Postfix relay, and test it with the "forgotten password" feature.
The email never arrives and is bounced by relay servers because the `From` header is invalid.
The actual issue is that the `From` header has the form "CTF name <email address>" (see [here](https://github.com/CTFd/CTFd/blob/master/CTFd/utils/email/smtp.py#L25)). Once encoded by `msg.as_string()` ([there](https://github.com/CTFd/CTFd/blob/master/CTFd/utils/email/smtp.py#L55)) the whole value is turned into a single UTF-8 encoded-word looking like this: `=?utf-8?q?Int=C3=A9_CTF_=3Cctf-noreply=40example=2Ecom=3E?=`, which is invalid per RFC 2047 section 5 ("An 'encoded-word' MUST NOT appear in any portion of an 'addr-spec'"). A correct form would be `=?utf-8?q?Int=C3=A9?= CTF <[email protected]>`.
I have a patch ready but it uses `from email.message import EmailMessage`, which is only available in Python 3.6+. I'll be happy to open a PR if you confirm that support for lower Python versions can be dropped ^^
<details>
<summary>Patch for `CTFd/utils/email/smtp.py`</summary>
```diff
-from email.mime.text import MIMEText
+from email.message import EmailMessage
# around line 50
- msg = MIMEText(text)
+ msg = EmailMessage()
+ msg.set_content(text)
msg["Subject"] = subject
msg["From"] = mailfrom_addr
msg["To"] = addr
- smtp.sendmail(msg["From"], [msg["To"]], msg.as_string())
+ smtp.send_message(msg)
```
</details>
**How to reproduce your issue**
- setup a new instance of CTFd
- configure the competition name with non-ascii characters
- configure the email server to use a "real" mail server (not Mailgun)
- try to send an email (with the "forgotten password" for example)
- see the mail server logs
<details>
<summary>
You can see the difference between both methods (`MIMEText` and `EmailMessage`) with the following snippet
</summary>
```python
from email.mime.text import MIMEText
msg = MIMEText("This is a message with accents éèçà")
msg["From"] = "René Côti <[email protected]>"
msg["To"] = "[email protected]"
msg.as_string()
from email.message import EmailMessage
msg = EmailMessage()
msg.set_content("This is a message with accents éèçà")
msg["From"] = "René Côti <[email protected]>"
msg["To"] = "[email protected]"
msg.as_string()
```
</details>
**Any associated stack traces or error logs**
```
# Sent to a gmail recipient
opendkim[839]: 278839FE69: can't parse From: header value ' =?utf-8?q?Int=C3=A9_CTF_=3Cctf-noreply=40example=2Ecom=3E?='
postfix/smtp[8076]: 278839FE69: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[74.125.206.27]:25, delay=0.75, delays=0.31/0/0.23/0.21, dsn=5.7.1, status=bounced (host gmail-smtp-in.l.google.com[74.125.206.27] said: 550-5.7.1 [185.132.74.134 14] Messages missing a valid address in From: 550 5.7.1 header, or having no From: header, are not accepted. p64si4801636wmp.124 - gsmtp (in reply to end of DATA command))
# Sent to another recipient, with another relay server
postfix/smtpd[7858]: NOQUEUE: reject: RCPT from mailcube2.domain.fr[193.51.52.6]: 550 5.1.1 <[email protected]>: Recipient address rejected: User unknown
in virtual mailbox table; from=<> to=<[email protected]> proto=ESMTP helo=<mailcube2.domain.fr>
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/utils/email/smtp.py`
Content:
```
1 import smtplib
2 from email.mime.text import MIMEText
3 from socket import timeout
4
5 from CTFd.utils import get_app_config, get_config
6
7
8 def get_smtp(host, port, username=None, password=None, TLS=None, SSL=None, auth=None):
9 if SSL is None:
10 smtp = smtplib.SMTP(host, port, timeout=3)
11 else:
12 smtp = smtplib.SMTP_SSL(host, port, timeout=3)
13
14 if TLS:
15 smtp.starttls()
16
17 if auth:
18 smtp.login(username, password)
19 return smtp
20
21
22 def sendmail(addr, text, subject):
23 ctf_name = get_config("ctf_name")
24 mailfrom_addr = get_config("mailfrom_addr") or get_app_config("MAILFROM_ADDR")
25 mailfrom_addr = "{} <{}>".format(ctf_name, mailfrom_addr)
26
27 data = {
28 "host": get_config("mail_server") or get_app_config("MAIL_SERVER"),
29 "port": int(get_config("mail_port") or get_app_config("MAIL_PORT")),
30 }
31 username = get_config("mail_username") or get_app_config("MAIL_USERNAME")
32 password = get_config("mail_password") or get_app_config("MAIL_PASSWORD")
33 TLS = get_config("mail_tls") or get_app_config("MAIL_TLS")
34 SSL = get_config("mail_ssl") or get_app_config("MAIL_SSL")
35 auth = get_config("mail_useauth") or get_app_config("MAIL_USEAUTH")
36
37 if username:
38 data["username"] = username
39 if password:
40 data["password"] = password
41 if TLS:
42 data["TLS"] = TLS
43 if SSL:
44 data["SSL"] = SSL
45 if auth:
46 data["auth"] = auth
47
48 try:
49 smtp = get_smtp(**data)
50 msg = MIMEText(text)
51 msg["Subject"] = subject
52 msg["From"] = mailfrom_addr
53 msg["To"] = addr
54
55 smtp.sendmail(msg["From"], [msg["To"]], msg.as_string())
56 smtp.quit()
57 return True, "Email sent"
58 except smtplib.SMTPException as e:
59 return False, str(e)
60 except timeout:
61 return False, "SMTP server connection timed out"
62 except Exception as e:
63 return False, str(e)
64
```
Path: `CTFd/utils/config/__init__.py`
Content:
```
1 import os
2 import time
3
4 from flask import current_app as app
5
6 from CTFd.utils import get_config
7 from CTFd.utils.modes import TEAMS_MODE, USERS_MODE
8
9
10 def ctf_name():
11 name = get_config("ctf_name")
12 return name if name else "CTFd"
13
14
15 def user_mode():
16 return get_config("user_mode")
17
18
19 def is_users_mode():
20 return user_mode() == USERS_MODE
21
22
23 def is_teams_mode():
24 return user_mode() == TEAMS_MODE
25
26
27 def ctf_logo():
28 return get_config("ctf_logo")
29
30
31 def ctf_theme():
32 theme = get_config("ctf_theme")
33 return theme if theme else ""
34
35
36 def is_setup():
37 return bool(get_config("setup")) is True
38
39
40 def is_scoreboard_frozen():
41 freeze = get_config("freeze")
42
43 if freeze:
44 freeze = int(freeze)
45 if freeze < time.time():
46 return True
47
48 return False
49
50
51 def can_send_mail():
52 return mailserver() or mailgun()
53
54
55 def get_mail_provider():
56 if app.config.get("MAIL_SERVER") and app.config.get("MAIL_PORT"):
57 return "smtp"
58 if get_config("mail_server") and get_config("mail_port"):
59 return "smtp"
60 if app.config.get("MAILGUN_API_KEY") and app.config.get("MAILGUN_BASE_URL"):
61 return "mailgun"
62 if get_config("mailgun_api_key") and get_config("mailgun_base_url"):
63 return "mailgun"
64
65
66 def mailgun():
67 if app.config.get("MAILGUN_API_KEY") and app.config.get("MAILGUN_BASE_URL"):
68 return True
69 if get_config("mailgun_api_key") and get_config("mailgun_base_url"):
70 return True
71 return False
72
73
74 def mailserver():
75 if app.config.get("MAIL_SERVER") and app.config.get("MAIL_PORT"):
76 return True
77 if get_config("mail_server") and get_config("mail_port"):
78 return True
79 return False
80
81
82 def get_themes():
83 dir = os.path.join(app.root_path, "themes")
84 return [
85 name
86 for name in os.listdir(dir)
87 if os.path.isdir(os.path.join(dir, name)) and name != "admin"
88 ]
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/CTFd/utils/config/__init__.py b/CTFd/utils/config/__init__.py
--- a/CTFd/utils/config/__init__.py
+++ b/CTFd/utils/config/__init__.py
@@ -53,14 +53,14 @@
def get_mail_provider():
- if app.config.get("MAIL_SERVER") and app.config.get("MAIL_PORT"):
- return "smtp"
if get_config("mail_server") and get_config("mail_port"):
return "smtp"
- if app.config.get("MAILGUN_API_KEY") and app.config.get("MAILGUN_BASE_URL"):
- return "mailgun"
if get_config("mailgun_api_key") and get_config("mailgun_base_url"):
return "mailgun"
+ if app.config.get("MAIL_SERVER") and app.config.get("MAIL_PORT"):
+ return "smtp"
+ if app.config.get("MAILGUN_API_KEY") and app.config.get("MAILGUN_BASE_URL"):
+ return "mailgun"
def mailgun():
diff --git a/CTFd/utils/email/smtp.py b/CTFd/utils/email/smtp.py
--- a/CTFd/utils/email/smtp.py
+++ b/CTFd/utils/email/smtp.py
@@ -1,5 +1,9 @@
+import six
import smtplib
from email.mime.text import MIMEText
+
+if six.PY3:
+ from email.message import EmailMessage
from socket import timeout
from CTFd.utils import get_app_config, get_config
@@ -47,12 +51,22 @@
try:
smtp = get_smtp(**data)
- msg = MIMEText(text)
+
+ if six.PY2:
+ msg = MIMEText(text)
+ else:
+ msg = EmailMessage()
+ msg.set_content(text)
+
msg["Subject"] = subject
msg["From"] = mailfrom_addr
msg["To"] = addr
- smtp.sendmail(msg["From"], [msg["To"]], msg.as_string())
+ if six.PY2:
+ smtp.sendmail(msg["From"], [msg["To"]], msg.as_string())
+ else:
+ smtp.send_message(msg)
+
smtp.quit()
return True, "Email sent"
except smtplib.SMTPException as e:
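Aside: a minimal, self-contained sketch of the `EmailMessage` behaviour the patch above relies on. The CTF name, addresses, subject, and body below are placeholders, and this is plain stdlib usage for illustration rather than CTFd's actual code path.

```python
# Minimal sketch: with EmailMessage, a non-ASCII display name is encoded on its
# own, so the addr-spec stays plain ASCII and remains parsable by relays.
from email.message import EmailMessage
from email.headerregistry import Address

msg = EmailMessage()
msg.set_content("password reset link goes here")            # placeholder body
msg["Subject"] = "Password reset"                            # placeholder subject
msg["From"] = Address(display_name="Inté CTF",               # accented CTF name
                      addr_spec="[email protected]")
msg["To"] = "[email protected]"

# Serialize and show the From header: only the display name may become an
# RFC 2047 encoded-word; the "<[email protected]>" part is left untouched.
for line in bytes(msg).decode("ascii", errors="replace").splitlines():
    if line.startswith("From:"):
        print(line)
```

The exact encoded-word layout can vary, but the angle-bracketed address itself is never wrapped in one, which is the property the bounce messages in the issue complain about.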
| {"golden_diff": "diff --git a/CTFd/utils/config/__init__.py b/CTFd/utils/config/__init__.py\n--- a/CTFd/utils/config/__init__.py\n+++ b/CTFd/utils/config/__init__.py\n@@ -53,14 +53,14 @@\n \n \n def get_mail_provider():\n- if app.config.get(\"MAIL_SERVER\") and app.config.get(\"MAIL_PORT\"):\n- return \"smtp\"\n if get_config(\"mail_server\") and get_config(\"mail_port\"):\n return \"smtp\"\n- if app.config.get(\"MAILGUN_API_KEY\") and app.config.get(\"MAILGUN_BASE_URL\"):\n- return \"mailgun\"\n if get_config(\"mailgun_api_key\") and get_config(\"mailgun_base_url\"):\n return \"mailgun\"\n+ if app.config.get(\"MAIL_SERVER\") and app.config.get(\"MAIL_PORT\"):\n+ return \"smtp\"\n+ if app.config.get(\"MAILGUN_API_KEY\") and app.config.get(\"MAILGUN_BASE_URL\"):\n+ return \"mailgun\"\n \n \n def mailgun():\ndiff --git a/CTFd/utils/email/smtp.py b/CTFd/utils/email/smtp.py\n--- a/CTFd/utils/email/smtp.py\n+++ b/CTFd/utils/email/smtp.py\n@@ -1,5 +1,9 @@\n+import six\n import smtplib\n from email.mime.text import MIMEText\n+\n+if six.PY3:\n+ from email.message import EmailMessage\n from socket import timeout\n \n from CTFd.utils import get_app_config, get_config\n@@ -47,12 +51,22 @@\n \n try:\n smtp = get_smtp(**data)\n- msg = MIMEText(text)\n+\n+ if six.PY2:\n+ msg = MIMEText(text)\n+ else:\n+ msg = EmailMessage()\n+ msg.set_content(text)\n+\n msg[\"Subject\"] = subject\n msg[\"From\"] = mailfrom_addr\n msg[\"To\"] = addr\n \n- smtp.sendmail(msg[\"From\"], [msg[\"To\"]], msg.as_string())\n+ if six.PY2:\n+ smtp.sendmail(msg[\"From\"], [msg[\"To\"]], msg.as_string())\n+ else:\n+ smtp.send_message(msg)\n+\n smtp.quit()\n return True, \"Email sent\"\n except smtplib.SMTPException as e:\n", "issue": "Email cannot be sent if CTF name has an accent\n<!--\r\nIf this is a bug report please fill out the template below.\r\n\r\nIf this is a feature request please describe the behavior that you'd like to see.\r\n-->\r\n\r\n**Environment**:\r\n\r\n - CTFd Version/Commit: 2.3.2\r\n - Operating System: Debian 10.3, Python 3.7.3\r\n - Web Browser and Version: Firefox Nightly, 76.0a1 (2020-04-06)\r\n\r\n**What happened?**\r\n\r\nAfter a fresh install of CTFd we set the competition name to \"Int\u00e9 CTF\". We configure the email server to use our Postfix relay, and test it with the \"forgotten password\" feature.\r\nThe email never arrives and is bounced by relay servers because the `From` header is invalid.\r\n\r\nThe actual issue is because the `From` header has the form \"CTF name <email address>\" (see [here](https://github.com/CTFd/CTFd/blob/master/CTFd/utils/email/smtp.py#L25)). Once encoded in the `msg.as_string()` ([there](https://github.com/CTFd/CTFd/blob/master/CTFd/utils/email/smtp.py#L55)) it is entirely encoded in UTF-8 looking like this `=?utf-8?q?Int=C3=A9_CTF_=3Cctf-noreply=40example=2Ecom=3E?=`, which is invalid against RFC 2407 section 5 (\"An 'encoded-word' MUST NOT appear in any portion of an 'addr-spec'\"). A correct form would be `=?utf-8?q?Int=C3=A9?= CTF <[email protected]>`.\r\n\r\nI have a patch ready but it uses `from email.message import EmailMessage` which is available in Python 3.6+. 
I'll be happy to open a PR if you confirm me that support for lower Python version can be dropped ^^\r\n\r\n<details>\r\n<summary>Patch for `CTFd/utils/email/smtp.py`</summary>\r\n\r\n```diff\r\n-from email.mime.text import MIMEText\r\n+from email.message import EmailMessage\r\n\r\n\r\n# around line 50\r\n- msg = MIMEText(text)\r\n+ msg = EmailMessage()\r\n+ msg.set_content(text)\r\n msg[\"Subject\"] = subject\r\n msg[\"From\"] = mailfrom_addr\r\n msg[\"To\"] = addr\r\n \r\n- smtp.sendmail(msg[\"From\"], [msg[\"To\"]], msg.as_string())\r\n+ smtp.send_message(msg)\r\n```\r\n\r\n</details>\r\n\r\n**How to reproduce your issue**\r\n\r\n- setup a new instance of CTFd\r\n- configure the competition name with non-ascii characters\r\n- configure the email server to use a \"real\" mail server (not Mailgun)\r\n- try to send an email (with the \"forgotten password\" for example)\r\n- see the mail server logs\r\n\r\n<details>\r\n<summary>\r\nYou can see the difference between both methods (`MIMEText` and `EmailMessage`) with the following snippet\r\n</summary>\r\n\r\n```python\r\nfrom email.mime.text import MIMEText\r\nmsg = MIMEText(\"This is a message with accents \u00e9\u00e8\u00e7\u00e0\")\r\nmsg[\"From\"] = \"Ren\u00e9 C\u00f4ti <[email protected]>\"\r\nmsg[\"To\"] = \"[email protected]\"\r\nmsg.as_string()\r\n\r\nfrom email.message import EmailMessage\r\nmsg = EmailMessage()\r\nmsg.set_content(\"This is a message with accents \u00e9\u00e8\u00e7\u00e0\")\r\nmsg[\"From\"] = \"Ren\u00e9 C\u00f4ti <[email protected]>\"\r\nmsg[\"To\"] = \"[email protected]\"\r\nmsg.as_string()\r\n```\r\n\r\n</details>\r\n\r\n**Any associated stack traces or error logs**\r\n```\r\n# Sent to a gmail recipient\r\nopendkim[839]: 278839FE69: can't parse From: header value ' =?utf-8?q?Int=C3=A9_CTF_=3Cctf-noreply=40example=2Ecom=3E?='\r\npostfix/smtp[8076]: 278839FE69: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[74.125.206.27]:25, delay=0.75, delays=0.31/0/0.23/0.21, dsn=5.7.1, status=bounced (host gmail-smtp-in.l.google.com[74.125.206.27] said: 550-5.7.1 [185.132.74.134 14] Messages missing a valid address in From: 550 5.7.1 header, or having no From: header, are not accepted. 
p64si4801636wmp.124 - gsmtp (in reply to end of DATA command))\r\n\r\n# Sent to another recipient, with another relay server\r\npostfix/smtpd[7858]: NOQUEUE: reject: RCPT from mailcube2.domain.fr[193.51.52.6]: 550 5.1.1 <[email protected]>: Recipient address rejected: User unknown\r\n in virtual mailbox table; from=<> to=<[email protected]> proto=ESMTP helo=<mailcube2.domain.fr>\r\n```\n", "before_files": [{"content": "import smtplib\nfrom email.mime.text import MIMEText\nfrom socket import timeout\n\nfrom CTFd.utils import get_app_config, get_config\n\n\ndef get_smtp(host, port, username=None, password=None, TLS=None, SSL=None, auth=None):\n if SSL is None:\n smtp = smtplib.SMTP(host, port, timeout=3)\n else:\n smtp = smtplib.SMTP_SSL(host, port, timeout=3)\n\n if TLS:\n smtp.starttls()\n\n if auth:\n smtp.login(username, password)\n return smtp\n\n\ndef sendmail(addr, text, subject):\n ctf_name = get_config(\"ctf_name\")\n mailfrom_addr = get_config(\"mailfrom_addr\") or get_app_config(\"MAILFROM_ADDR\")\n mailfrom_addr = \"{} <{}>\".format(ctf_name, mailfrom_addr)\n\n data = {\n \"host\": get_config(\"mail_server\") or get_app_config(\"MAIL_SERVER\"),\n \"port\": int(get_config(\"mail_port\") or get_app_config(\"MAIL_PORT\")),\n }\n username = get_config(\"mail_username\") or get_app_config(\"MAIL_USERNAME\")\n password = get_config(\"mail_password\") or get_app_config(\"MAIL_PASSWORD\")\n TLS = get_config(\"mail_tls\") or get_app_config(\"MAIL_TLS\")\n SSL = get_config(\"mail_ssl\") or get_app_config(\"MAIL_SSL\")\n auth = get_config(\"mail_useauth\") or get_app_config(\"MAIL_USEAUTH\")\n\n if username:\n data[\"username\"] = username\n if password:\n data[\"password\"] = password\n if TLS:\n data[\"TLS\"] = TLS\n if SSL:\n data[\"SSL\"] = SSL\n if auth:\n data[\"auth\"] = auth\n\n try:\n smtp = get_smtp(**data)\n msg = MIMEText(text)\n msg[\"Subject\"] = subject\n msg[\"From\"] = mailfrom_addr\n msg[\"To\"] = addr\n\n smtp.sendmail(msg[\"From\"], [msg[\"To\"]], msg.as_string())\n smtp.quit()\n return True, \"Email sent\"\n except smtplib.SMTPException as e:\n return False, str(e)\n except timeout:\n return False, \"SMTP server connection timed out\"\n except Exception as e:\n return False, str(e)\n", "path": "CTFd/utils/email/smtp.py"}, {"content": "import os\nimport time\n\nfrom flask import current_app as app\n\nfrom CTFd.utils import get_config\nfrom CTFd.utils.modes import TEAMS_MODE, USERS_MODE\n\n\ndef ctf_name():\n name = get_config(\"ctf_name\")\n return name if name else \"CTFd\"\n\n\ndef user_mode():\n return get_config(\"user_mode\")\n\n\ndef is_users_mode():\n return user_mode() == USERS_MODE\n\n\ndef is_teams_mode():\n return user_mode() == TEAMS_MODE\n\n\ndef ctf_logo():\n return get_config(\"ctf_logo\")\n\n\ndef ctf_theme():\n theme = get_config(\"ctf_theme\")\n return theme if theme else \"\"\n\n\ndef is_setup():\n return bool(get_config(\"setup\")) is True\n\n\ndef is_scoreboard_frozen():\n freeze = get_config(\"freeze\")\n\n if freeze:\n freeze = int(freeze)\n if freeze < time.time():\n return True\n\n return False\n\n\ndef can_send_mail():\n return mailserver() or mailgun()\n\n\ndef get_mail_provider():\n if app.config.get(\"MAIL_SERVER\") and app.config.get(\"MAIL_PORT\"):\n return \"smtp\"\n if get_config(\"mail_server\") and get_config(\"mail_port\"):\n return \"smtp\"\n if app.config.get(\"MAILGUN_API_KEY\") and app.config.get(\"MAILGUN_BASE_URL\"):\n return \"mailgun\"\n if get_config(\"mailgun_api_key\") and get_config(\"mailgun_base_url\"):\n return 
\"mailgun\"\n\n\ndef mailgun():\n if app.config.get(\"MAILGUN_API_KEY\") and app.config.get(\"MAILGUN_BASE_URL\"):\n return True\n if get_config(\"mailgun_api_key\") and get_config(\"mailgun_base_url\"):\n return True\n return False\n\n\ndef mailserver():\n if app.config.get(\"MAIL_SERVER\") and app.config.get(\"MAIL_PORT\"):\n return True\n if get_config(\"mail_server\") and get_config(\"mail_port\"):\n return True\n return False\n\n\ndef get_themes():\n dir = os.path.join(app.root_path, \"themes\")\n return [\n name\n for name in os.listdir(dir)\n if os.path.isdir(os.path.join(dir, name)) and name != \"admin\"\n ]\n", "path": "CTFd/utils/config/__init__.py"}], "after_files": [{"content": "import six\nimport smtplib\nfrom email.mime.text import MIMEText\n\nif six.PY3:\n from email.message import EmailMessage\nfrom socket import timeout\n\nfrom CTFd.utils import get_app_config, get_config\n\n\ndef get_smtp(host, port, username=None, password=None, TLS=None, SSL=None, auth=None):\n if SSL is None:\n smtp = smtplib.SMTP(host, port, timeout=3)\n else:\n smtp = smtplib.SMTP_SSL(host, port, timeout=3)\n\n if TLS:\n smtp.starttls()\n\n if auth:\n smtp.login(username, password)\n return smtp\n\n\ndef sendmail(addr, text, subject):\n ctf_name = get_config(\"ctf_name\")\n mailfrom_addr = get_config(\"mailfrom_addr\") or get_app_config(\"MAILFROM_ADDR\")\n mailfrom_addr = \"{} <{}>\".format(ctf_name, mailfrom_addr)\n\n data = {\n \"host\": get_config(\"mail_server\") or get_app_config(\"MAIL_SERVER\"),\n \"port\": int(get_config(\"mail_port\") or get_app_config(\"MAIL_PORT\")),\n }\n username = get_config(\"mail_username\") or get_app_config(\"MAIL_USERNAME\")\n password = get_config(\"mail_password\") or get_app_config(\"MAIL_PASSWORD\")\n TLS = get_config(\"mail_tls\") or get_app_config(\"MAIL_TLS\")\n SSL = get_config(\"mail_ssl\") or get_app_config(\"MAIL_SSL\")\n auth = get_config(\"mail_useauth\") or get_app_config(\"MAIL_USEAUTH\")\n\n if username:\n data[\"username\"] = username\n if password:\n data[\"password\"] = password\n if TLS:\n data[\"TLS\"] = TLS\n if SSL:\n data[\"SSL\"] = SSL\n if auth:\n data[\"auth\"] = auth\n\n try:\n smtp = get_smtp(**data)\n\n if six.PY2:\n msg = MIMEText(text)\n else:\n msg = EmailMessage()\n msg.set_content(text)\n\n msg[\"Subject\"] = subject\n msg[\"From\"] = mailfrom_addr\n msg[\"To\"] = addr\n\n if six.PY2:\n smtp.sendmail(msg[\"From\"], [msg[\"To\"]], msg.as_string())\n else:\n smtp.send_message(msg)\n\n smtp.quit()\n return True, \"Email sent\"\n except smtplib.SMTPException as e:\n return False, str(e)\n except timeout:\n return False, \"SMTP server connection timed out\"\n except Exception as e:\n return False, str(e)\n", "path": "CTFd/utils/email/smtp.py"}, {"content": "import os\nimport time\n\nfrom flask import current_app as app\n\nfrom CTFd.utils import get_config\nfrom CTFd.utils.modes import TEAMS_MODE, USERS_MODE\n\n\ndef ctf_name():\n name = get_config(\"ctf_name\")\n return name if name else \"CTFd\"\n\n\ndef user_mode():\n return get_config(\"user_mode\")\n\n\ndef is_users_mode():\n return user_mode() == USERS_MODE\n\n\ndef is_teams_mode():\n return user_mode() == TEAMS_MODE\n\n\ndef ctf_logo():\n return get_config(\"ctf_logo\")\n\n\ndef ctf_theme():\n theme = get_config(\"ctf_theme\")\n return theme if theme else \"\"\n\n\ndef is_setup():\n return bool(get_config(\"setup\")) is True\n\n\ndef is_scoreboard_frozen():\n freeze = get_config(\"freeze\")\n\n if freeze:\n freeze = int(freeze)\n if freeze < time.time():\n return True\n\n return 
False\n\n\ndef can_send_mail():\n return mailserver() or mailgun()\n\n\ndef get_mail_provider():\n if get_config(\"mail_server\") and get_config(\"mail_port\"):\n return \"smtp\"\n if get_config(\"mailgun_api_key\") and get_config(\"mailgun_base_url\"):\n return \"mailgun\"\n if app.config.get(\"MAIL_SERVER\") and app.config.get(\"MAIL_PORT\"):\n return \"smtp\"\n if app.config.get(\"MAILGUN_API_KEY\") and app.config.get(\"MAILGUN_BASE_URL\"):\n return \"mailgun\"\n\n\ndef mailgun():\n if app.config.get(\"MAILGUN_API_KEY\") and app.config.get(\"MAILGUN_BASE_URL\"):\n return True\n if get_config(\"mailgun_api_key\") and get_config(\"mailgun_base_url\"):\n return True\n return False\n\n\ndef mailserver():\n if app.config.get(\"MAIL_SERVER\") and app.config.get(\"MAIL_PORT\"):\n return True\n if get_config(\"mail_server\") and get_config(\"mail_port\"):\n return True\n return False\n\n\ndef get_themes():\n dir = os.path.join(app.root_path, \"themes\")\n return [\n name\n for name in os.listdir(dir)\n if os.path.isdir(os.path.join(dir, name)) and name != \"admin\"\n ]\n", "path": "CTFd/utils/config/__init__.py"}]} | 2,743 | 510 |
gh_patches_debug_29165 | rasdani/github-patches | git_diff | spack__spack-7545 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
gcc v5.4.0 build fails due to mpfr patching problem
There seems to be a patch application issue in the mpfr-3.1.5 build procedure
I was expecting something like my previous build:
```
==> Installing mpfr
==> Fetching file://MIRROR_DIR/mirror/mpfr/mpfr-3.1.5.tar.bz2
==> Staging archive: WORKING_DIR/var/spack/stage/mpfr-3.1.5-rmi7bmi3oaqduvjown2v46snr6ps2zr5/mpfr-3.1.5.tar.bz2
==> Created stage in WORKING_DIR/var/spack/stage/mpfr-3.1.5-rmi7bmi3oaqduvjown2v46snr6ps2zr5
==> Applied patch vasprintf.patch
==> Applied patch strtofr.patch
==> Building mpfr [AutotoolsPackage]
==> Executing phase: 'autoreconf'
==> Executing phase: 'configure'
==> Executing phase: 'build'
==> Executing phase: 'install'
==> Successfully installed mpfr
Fetch: 0.04s. Build: 9.54s. Total: 9.58s.
[+] WORKING_DIR/opt/spack/linux-centos7-x86_64/gcc-4.8.5/mpfr-3.1.5-rmi7bmi3oaqduvjown2v46snr6ps2zr5
```
When I tried to build the gcc compiler yesterday (and again this morning) the results were strange:
```
==> Installing mpfr
1 out of 1 hunk FAILED -- saving rejects to file VERSION.rej
1 out of 1 hunk FAILED -- saving rejects to file src/mpfr.h.rej
1 out of 1 hunk FAILED -- saving rejects to file src/version.c.rej
==> Fetching file://MIRROR_DIR/mirror/mpfr/mpfr-3.1.5.tar.bz2
==> Staging archive: WORKING_DIR/sat/spack/var/spack/stage/mpfr-3.1.5-rmi7bmi3oaqduvjown2v46snr6ps2zr5/mpfr-3.1.5.tar.bz2
==> Created stage in WORKING_DIR/sat/spack/var/spack/stage/mpfr-3.1.5-rmi7bmi3oaqduvjown2v46snr6ps2zr5
==> Patch strtofr.patch failed.
==> Error: ProcessError: Command exited with status 1:
'/usr/bin/patch' '-s' '-p' '1' '-i' 'WORKING_DIR/sat/spack/var/spack/repos/builtin/packages/mpfr/strtofr.patch' '-d' '.'
==> Error: [Errno 2] No such file or directory: 'WORKING_DIR/sat/spack/var/spack/stage/mpfr-3.1.5-rmi7bmi3oaqduvjown2v46snr6ps2zr5/mpfr-3.1.5/spack-build.out'
```
Not only the error, but also the order of the messages seems strange.
A clean clone of the spack repo made no difference
```console
$ spack install [email protected]
```
Default environment:
```linux-centos7-x86_64/gcc-4.8.5```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `var/spack/repos/builtin/packages/mpfr/package.py`
Content:
```
1 ##############################################################################
2 # Copyright (c) 2013-2017, Lawrence Livermore National Security, LLC.
3 # Produced at the Lawrence Livermore National Laboratory.
4 #
5 # This file is part of Spack.
6 # Created by Todd Gamblin, [email protected], All rights reserved.
7 # LLNL-CODE-647188
8 #
9 # For details, see https://github.com/spack/spack
10 # Please also see the NOTICE and LICENSE files for our notice and the LGPL.
11 #
12 # This program is free software; you can redistribute it and/or modify
13 # it under the terms of the GNU Lesser General Public License (as
14 # published by the Free Software Foundation) version 2.1, February 1999.
15 #
16 # This program is distributed in the hope that it will be useful, but
17 # WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
19 # conditions of the GNU Lesser General Public License for more details.
20 #
21 # You should have received a copy of the GNU Lesser General Public
22 # License along with this program; if not, write to the Free Software
23 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
24 ##############################################################################
25 from spack import *
26
27
28 class Mpfr(AutotoolsPackage):
29 """The MPFR library is a C library for multiple-precision
30 floating-point computations with correct rounding."""
31
32 homepage = "http://www.mpfr.org"
33 url = "https://ftp.gnu.org/gnu/mpfr/mpfr-3.1.5.tar.bz2"
34
35 version('3.1.5', 'b1d23a55588e3b2a13e3be66bc69fd8d')
36 version('3.1.4', 'b8a2f6b0e68bef46e53da2ac439e1cf4')
37 version('3.1.3', '5fdfa3cfa5c86514ee4a241a1affa138')
38 version('3.1.2', 'ee2c3ac63bf0c2359bf08fc3ee094c19')
39
40 # mpir is a drop-in replacement for gmp
41 depends_on('[email protected]:') # 4.2.3 or higher is recommended
42
43 patch('vasprintf.patch', when='@3.1.5')
44 patch('strtofr.patch', when='@3.1.5')
45
46 def configure_args(self):
47 args = [
48 '--with-gmp=' + self.spec['gmp'].prefix,
49 ]
50 return args
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/var/spack/repos/builtin/packages/mpfr/package.py b/var/spack/repos/builtin/packages/mpfr/package.py
--- a/var/spack/repos/builtin/packages/mpfr/package.py
+++ b/var/spack/repos/builtin/packages/mpfr/package.py
@@ -30,18 +30,33 @@
floating-point computations with correct rounding."""
homepage = "http://www.mpfr.org"
- url = "https://ftp.gnu.org/gnu/mpfr/mpfr-3.1.5.tar.bz2"
+ url = "https://ftp.gnu.org/gnu/mpfr/mpfr-4.0.1.tar.bz2"
+ version('4.0.1', '8c21d8ac7460493b2b9f3ef3cc610454')
+ version('4.0.0', 'ef619f3bb68039e35c4a219e06be72d0')
+ version('3.1.6', '320c28198def956aeacdb240b46b8969')
version('3.1.5', 'b1d23a55588e3b2a13e3be66bc69fd8d')
version('3.1.4', 'b8a2f6b0e68bef46e53da2ac439e1cf4')
version('3.1.3', '5fdfa3cfa5c86514ee4a241a1affa138')
version('3.1.2', 'ee2c3ac63bf0c2359bf08fc3ee094c19')
# mpir is a drop-in replacement for gmp
- depends_on('[email protected]:') # 4.2.3 or higher is recommended
+ depends_on('[email protected]:') # 4.2.3 or higher is recommended
+ depends_on('[email protected]:', when='@4.0.0:') # http://www.mpfr.org/mpfr-4.0.0/
- patch('vasprintf.patch', when='@3.1.5')
- patch('strtofr.patch', when='@3.1.5')
+ # Check the Bugs section of old release pages for patches.
+ # http://www.mpfr.org/mpfr-X.Y.Z/#bugs
+ patches = {
+ '3.1.6': '66a5d58364113a21405fc53f4a48f4e8',
+ '3.1.5': '1dc5fe65feb5607b89fe0f410d53b627',
+ '3.1.4': 'd124381573404fe83654c7d5a79aeabf',
+ '3.1.3': 'ebd1d835e0ae2fd8a9339210ccd1d0a8',
+ '3.1.2': '9f96a5c7cac1d6cd983ed9cf7d997074',
+ }
+
+ for ver, checksum in patches.items():
+ patch('http://www.mpfr.org/mpfr-{0}/allpatches'.format(ver),
+ when='@' + ver, sha256=checksum)
def configure_args(self):
args = [
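Aside: the failure reported above is local patch files no longer applying cleanly to the fetched source. Below is a hedged, stand-alone way to pre-check that with the same `patch` flags seen in the error message; the helper is illustrative and not part of Spack's API.

```python
# Illustrative helper: dry-run a patch against an unpacked source tree to see
# whether every hunk would apply (the issue shows "1 out of 1 hunk FAILED").
import subprocess
from pathlib import Path

def patch_applies(patch_file: Path, source_dir: Path, strip: int = 1) -> bool:
    """Return True if `patch --dry-run` reports that all hunks would apply."""
    result = subprocess.run(
        ["patch", "--dry-run", "-s", "-p", str(strip), "-i", str(patch_file)],
        cwd=source_dir,
        capture_output=True,
    )
    return result.returncode == 0

# Example with placeholder paths:
# patch_applies(Path("strtofr.patch"), Path("mpfr-3.1.5"))
```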
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/mpfr/package.py b/var/spack/repos/builtin/packages/mpfr/package.py\n--- a/var/spack/repos/builtin/packages/mpfr/package.py\n+++ b/var/spack/repos/builtin/packages/mpfr/package.py\n@@ -30,18 +30,33 @@\n floating-point computations with correct rounding.\"\"\"\n \n homepage = \"http://www.mpfr.org\"\n- url = \"https://ftp.gnu.org/gnu/mpfr/mpfr-3.1.5.tar.bz2\"\n+ url = \"https://ftp.gnu.org/gnu/mpfr/mpfr-4.0.1.tar.bz2\"\n \n+ version('4.0.1', '8c21d8ac7460493b2b9f3ef3cc610454')\n+ version('4.0.0', 'ef619f3bb68039e35c4a219e06be72d0')\n+ version('3.1.6', '320c28198def956aeacdb240b46b8969')\n version('3.1.5', 'b1d23a55588e3b2a13e3be66bc69fd8d')\n version('3.1.4', 'b8a2f6b0e68bef46e53da2ac439e1cf4')\n version('3.1.3', '5fdfa3cfa5c86514ee4a241a1affa138')\n version('3.1.2', 'ee2c3ac63bf0c2359bf08fc3ee094c19')\n \n # mpir is a drop-in replacement for gmp\n- depends_on('[email protected]:') # 4.2.3 or higher is recommended\n+ depends_on('[email protected]:') # 4.2.3 or higher is recommended\n+ depends_on('[email protected]:', when='@4.0.0:') # http://www.mpfr.org/mpfr-4.0.0/\n \n- patch('vasprintf.patch', when='@3.1.5')\n- patch('strtofr.patch', when='@3.1.5')\n+ # Check the Bugs section of old release pages for patches.\n+ # http://www.mpfr.org/mpfr-X.Y.Z/#bugs\n+ patches = {\n+ '3.1.6': '66a5d58364113a21405fc53f4a48f4e8',\n+ '3.1.5': '1dc5fe65feb5607b89fe0f410d53b627',\n+ '3.1.4': 'd124381573404fe83654c7d5a79aeabf',\n+ '3.1.3': 'ebd1d835e0ae2fd8a9339210ccd1d0a8',\n+ '3.1.2': '9f96a5c7cac1d6cd983ed9cf7d997074',\n+ }\n+\n+ for ver, checksum in patches.items():\n+ patch('http://www.mpfr.org/mpfr-{0}/allpatches'.format(ver),\n+ when='@' + ver, sha256=checksum)\n \n def configure_args(self):\n args = [\n", "issue": "gcc v5.4.0 build fails due to mpfr patching problem\nThere seems to be a patch application issue in the mpfr-3.1.5 build procedure\r\n\r\nI was expecting something like my previous build:\r\n```\r\n==> Installing mpfr\r\n==> Fetching file://MIRROR_DIR/mirror/mpfr/mpfr-3.1.5.tar.bz2\r\n==> Staging archive: WORKING_DIR/var/spack/stage/mpfr-3.1.5-rmi7bmi3oaqduvjown2v46snr6ps2zr5/mpfr-3.1.5.tar.bz2\r\n==> Created stage in WORKING_DIR/var/spack/stage/mpfr-3.1.5-rmi7bmi3oaqduvjown2v46snr6ps2zr5\r\n==> Applied patch vasprintf.patch\r\n==> Applied patch strtofr.patch\r\n==> Building mpfr [AutotoolsPackage]\r\n==> Executing phase: 'autoreconf'\r\n==> Executing phase: 'configure'\r\n==> Executing phase: 'build'\r\n==> Executing phase: 'install'\r\n==> Successfully installed mpfr\r\n Fetch: 0.04s. Build: 9.54s. 
Total: 9.58s.\r\n[+] WORKING_DIR/opt/spack/linux-centos7-x86_64/gcc-4.8.5/mpfr-3.1.5-rmi7bmi3oaqduvjown2v46snr6ps2zr5\r\n```\r\nWhen I tried to build the gcc compiler yesterday (and again this morning) the results were strange:\r\n```\r\n==> Installing mpfr\r\n1 out of 1 hunk FAILED -- saving rejects to file VERSION.rej\r\n1 out of 1 hunk FAILED -- saving rejects to file src/mpfr.h.rej\r\n1 out of 1 hunk FAILED -- saving rejects to file src/version.c.rej\r\n==> Fetching file://MIRROR_DIR/mirror/mpfr/mpfr-3.1.5.tar.bz2\r\n==> Staging archive: WORKING_DIR/sat/spack/var/spack/stage/mpfr-3.1.5-rmi7bmi3oaqduvjown2v46snr6ps2zr5/mpfr-3.1.5.tar.bz2\r\n==> Created stage in WORKING_DIR/sat/spack/var/spack/stage/mpfr-3.1.5-rmi7bmi3oaqduvjown2v46snr6ps2zr5\r\n==> Patch strtofr.patch failed.\r\n==> Error: ProcessError: Command exited with status 1:\r\n '/usr/bin/patch' '-s' '-p' '1' '-i' 'WORKING_DIR/sat/spack/var/spack/repos/builtin/packages/mpfr/strtofr.patch' '-d' '.'\r\n==> Error: [Errno 2] No such file or directory: 'WORKING_DIR/sat/spack/var/spack/stage/mpfr-3.1.5-rmi7bmi3oaqduvjown2v46snr6ps2zr5/mpfr-3.1.5/spack-build.out'\r\n```\r\nNot only the error, but the order of the messages seem strange.\r\n\r\nA clean clone of the spack repo made no difference\r\n```console\r\n$ spack install [email protected]\r\n```\r\n\r\nDefault environment:\r\n```linux-centos7-x86_64/gcc-4.8.5```\n", "before_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2017, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/spack/spack\n# Please also see the NOTICE and LICENSE files for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nfrom spack import *\n\n\nclass Mpfr(AutotoolsPackage):\n \"\"\"The MPFR library is a C library for multiple-precision\n floating-point computations with correct rounding.\"\"\"\n\n homepage = \"http://www.mpfr.org\"\n url = \"https://ftp.gnu.org/gnu/mpfr/mpfr-3.1.5.tar.bz2\"\n\n version('3.1.5', 'b1d23a55588e3b2a13e3be66bc69fd8d')\n version('3.1.4', 'b8a2f6b0e68bef46e53da2ac439e1cf4')\n version('3.1.3', '5fdfa3cfa5c86514ee4a241a1affa138')\n version('3.1.2', 'ee2c3ac63bf0c2359bf08fc3ee094c19')\n\n # mpir is a drop-in replacement for gmp\n depends_on('[email protected]:') # 4.2.3 or higher is recommended\n\n patch('vasprintf.patch', when='@3.1.5')\n patch('strtofr.patch', when='@3.1.5')\n\n def configure_args(self):\n args = [\n '--with-gmp=' + self.spec['gmp'].prefix,\n ]\n return args\n", "path": "var/spack/repos/builtin/packages/mpfr/package.py"}], "after_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2017, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/spack/spack\n# Please also see the NOTICE and LICENSE files for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nfrom spack import *\n\n\nclass Mpfr(AutotoolsPackage):\n \"\"\"The MPFR library is a C library for multiple-precision\n floating-point computations with correct rounding.\"\"\"\n\n homepage = \"http://www.mpfr.org\"\n url = \"https://ftp.gnu.org/gnu/mpfr/mpfr-4.0.1.tar.bz2\"\n\n version('4.0.1', '8c21d8ac7460493b2b9f3ef3cc610454')\n version('4.0.0', 'ef619f3bb68039e35c4a219e06be72d0')\n version('3.1.6', '320c28198def956aeacdb240b46b8969')\n version('3.1.5', 'b1d23a55588e3b2a13e3be66bc69fd8d')\n version('3.1.4', 'b8a2f6b0e68bef46e53da2ac439e1cf4')\n version('3.1.3', '5fdfa3cfa5c86514ee4a241a1affa138')\n version('3.1.2', 'ee2c3ac63bf0c2359bf08fc3ee094c19')\n\n # mpir is a drop-in replacement for gmp\n depends_on('[email protected]:') # 4.2.3 or higher is recommended\n depends_on('[email protected]:', when='@4.0.0:') # http://www.mpfr.org/mpfr-4.0.0/\n\n # Check the Bugs section of old release pages for patches.\n # http://www.mpfr.org/mpfr-X.Y.Z/#bugs\n patches = {\n '3.1.6': '66a5d58364113a21405fc53f4a48f4e8',\n '3.1.5': '1dc5fe65feb5607b89fe0f410d53b627',\n '3.1.4': 'd124381573404fe83654c7d5a79aeabf',\n '3.1.3': 'ebd1d835e0ae2fd8a9339210ccd1d0a8',\n '3.1.2': '9f96a5c7cac1d6cd983ed9cf7d997074',\n }\n\n for ver, checksum in patches.items():\n patch('http://www.mpfr.org/mpfr-{0}/allpatches'.format(ver),\n when='@' + ver, sha256=checksum)\n\n def configure_args(self):\n args = [\n '--with-gmp=' + self.spec['gmp'].prefix,\n ]\n return args\n", "path": "var/spack/repos/builtin/packages/mpfr/package.py"}]} | 1,729 | 853 |
gh_patches_debug_4654 | rasdani/github-patches | git_diff | NVIDIA__NeMo-2546 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
vad_infer.py script fails with `IndexError` if option `--dont_auto_split` is not set
**Describe the bug**
`NeMo/examples/asr/vad_infer.py` script fails with `IndexError` if the option `--dont_auto_split` is not set and at least one `.wav` file is long enough that the `nemo.collections.asr.parts.utils.vad_utils.prepare_manifest()` function splits the `.wav` file.
**Steps/Code to reproduce bug**
```
mkdir -p ~/debug_data
wget http://i13pc106.ira.uka.de/~jniehues/IWSLT-SLT/data/eval/en-de/IWSLT-SLT.tst2019.en-de.tgz -O ~/debug_data/IWSLT-SLT.tst2019.en-de.tgz
tar xzf ~/debug_data/IWSLT-SLT.tst2019.en-de.tgz -C ~/debug_data/
cd ~/NeMo/examples/asr/
wget https://raw.githubusercontent.com/NVIDIA/NeMo/feat/asr/iwslt_audio_to_nemo_format/scripts/dataset_processing/prepare_iwslt_audio_data.py
python prepare_iwslt_audio_data.py -a ~/debug_data/IWSLT.tst2019/wavs/ -t ~/debug_data/IWSLT.tst2019/IWSLT.TED.tst2019.en-de.en.xml -o ~/debug_data/IWSLT.tst2019/manifest.json
python vad_infer.py --dataset ~/debug_data/IWSLT.tst2019/manifest.json --out_dir ~/debug_data/IWSLT.tst2019/vad --vad_model vad_marblenet
```
**Expected behavior**
No errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: pip install nemo_toolkit[all]
**Environment details**
If NVIDIA docker image is used you don't need to specify these.
Otherwise, please provide:
- OS version: Ubuntu 20.04.2 LTS
- PyTorch version: 1.8.1
- Python version: 3.8.10
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/asr/vad_infer.py`
Content:
```
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 During inference, we perform frame-level prediction by two approaches:
17 1) shift the window of length time_length (e.g. 0.63s) by shift_length (e.g. 10ms) to generate the frame and use the prediction of the window to represent the label for the frame;
18 [this script demonstrate how to do this approach]
19 2) generate predictions with overlapping input segments. Then a smoothing filter is applied to decide the label for a frame spanned by multiple segments.
20 [get frame level prediction by this script and use vad_overlap_posterior.py in NeMo/scripts/voice_activity_detection
21 One can also find posterior about converting frame level prediction
22 to speech/no-speech segment in start and end times format in that script.]
23
24 Image https://raw.githubusercontent.com/NVIDIA/NeMo/main/tutorials/asr/images/vad_post_overlap_diagram.png
25 will help you understand this method.
26
27 Usage:
28 python vad_infer.py --vad_model="vad_marblenet" --dataset=<FULL PATH OF MANIFEST TO BE PERFORMED INFERENCE ON> --out_dir='frame/demo' --time_length=0.63
29
30 """
31
32
33 import json
34 import logging
35 import os
36 from argparse import ArgumentParser
37
38 import torch
39
40 from nemo.collections.asr.models import EncDecClassificationModel
41 from nemo.collections.asr.parts.utils.vad_utils import get_vad_stream_status, prepare_manifest
42 from nemo.utils import logging
43
44 try:
45 from torch.cuda.amp import autocast
46 except ImportError:
47 from contextlib import contextmanager
48
49 @contextmanager
50 def autocast(enabled=None):
51 yield
52
53
54 device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
55
56
57 def main():
58 parser = ArgumentParser()
59 parser.add_argument(
60 "--vad_model", type=str, default="MatchboxNet-VAD-3x2", required=False, help="Pass: 'MatchboxNet-VAD-3x2'"
61 )
62 parser.add_argument(
63 "--dataset",
64 type=str,
65 required=True,
66 help="Path of json file of evaluation data. Audio files should have unique names.",
67 )
68 parser.add_argument("--out_dir", type=str, default="vad_frame", help="Dir of your vad outputs")
69 parser.add_argument("--time_length", type=float, default=0.63)
70 parser.add_argument("--shift_length", type=float, default=0.01)
71 parser.add_argument("--normalize_audio", type=bool, default=False)
72 parser.add_argument("--num_workers", type=float, default=20)
73 parser.add_argument("--split_duration", type=float, default=400)
74 parser.add_argument(
75 "--dont_auto_split",
76 default=False,
77 action='store_true',
78 help="Whether to automatically split manifest entry by split_duration to avoid potential CUDA out of memory issue.",
79 )
80
81 args = parser.parse_args()
82
83 torch.set_grad_enabled(False)
84
85 if args.vad_model.endswith('.nemo'):
86 logging.info(f"Using local VAD model from {args.vad_model}")
87 vad_model = EncDecClassificationModel.restore_from(restore_path=args.vad_model)
88 else:
89 logging.info(f"Using NGC cloud VAD model {args.vad_model}")
90 vad_model = EncDecClassificationModel.from_pretrained(model_name=args.vad_model)
91
92 if not os.path.exists(args.out_dir):
93 os.mkdir(args.out_dir)
94
95 # Prepare manifest for streaming VAD
96 manifest_vad_input = args.dataset
97 if not args.dont_auto_split:
98 logging.info("Split long audio file to avoid CUDA memory issue")
99 logging.debug("Try smaller split_duration if you still have CUDA memory issue")
100 config = {
101 'manifest_filepath': manifest_vad_input,
102 'time_length': args.time_length,
103 'split_duration': args.split_duration,
104 'num_workers': args.num_workers,
105 }
106 manifest_vad_input = prepare_manifest(config)
107 else:
108 logging.warning(
109 "If you encounter CUDA memory issue, try splitting manifest entry by split_duration to avoid it."
110 )
111
112 # setup_test_data
113 vad_model.setup_test_data(
114 test_data_config={
115 'vad_stream': True,
116 'sample_rate': 16000,
117 'manifest_filepath': manifest_vad_input,
118 'labels': ['infer',],
119 'num_workers': args.num_workers,
120 'shuffle': False,
121 'time_length': args.time_length,
122 'shift_length': args.shift_length,
123 'trim_silence': False,
124 'normalize_audio': args.normalize_audio,
125 }
126 )
127
128 vad_model = vad_model.to(device)
129 vad_model.eval()
130
131 time_unit = int(args.time_length / args.shift_length)
132 trunc = int(time_unit / 2)
133 trunc_l = time_unit - trunc
134 all_len = 0
135
136 data = []
137 for line in open(args.dataset, 'r'):
138 file = json.loads(line)['audio_filepath'].split("/")[-1]
139 data.append(file.split(".wav")[0])
140 logging.info(f"Inference on {len(data)} audio files/json lines!")
141
142 status = get_vad_stream_status(data)
143 for i, test_batch in enumerate(vad_model.test_dataloader()):
144 test_batch = [x.to(device) for x in test_batch]
145 with autocast():
146 log_probs = vad_model(input_signal=test_batch[0], input_signal_length=test_batch[1])
147 probs = torch.softmax(log_probs, dim=-1)
148 pred = probs[:, 1]
149
150 if status[i] == 'start':
151 to_save = pred[:-trunc]
152 elif status[i] == 'next':
153 to_save = pred[trunc:-trunc_l]
154 elif status[i] == 'end':
155 to_save = pred[trunc_l:]
156 else:
157 to_save = pred
158
159 all_len += len(to_save)
160 outpath = os.path.join(args.out_dir, data[i] + ".frame")
161 with open(outpath, "a") as fout:
162 for f in range(len(to_save)):
163 fout.write('{0:0.4f}\n'.format(to_save[f]))
164 del test_batch
165 if status[i] == 'end' or status[i] == 'single':
166 logging.debug(f"Overall length of prediction of {data[i]} is {all_len}!")
167 all_len = 0
168
169
170 if __name__ == '__main__':
171 main()
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/asr/vad_infer.py b/examples/asr/vad_infer.py
--- a/examples/asr/vad_infer.py
+++ b/examples/asr/vad_infer.py
@@ -134,7 +134,7 @@
all_len = 0
data = []
- for line in open(args.dataset, 'r'):
+ for line in open(manifest_vad_input, 'r'):
file = json.loads(line)['audio_filepath'].split("/")[-1]
data.append(file.split(".wav")[0])
logging.info(f"Inference on {len(data)} audio files/json lines!")
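Aside: the one-line fix works because the per-file names are read from the same manifest the dataloader consumes. Below is a stand-alone sketch of that bookkeeping, assuming only the `audio_filepath` field shown in the record; the helper name is illustrative.

```python
# Illustrative: collect per-entry names from a NeMo-style JSON-lines manifest,
# so they line up with the (possibly auto-split) entries fed to the dataloader.
import json
from pathlib import Path

def manifest_entry_names(manifest_path: str) -> list:
    names = []
    with open(manifest_path, "r") as f:
        for line in f:
            entry = json.loads(line)
            names.append(Path(entry["audio_filepath"]).stem)  # e.g. "talk_0001"
    return names

# data = manifest_entry_names(manifest_vad_input)  # mirrors the patched loop
```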
| {"golden_diff": "diff --git a/examples/asr/vad_infer.py b/examples/asr/vad_infer.py\n--- a/examples/asr/vad_infer.py\n+++ b/examples/asr/vad_infer.py\n@@ -134,7 +134,7 @@\n all_len = 0\n \n data = []\n- for line in open(args.dataset, 'r'):\n+ for line in open(manifest_vad_input, 'r'):\n file = json.loads(line)['audio_filepath'].split(\"/\")[-1]\n data.append(file.split(\".wav\")[0])\n logging.info(f\"Inference on {len(data)} audio files/json lines!\")\n", "issue": "vad_infer.py script fails with `IndexError` if option `--dont_auto_split` is not set\n**Describe the bug**\r\n\r\n`NeMo/examples/asr/vad_infer.py` script fails with `IndexError` if option `--dont_auto_split` is not set and at least 1 `.wav` file is long enough so that `nemo.collections.asr.parts.utils.vad_utils.prepare_manifest()` function split the `.wav` file.\r\n\r\n**Steps/Code to reproduce bug**\r\n\r\n```\r\nmkdir -p ~/debug_data\r\nwget http://i13pc106.ira.uka.de/~jniehues/IWSLT-SLT/data/eval/en-de/IWSLT-SLT.tst2019.en-de.tgz -O ~/debug_data/IWSLT-SLT.tst2019.en-de.tgz\r\ntar xzf ~/debug_data/IWSLT-SLT.tst2019.en-de.tgz -C ~/debug_data/\r\ncd ~/NeMo/examples/asr/\r\nwget https://raw.githubusercontent.com/NVIDIA/NeMo/feat/asr/iwslt_audio_to_nemo_format/scripts/dataset_processing/prepare_iwslt_audio_data.py\r\npython prepare_iwslt_audio_data.py -a ~/debug_data/IWSLT.tst2019/wavs/ -t ~/debug_data/IWSLT.tst2019/IWSLT.TED.tst2019.en-de.en.xml -o ~/debug_data/IWSLT.tst2019/manifest.json\r\npython vad_infer.py --dataset ~/debug_data/IWSLT.tst2019/manifest.json --out_dir ~/debug_data/IWSLT.tst2019/vad --vad_model vad_marblenet\r\n```\r\n\r\n\r\n**Expected behavior**\r\n\r\nNo errors\r\n\r\n**Environment overview (please complete the following information)**\r\n\r\n - Environment location: Bare-metal\r\n - Method of NeMo install: pip install nemo_toolkit[all]\r\n\r\n**Environment details**\r\n\r\nIf NVIDIA docker image is used you don't need to specify these.\r\nOtherwise, please provide:\r\n- OS version: Ubuntu 20.04.2 LTS\r\n- PyTorch version: 1.8.1\r\n- Python version: 3.8.10\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nDuring inference, we perform frame-level prediction by two approaches: \n 1) shift the window of length time_length (e.g. 0.63s) by shift_length (e.g. 10ms) to generate the frame and use the prediction of the window to represent the label for the frame;\n [this script demonstrate how to do this approach]\n 2) generate predictions with overlapping input segments. Then a smoothing filter is applied to decide the label for a frame spanned by multiple segments. 
\n [get frame level prediction by this script and use vad_overlap_posterior.py in NeMo/scripts/voice_activity_detection\n One can also find posterior about converting frame level prediction \n to speech/no-speech segment in start and end times format in that script.]\n \n Image https://raw.githubusercontent.com/NVIDIA/NeMo/main/tutorials/asr/images/vad_post_overlap_diagram.png \n will help you understand this method.\n \nUsage:\npython vad_infer.py --vad_model=\"vad_marblenet\" --dataset=<FULL PATH OF MANIFEST TO BE PERFORMED INFERENCE ON> --out_dir='frame/demo' --time_length=0.63\n\n\"\"\"\n\n\nimport json\nimport logging\nimport os\nfrom argparse import ArgumentParser\n\nimport torch\n\nfrom nemo.collections.asr.models import EncDecClassificationModel\nfrom nemo.collections.asr.parts.utils.vad_utils import get_vad_stream_status, prepare_manifest\nfrom nemo.utils import logging\n\ntry:\n from torch.cuda.amp import autocast\nexcept ImportError:\n from contextlib import contextmanager\n\n @contextmanager\n def autocast(enabled=None):\n yield\n\n\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n\n\ndef main():\n parser = ArgumentParser()\n parser.add_argument(\n \"--vad_model\", type=str, default=\"MatchboxNet-VAD-3x2\", required=False, help=\"Pass: 'MatchboxNet-VAD-3x2'\"\n )\n parser.add_argument(\n \"--dataset\",\n type=str,\n required=True,\n help=\"Path of json file of evaluation data. Audio files should have unique names.\",\n )\n parser.add_argument(\"--out_dir\", type=str, default=\"vad_frame\", help=\"Dir of your vad outputs\")\n parser.add_argument(\"--time_length\", type=float, default=0.63)\n parser.add_argument(\"--shift_length\", type=float, default=0.01)\n parser.add_argument(\"--normalize_audio\", type=bool, default=False)\n parser.add_argument(\"--num_workers\", type=float, default=20)\n parser.add_argument(\"--split_duration\", type=float, default=400)\n parser.add_argument(\n \"--dont_auto_split\",\n default=False,\n action='store_true',\n help=\"Whether to automatically split manifest entry by split_duration to avoid potential CUDA out of memory issue.\",\n )\n\n args = parser.parse_args()\n\n torch.set_grad_enabled(False)\n\n if args.vad_model.endswith('.nemo'):\n logging.info(f\"Using local VAD model from {args.vad_model}\")\n vad_model = EncDecClassificationModel.restore_from(restore_path=args.vad_model)\n else:\n logging.info(f\"Using NGC cloud VAD model {args.vad_model}\")\n vad_model = EncDecClassificationModel.from_pretrained(model_name=args.vad_model)\n\n if not os.path.exists(args.out_dir):\n os.mkdir(args.out_dir)\n\n # Prepare manifest for streaming VAD\n manifest_vad_input = args.dataset\n if not args.dont_auto_split:\n logging.info(\"Split long audio file to avoid CUDA memory issue\")\n logging.debug(\"Try smaller split_duration if you still have CUDA memory issue\")\n config = {\n 'manifest_filepath': manifest_vad_input,\n 'time_length': args.time_length,\n 'split_duration': args.split_duration,\n 'num_workers': args.num_workers,\n }\n manifest_vad_input = prepare_manifest(config)\n else:\n logging.warning(\n \"If you encounter CUDA memory issue, try splitting manifest entry by split_duration to avoid it.\"\n )\n\n # setup_test_data\n vad_model.setup_test_data(\n test_data_config={\n 'vad_stream': True,\n 'sample_rate': 16000,\n 'manifest_filepath': manifest_vad_input,\n 'labels': ['infer',],\n 'num_workers': args.num_workers,\n 'shuffle': False,\n 'time_length': args.time_length,\n 'shift_length': args.shift_length,\n 
'trim_silence': False,\n 'normalize_audio': args.normalize_audio,\n }\n )\n\n vad_model = vad_model.to(device)\n vad_model.eval()\n\n time_unit = int(args.time_length / args.shift_length)\n trunc = int(time_unit / 2)\n trunc_l = time_unit - trunc\n all_len = 0\n\n data = []\n for line in open(args.dataset, 'r'):\n file = json.loads(line)['audio_filepath'].split(\"/\")[-1]\n data.append(file.split(\".wav\")[0])\n logging.info(f\"Inference on {len(data)} audio files/json lines!\")\n\n status = get_vad_stream_status(data)\n for i, test_batch in enumerate(vad_model.test_dataloader()):\n test_batch = [x.to(device) for x in test_batch]\n with autocast():\n log_probs = vad_model(input_signal=test_batch[0], input_signal_length=test_batch[1])\n probs = torch.softmax(log_probs, dim=-1)\n pred = probs[:, 1]\n\n if status[i] == 'start':\n to_save = pred[:-trunc]\n elif status[i] == 'next':\n to_save = pred[trunc:-trunc_l]\n elif status[i] == 'end':\n to_save = pred[trunc_l:]\n else:\n to_save = pred\n\n all_len += len(to_save)\n outpath = os.path.join(args.out_dir, data[i] + \".frame\")\n with open(outpath, \"a\") as fout:\n for f in range(len(to_save)):\n fout.write('{0:0.4f}\\n'.format(to_save[f]))\n del test_batch\n if status[i] == 'end' or status[i] == 'single':\n logging.debug(f\"Overall length of prediction of {data[i]} is {all_len}!\")\n all_len = 0\n\n\nif __name__ == '__main__':\n main()\n", "path": "examples/asr/vad_infer.py"}], "after_files": [{"content": "# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nDuring inference, we perform frame-level prediction by two approaches: \n 1) shift the window of length time_length (e.g. 0.63s) by shift_length (e.g. 10ms) to generate the frame and use the prediction of the window to represent the label for the frame;\n [this script demonstrate how to do this approach]\n 2) generate predictions with overlapping input segments. Then a smoothing filter is applied to decide the label for a frame spanned by multiple segments. 
\n [get frame level prediction by this script and use vad_overlap_posterior.py in NeMo/scripts/voice_activity_detection\n One can also find posterior about converting frame level prediction \n to speech/no-speech segment in start and end times format in that script.]\n \n Image https://raw.githubusercontent.com/NVIDIA/NeMo/main/tutorials/asr/images/vad_post_overlap_diagram.png \n will help you understand this method.\n \nUsage:\npython vad_infer.py --vad_model=\"vad_marblenet\" --dataset=<FULL PATH OF MANIFEST TO BE PERFORMED INFERENCE ON> --out_dir='frame/demo' --time_length=0.63\n\n\"\"\"\n\n\nimport json\nimport logging\nimport os\nfrom argparse import ArgumentParser\n\nimport torch\n\nfrom nemo.collections.asr.models import EncDecClassificationModel\nfrom nemo.collections.asr.parts.utils.vad_utils import get_vad_stream_status, prepare_manifest\nfrom nemo.utils import logging\n\ntry:\n from torch.cuda.amp import autocast\nexcept ImportError:\n from contextlib import contextmanager\n\n @contextmanager\n def autocast(enabled=None):\n yield\n\n\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n\n\ndef main():\n parser = ArgumentParser()\n parser.add_argument(\n \"--vad_model\", type=str, default=\"MatchboxNet-VAD-3x2\", required=False, help=\"Pass: 'MatchboxNet-VAD-3x2'\"\n )\n parser.add_argument(\n \"--dataset\",\n type=str,\n required=True,\n help=\"Path of json file of evaluation data. Audio files should have unique names.\",\n )\n parser.add_argument(\"--out_dir\", type=str, default=\"vad_frame\", help=\"Dir of your vad outputs\")\n parser.add_argument(\"--time_length\", type=float, default=0.63)\n parser.add_argument(\"--shift_length\", type=float, default=0.01)\n parser.add_argument(\"--normalize_audio\", type=bool, default=False)\n parser.add_argument(\"--num_workers\", type=float, default=20)\n parser.add_argument(\"--split_duration\", type=float, default=400)\n parser.add_argument(\n \"--dont_auto_split\",\n default=False,\n action='store_true',\n help=\"Whether to automatically split manifest entry by split_duration to avoid potential CUDA out of memory issue.\",\n )\n\n args = parser.parse_args()\n\n torch.set_grad_enabled(False)\n\n if args.vad_model.endswith('.nemo'):\n logging.info(f\"Using local VAD model from {args.vad_model}\")\n vad_model = EncDecClassificationModel.restore_from(restore_path=args.vad_model)\n else:\n logging.info(f\"Using NGC cloud VAD model {args.vad_model}\")\n vad_model = EncDecClassificationModel.from_pretrained(model_name=args.vad_model)\n\n if not os.path.exists(args.out_dir):\n os.mkdir(args.out_dir)\n\n # Prepare manifest for streaming VAD\n manifest_vad_input = args.dataset\n if not args.dont_auto_split:\n logging.info(\"Split long audio file to avoid CUDA memory issue\")\n logging.debug(\"Try smaller split_duration if you still have CUDA memory issue\")\n config = {\n 'manifest_filepath': manifest_vad_input,\n 'time_length': args.time_length,\n 'split_duration': args.split_duration,\n 'num_workers': args.num_workers,\n }\n manifest_vad_input = prepare_manifest(config)\n else:\n logging.warning(\n \"If you encounter CUDA memory issue, try splitting manifest entry by split_duration to avoid it.\"\n )\n\n # setup_test_data\n vad_model.setup_test_data(\n test_data_config={\n 'vad_stream': True,\n 'sample_rate': 16000,\n 'manifest_filepath': manifest_vad_input,\n 'labels': ['infer',],\n 'num_workers': args.num_workers,\n 'shuffle': False,\n 'time_length': args.time_length,\n 'shift_length': args.shift_length,\n 
'trim_silence': False,\n 'normalize_audio': args.normalize_audio,\n }\n )\n\n vad_model = vad_model.to(device)\n vad_model.eval()\n\n time_unit = int(args.time_length / args.shift_length)\n trunc = int(time_unit / 2)\n trunc_l = time_unit - trunc\n all_len = 0\n\n data = []\n for line in open(manifest_vad_input, 'r'):\n file = json.loads(line)['audio_filepath'].split(\"/\")[-1]\n data.append(file.split(\".wav\")[0])\n logging.info(f\"Inference on {len(data)} audio files/json lines!\")\n\n status = get_vad_stream_status(data)\n for i, test_batch in enumerate(vad_model.test_dataloader()):\n test_batch = [x.to(device) for x in test_batch]\n with autocast():\n log_probs = vad_model(input_signal=test_batch[0], input_signal_length=test_batch[1])\n probs = torch.softmax(log_probs, dim=-1)\n pred = probs[:, 1]\n\n if status[i] == 'start':\n to_save = pred[:-trunc]\n elif status[i] == 'next':\n to_save = pred[trunc:-trunc_l]\n elif status[i] == 'end':\n to_save = pred[trunc_l:]\n else:\n to_save = pred\n\n all_len += len(to_save)\n outpath = os.path.join(args.out_dir, data[i] + \".frame\")\n with open(outpath, \"a\") as fout:\n for f in range(len(to_save)):\n fout.write('{0:0.4f}\\n'.format(to_save[f]))\n del test_batch\n if status[i] == 'end' or status[i] == 'single':\n logging.debug(f\"Overall length of prediction of {data[i]} is {all_len}!\")\n all_len = 0\n\n\nif __name__ == '__main__':\n main()\n", "path": "examples/asr/vad_infer.py"}]} | 2,683 | 137 |
gh_patches_debug_38538 | rasdani/github-patches | git_diff | sunpy__sunpy-7451 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No map source to handle GONG H-alpha data
### Describe the bug
Originally raised by nawinnga in the sunpy chat room.
There is no map source to do the necessary translation from the GONG H-alpha headers to the standard metadata that `Map` can use (related to #6653 and #6655).
It doesn't look like it's trivial to find/extract/calculate the necessary WCS information (e.g. PCi_j + CDELT or CDi_j ...) without some reference to what the FITS headers mean; there is some info on [this NSO page](https://nso.edu/data/nisp-data/h-alpha/)
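One possible starting point (a rough sketch, not a confirmed reading of the keywords): the header carries `SOLAR-R`, `FNDLMBMI` and `FNDLMBMA`, which look like the solar radius in arcsec and the fitted limb semi-axes in pixels, so a plate scale could plausibly be derived as below, using the `hdul` opened in the reproduction section; the keyword interpretation is an assumption.
```python
import astropy.units as u

# Sketch only: assumes SOLAR-R is the solar radius in arcsec and
# FNDLMBMI / FNDLMBMA are the fitted limb semi-axes in pixels.
solar_r = hdul[1].header['SOLAR-R'] * u.arcsec
scale_x = solar_r / (hdul[1].header['FNDLMBMI'] * u.pix)
scale_y = solar_r / (hdul[1].header['FNDLMBMA'] * u.pix)
```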
### To Reproduce
```
from astropy.io import fits
import sunpy
# to get around issue #6655
hdul = fits.open('ftp://gong2.nso.edu/HA/haf/202203/20220318/20220318000050Bh.fits.fz')
sunpy.map.Map((hdul[1].data, hdul[1].header))
Map((hdul[1].data, hdul[1].header))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/shane/.virtualenvs/temp/lib/python3.9/site-packages/sunpy/map/map_factory.py", line 331, in __call__
new_map = self._check_registered_widgets(data, meta, **kwargs)
File "/Users/shane/.virtualenvs/temp/lib/python3.9/site-packages/sunpy/map/map_factory.py", line 381, in _check_registered_widgets
return WidgetType(data, meta, **kwargs)
File "/Users/shane/.virtualenvs/temp/lib/python3.9/site-packages/sunpy/map/mapbase.py", line 232, in __init__
self._validate_meta()
File "/Users/shane/.virtualenvs/temp/lib/python3.9/site-packages/sunpy/map/mapbase.py", line 1409, in _validate_meta
raise MapMetaValidationError('\n'.join(err_message))
sunpy.map.mapbase.MapMetaValidationError: Image coordinate units for axis 1 not present in metadata.
Image coordinate units for axis 2 not present in metadata.
See https://docs.sunpy.org/en/stable/code_ref/map.html#fixing-map-metadata for instructions on how to add missing metadata.
```
Setting the cdelt and cunit at least lets the map be loaded, but the WCS is still non-standard
```
hdul[1].header['cdelt1'] = 1
hdul[1].header['cdelt2'] = 1
hdul[1].header['cunit1'] = 'arcsec'
hdul[1].header['cunit2'] = 'arcsec'
mm = Map((hdul[1].data, hdul[1].header))
WARNING: SunpyMetadataWarning: Missing metadata for observer: assuming Earth-based observer.
For frame 'heliographic_stonyhurst' the following metadata is missing: hgln_obs,hglt_obs,dsun_obs
For frame 'heliographic_carrington' the following metadata is missing: crlt_obs,dsun_obs,crln_obs
[sunpy.map.mapbase]
WARNING: SunpyUserWarning: Could not determine coordinate frame from map metadata.
Could not determine celestial frame corresponding to the specified WCS object [sunpy.map.mapbase]
<sunpy.map.mapbase.GenericMap object at 0x1297eb160>
SunPy Map
---------
Observatory: NSO-GONG
Instrument:
Detector:
Measurement: 6562.808
Wavelength: 6562.808
Observation Date: 2022-03-18 00:00:50
Exposure Time: Unknown
Dimension: [2048. 2048.] pix
Coordinate System: Unknown
Scale: [1. 1.] arcsec / pix
Reference Pixel: [1023. 1023.] pix
Reference Coord: [0. 0.] arcsec
array([[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]], dtype=int16)
```
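For the missing observer metadata, the header also appears to carry the site's geodetic position (`SITE-LAT`/`SITE-LON`), which could in principle be turned into an observer coordinate. A rough sketch, assuming those keywords are degrees and reusing `mm` from the snippet above:
```python
import astropy.units as u
from astropy.coordinates import EarthLocation, SkyCoord
import sunpy.coordinates  # registers the solar frames for SkyCoord

# Sketch only: assumes SITE-LAT / SITE-LON are the observing site's
# geodetic latitude / longitude in degrees.
loc = EarthLocation.from_geodetic(lat=hdul[1].header['SITE-LAT'] * u.deg,
                                  lon=hdul[1].header['SITE-LON'] * u.deg)
observer = SkyCoord(loc.get_itrs(mm.date)).heliographic_stonyhurst
```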
### Screenshots
_No response_
### System Details
OS: Mac OS 10.16 Arch: 64bit, (i386)
python3.9
sunpy: 4.1.0
astropy: 5.1.1
numpy: 1.23.5
### Installation method
pip
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sunpy/map/sources/gong.py`
Content:
```
1 """
2 GONG Map subclass definitions
3 """
4 import numpy as np
5
6 import astropy.units as u
7 from astropy.time import Time
8
9 from sunpy.coordinates import get_earth
10 from sunpy.map import GenericMap
11
12 __all__ = ['GONGSynopticMap']
13
14 from sunpy.map.mapbase import SpatialPair
15
16
17 class GONGSynopticMap(GenericMap):
18 """
19 GONG Synoptic Map.
20
21 The Global Oscillation Network Group (GONG) operates a six-station network of velocity
22 imagers located around the Earth that observe the Sun nearly continuously. GONG
23 produces hourly photospheric magnetograms using the Ni I 676.8 nm spectral line with an
24 array of 242×256 pixels covering the solar disk. These magnetograms are used to derive
25 synoptic maps which show a full-surface picture of the solar magnetic field.
26
27 Notes
28 -----
29 If you have ``pfsspy`` installed this map source will be used instead of the one built into ``pfsspy``.
30
31 References
32 ----------
33 * `GONG Page <https://gong.nso.edu/>`_
34 * `Magnetogram Synoptic Map Images Page <https://gong.nso.edu/data/magmap/>`_
35 * `FITS header keywords <https://gong.nso.edu/data/DMAC_documentation/General/fitsdesc.html>`_
36 * `Instrument Paper (pp. 203–208) <https://inis.iaea.org/collection/NCLCollectionStore/_Public/20/062/20062491.pdf>`_
37 * `GONG+ Documentation <https://gong.nso.edu/data/DMAC_documentation/PipelineMap/GlobalMap.html>`_
38 """
39
40 @classmethod
41 def is_datasource_for(cls, data, header, **kwargs):
42 return (str(header.get('TELESCOP', '')).endswith('GONG') and
43 str(header.get('CTYPE1', '').startswith('CRLN')))
44
45 @property
46 def date(self):
47 return Time(f"{self.meta.get('date-obs')} {self.meta.get('time-obs')}")
48
49 @property
50 def scale(self):
51 # Since, this map uses the cylindrical equal-area (CEA) projection,
52 # the spacing should be modified to 180/pi times the original value
53 # Reference: Section 5.5, Thompson 2006
54 return SpatialPair(self.meta['cdelt1'] * self.spatial_units[0] / u.pixel,
55 self.meta['cdelt2'] * 180 / np.pi * self.spatial_units[0] / u.pixel)
56
57 @property
58 def spatial_units(self):
59 return SpatialPair(u.deg, u.deg)
60
61 @property
62 def observer_coordinate(self):
63 return get_earth(self.date)
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sunpy/map/sources/gong.py b/sunpy/map/sources/gong.py
--- a/sunpy/map/sources/gong.py
+++ b/sunpy/map/sources/gong.py
@@ -4,15 +4,25 @@
import numpy as np
import astropy.units as u
+from astropy.coordinates import EarthLocation, SkyCoord
from astropy.time import Time
from sunpy.coordinates import get_earth
from sunpy.map import GenericMap
-__all__ = ['GONGSynopticMap']
+__all__ = ['GONGSynopticMap', 'GONGHalphaMap']
from sunpy.map.mapbase import SpatialPair
+_SITE_NAMES = {
+ 'LE': 'Learmonth',
+ 'UD': 'Udaipur',
+ 'TD': 'El Teide',
+ 'CT': 'Cerro Tololo',
+ 'BB': 'Big Bear',
+ 'ML': 'Mauna Loa'
+}
+
class GONGSynopticMap(GenericMap):
"""
@@ -40,7 +50,7 @@
@classmethod
def is_datasource_for(cls, data, header, **kwargs):
return (str(header.get('TELESCOP', '')).endswith('GONG') and
- str(header.get('CTYPE1', '').startswith('CRLN')))
+ str(header.get('CTYPE1', '')).startswith('CRLN'))
@property
def date(self):
@@ -61,3 +71,58 @@
@property
def observer_coordinate(self):
return get_earth(self.date)
+
+
+class GONGHalphaMap(GenericMap):
+ """
+ GONG H-Alpha Map.
+
+ The Global Oscillation Network Group (GONG) operates a six-station network of H-alpha
+ imagers located around the Earth that observe the Sun nearly continuously.
+
+ References
+ ----------
+ * `GONG H-Alpha Page <https://nso.edu/data/nisp-data/h-alpha/>`_
+ * `GONG H-Alpha Observation Details <https://nispdata.nso.edu/webProdDesc2/presenter.php?file=halpha_fulldisk_images_overview.html&echoExact=0&name=Overview%20:%20GONG%20H-alpha%20Full-disk%20Images>`_
+ * `GONG Header Keywords <https://gong.nso.edu/data/HEADER_KEY.html>`_
+ * `DOI:/10.25668/as28-7p13 <https://doi.org/10.25668/as28-7p13>`_
+ """
+
+ @classmethod
+ def is_datasource_for(cls, data, header, **kwargs):
+ return (str(header.get('TELESCOP', '')).endswith('GONG') and
+ str(header.get('IMTYPE', '')).startswith('H-ALPHA'))
+
+
+ @property
+ def scale(self):
+ solar_r = self.meta['SOLAR-R'] * u.arcsec
+ return SpatialPair(solar_r / (self.meta['FNDLMBMI'] * u.pixel),
+ solar_r/ (self.meta['FNDLMBMA'] * u.pixel))
+
+ @property
+ def coordinate_system(self):
+ """
+ Coordinate system used
+
+ Overrides the values in the header which are not understood by Astropy WCS
+ """
+ return SpatialPair("HPLN-TAN", "HPLT-TAN")
+
+ @property
+ def nickname(self):
+ site = _SITE_NAMES.get(self.meta.get("sitename", ""), "UNKNOWN")
+ return f'{self.observatory}, {site}'
+
+ @property
+ def spatial_units(self):
+ return SpatialPair(u.deg, u.deg)
+
+ @property
+ def _earth_location(self):
+ """Location of the observatory on Earth"""
+ return EarthLocation.from_geodetic(lat=self.meta['site-lat'] * u.deg, lon=self.meta['site-lon'] * u.deg)
+
+ @property
+ def observer_coordinate(self):
+ return SkyCoord(self._earth_location.get_itrs(self.date)).heliographic_stonyhurst
| {"golden_diff": "diff --git a/sunpy/map/sources/gong.py b/sunpy/map/sources/gong.py\n--- a/sunpy/map/sources/gong.py\n+++ b/sunpy/map/sources/gong.py\n@@ -4,15 +4,25 @@\n import numpy as np\n \n import astropy.units as u\n+from astropy.coordinates import EarthLocation, SkyCoord\n from astropy.time import Time\n \n from sunpy.coordinates import get_earth\n from sunpy.map import GenericMap\n \n-__all__ = ['GONGSynopticMap']\n+__all__ = ['GONGSynopticMap', 'GONGHalphaMap']\n \n from sunpy.map.mapbase import SpatialPair\n \n+_SITE_NAMES = {\n+ 'LE': 'Learmonth',\n+ 'UD': 'Udaipur',\n+ 'TD': 'El Teide',\n+ 'CT': 'Cerro Tololo',\n+ 'BB': 'Big Bear',\n+ 'ML': 'Mauna Loa'\n+}\n+\n \n class GONGSynopticMap(GenericMap):\n \"\"\"\n@@ -40,7 +50,7 @@\n @classmethod\n def is_datasource_for(cls, data, header, **kwargs):\n return (str(header.get('TELESCOP', '')).endswith('GONG') and\n- str(header.get('CTYPE1', '').startswith('CRLN')))\n+ str(header.get('CTYPE1', '')).startswith('CRLN'))\n \n @property\n def date(self):\n@@ -61,3 +71,58 @@\n @property\n def observer_coordinate(self):\n return get_earth(self.date)\n+\n+\n+class GONGHalphaMap(GenericMap):\n+ \"\"\"\n+ GONG H-Alpha Map.\n+\n+ The Global Oscillation Network Group (GONG) operates a six-station network of H-alpha\n+ imagers located around the Earth that observe the Sun nearly continuously.\n+\n+ References\n+ ----------\n+ * `GONG H-Alpha Page <https://nso.edu/data/nisp-data/h-alpha/>`_\n+ * `GONG H-Alpha Observation Details <https://nispdata.nso.edu/webProdDesc2/presenter.php?file=halpha_fulldisk_images_overview.html&echoExact=0&name=Overview%20:%20GONG%20H-alpha%20Full-disk%20Images>`_\n+ * `GONG Header Keywords <https://gong.nso.edu/data/HEADER_KEY.html>`_\n+ * `DOI:/10.25668/as28-7p13 <https://doi.org/10.25668/as28-7p13>`_\n+ \"\"\"\n+\n+ @classmethod\n+ def is_datasource_for(cls, data, header, **kwargs):\n+ return (str(header.get('TELESCOP', '')).endswith('GONG') and\n+ str(header.get('IMTYPE', '')).startswith('H-ALPHA'))\n+\n+\n+ @property\n+ def scale(self):\n+ solar_r = self.meta['SOLAR-R'] * u.arcsec\n+ return SpatialPair(solar_r / (self.meta['FNDLMBMI'] * u.pixel),\n+ solar_r/ (self.meta['FNDLMBMA'] * u.pixel))\n+\n+ @property\n+ def coordinate_system(self):\n+ \"\"\"\n+ Coordinate system used\n+\n+ Overrides the values in the header which are not understood by Astropy WCS\n+ \"\"\"\n+ return SpatialPair(\"HPLN-TAN\", \"HPLT-TAN\")\n+\n+ @property\n+ def nickname(self):\n+ site = _SITE_NAMES.get(self.meta.get(\"sitename\", \"\"), \"UNKNOWN\")\n+ return f'{self.observatory}, {site}'\n+\n+ @property\n+ def spatial_units(self):\n+ return SpatialPair(u.deg, u.deg)\n+\n+ @property\n+ def _earth_location(self):\n+ \"\"\"Location of the observatory on Earth\"\"\"\n+ return EarthLocation.from_geodetic(lat=self.meta['site-lat'] * u.deg, lon=self.meta['site-lon'] * u.deg)\n+\n+ @property\n+ def observer_coordinate(self):\n+ return SkyCoord(self._earth_location.get_itrs(self.date)).heliographic_stonyhurst\n", "issue": "No map source to handle GONG H-alpha data\n### Describe the bug\r\nOriginally raised by nawinnga in the sunpy chat room.\r\n\r\nThere is no map source to do the necessary translation from the GONG H-alpha headers to standard meta data Map can use related to #6653 and #6655.\r\n\r\nIt doesn't look like its trivial to find/extract/calculate the necessary WCS information (e.g. PCi_j + CDELT or CDi_j ...) 
without some reference to what the fits headers mean some info on [this NSO page](https://nso.edu/data/nisp-data/h-alpha/)\r\n\r\n### To Reproduce\r\n\r\n```\r\nfrom astropy.io import fits\r\nimport sunpy\r\n\r\n# to get around issue #6655\r\nhdul = fits.open('ftp://gong2.nso.edu/HA/haf/202203/20220318/20220318000050Bh.fits.fz')\r\nsunpy.map.Map((hdul[1].data, hdul[1].header))\r\n\r\nMap((hdul[1].data, hdul[1].header))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/shane/.virtualenvs/temp/lib/python3.9/site-packages/sunpy/map/map_factory.py\", line 331, in __call__\r\n new_map = self._check_registered_widgets(data, meta, **kwargs)\r\n File \"/Users/shane/.virtualenvs/temp/lib/python3.9/site-packages/sunpy/map/map_factory.py\", line 381, in _check_registered_widgets\r\n return WidgetType(data, meta, **kwargs)\r\n File \"/Users/shane/.virtualenvs/temp/lib/python3.9/site-packages/sunpy/map/mapbase.py\", line 232, in __init__\r\n self._validate_meta()\r\n File \"/Users/shane/.virtualenvs/temp/lib/python3.9/site-packages/sunpy/map/mapbase.py\", line 1409, in _validate_meta\r\n raise MapMetaValidationError('\\n'.join(err_message))\r\nsunpy.map.mapbase.MapMetaValidationError: Image coordinate units for axis 1 not present in metadata.\r\nImage coordinate units for axis 2 not present in metadata.\r\nSee https://docs.sunpy.org/en/stable/code_ref/map.html#fixing-map-metadata for instructions on how to add missing metadata.\r\n```\r\nSetting the cdelt and cunit at least get lets the map be loaded but the WCS is still non-standard\r\n```\r\nhdul[1].header['cdelt1'] = 1\r\nhdul[1].header['cdelt2'] = 1\r\nhdul[1].header['cunit1'] = 'arcsec'\r\nhdul[1].header['cunit2'] = 'arcsec'\r\n\r\nmm = Map((hdul[1].data, hdul[1].header))\r\nWARNING: SunpyMetadataWarning: Missing metadata for observer: assuming Earth-based observer.\r\nFor frame 'heliographic_stonyhurst' the following metadata is missing: hgln_obs,hglt_obs,dsun_obs\r\nFor frame 'heliographic_carrington' the following metadata is missing: crlt_obs,dsun_obs,crln_obs\r\n [sunpy.map.mapbase]\r\nWARNING: SunpyUserWarning: Could not determine coordinate frame from map metadata.\r\nCould not determine celestial frame corresponding to the specified WCS object [sunpy.map.mapbase]\r\n<sunpy.map.mapbase.GenericMap object at 0x1297eb160>\r\nSunPy Map\r\n---------\r\nObservatory:\t\t NSO-GONG\r\nInstrument:\r\nDetector:\r\nMeasurement:\t\t 6562.808\r\nWavelength:\t\t 6562.808\r\nObservation Date:\t 2022-03-18 00:00:50\r\nExposure Time:\t\t Unknown\r\nDimension:\t\t [2048. 2048.] pix\r\nCoordinate System:\t Unknown\r\nScale:\t\t\t [1. 1.] arcsec / pix\r\nReference Pixel:\t [1023. 1023.] pix\r\nReference Coord:\t [0. 0.] 
arcsec\r\narray([[0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n ...,\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0]], dtype=int16)\r\n```\r\n\r\n### Screenshots\r\n\r\n_No response_\r\n\r\n### System Details\r\n\r\nOS: Mac OS 10.16 Arch: 64bit, (i386)\r\npython3.9\r\nsunpy: 4.1.0\r\nastropy: 5.1.1\r\nnumpy: 1.23.5\r\n\r\n### Installation method\r\n\r\npip\n", "before_files": [{"content": "\"\"\"\nGONG Map subclass definitions\n\"\"\"\nimport numpy as np\n\nimport astropy.units as u\nfrom astropy.time import Time\n\nfrom sunpy.coordinates import get_earth\nfrom sunpy.map import GenericMap\n\n__all__ = ['GONGSynopticMap']\n\nfrom sunpy.map.mapbase import SpatialPair\n\n\nclass GONGSynopticMap(GenericMap):\n \"\"\"\n GONG Synoptic Map.\n\n The Global Oscillation Network Group (GONG) operates a six-station network of velocity\n imagers located around the Earth that observe the Sun nearly continuously. GONG\n produces hourly photospheric magnetograms using the Ni I 676.8 nm spectral line with an\n array of 242\u00d7256 pixels covering the solar disk. These magnetograms are used to derive\n synoptic maps which show a full-surface picture of the solar magnetic field.\n\n Notes\n -----\n If you have ``pfsspy`` installed this map source will be used instead of the one built into ``pfsspy``.\n\n References\n ----------\n * `GONG Page <https://gong.nso.edu/>`_\n * `Magnetogram Synoptic Map Images Page <https://gong.nso.edu/data/magmap/>`_\n * `FITS header keywords <https://gong.nso.edu/data/DMAC_documentation/General/fitsdesc.html>`_\n * `Instrument Paper (pp. 203\u2013208) <https://inis.iaea.org/collection/NCLCollectionStore/_Public/20/062/20062491.pdf>`_\n * `GONG+ Documentation <https://gong.nso.edu/data/DMAC_documentation/PipelineMap/GlobalMap.html>`_\n \"\"\"\n\n @classmethod\n def is_datasource_for(cls, data, header, **kwargs):\n return (str(header.get('TELESCOP', '')).endswith('GONG') and\n str(header.get('CTYPE1', '').startswith('CRLN')))\n\n @property\n def date(self):\n return Time(f\"{self.meta.get('date-obs')} {self.meta.get('time-obs')}\")\n\n @property\n def scale(self):\n # Since, this map uses the cylindrical equal-area (CEA) projection,\n # the spacing should be modified to 180/pi times the original value\n # Reference: Section 5.5, Thompson 2006\n return SpatialPair(self.meta['cdelt1'] * self.spatial_units[0] / u.pixel,\n self.meta['cdelt2'] * 180 / np.pi * self.spatial_units[0] / u.pixel)\n\n @property\n def spatial_units(self):\n return SpatialPair(u.deg, u.deg)\n\n @property\n def observer_coordinate(self):\n return get_earth(self.date)\n", "path": "sunpy/map/sources/gong.py"}], "after_files": [{"content": "\"\"\"\nGONG Map subclass definitions\n\"\"\"\nimport numpy as np\n\nimport astropy.units as u\nfrom astropy.coordinates import EarthLocation, SkyCoord\nfrom astropy.time import Time\n\nfrom sunpy.coordinates import get_earth\nfrom sunpy.map import GenericMap\n\n__all__ = ['GONGSynopticMap', 'GONGHalphaMap']\n\nfrom sunpy.map.mapbase import SpatialPair\n\n_SITE_NAMES = {\n 'LE': 'Learmonth',\n 'UD': 'Udaipur',\n 'TD': 'El Teide',\n 'CT': 'Cerro Tololo',\n 'BB': 'Big Bear',\n 'ML': 'Mauna Loa'\n}\n\n\nclass GONGSynopticMap(GenericMap):\n \"\"\"\n GONG Synoptic Map.\n\n The Global Oscillation Network Group (GONG) operates a six-station network of velocity\n imagers located around the Earth that observe the Sun nearly continuously. 
GONG\n produces hourly photospheric magnetograms using the Ni I 676.8 nm spectral line with an\n array of 242\u00d7256 pixels covering the solar disk. These magnetograms are used to derive\n synoptic maps which show a full-surface picture of the solar magnetic field.\n\n Notes\n -----\n If you have ``pfsspy`` installed this map source will be used instead of the one built into ``pfsspy``.\n\n References\n ----------\n * `GONG Page <https://gong.nso.edu/>`_\n * `Magnetogram Synoptic Map Images Page <https://gong.nso.edu/data/magmap/>`_\n * `FITS header keywords <https://gong.nso.edu/data/DMAC_documentation/General/fitsdesc.html>`_\n * `Instrument Paper (pp. 203\u2013208) <https://inis.iaea.org/collection/NCLCollectionStore/_Public/20/062/20062491.pdf>`_\n * `GONG+ Documentation <https://gong.nso.edu/data/DMAC_documentation/PipelineMap/GlobalMap.html>`_\n \"\"\"\n\n @classmethod\n def is_datasource_for(cls, data, header, **kwargs):\n return (str(header.get('TELESCOP', '')).endswith('GONG') and\n str(header.get('CTYPE1', '')).startswith('CRLN'))\n\n @property\n def date(self):\n return Time(f\"{self.meta.get('date-obs')} {self.meta.get('time-obs')}\")\n\n @property\n def scale(self):\n # Since, this map uses the cylindrical equal-area (CEA) projection,\n # the spacing should be modified to 180/pi times the original value\n # Reference: Section 5.5, Thompson 2006\n return SpatialPair(self.meta['cdelt1'] * self.spatial_units[0] / u.pixel,\n self.meta['cdelt2'] * 180 / np.pi * self.spatial_units[0] / u.pixel)\n\n @property\n def spatial_units(self):\n return SpatialPair(u.deg, u.deg)\n\n @property\n def observer_coordinate(self):\n return get_earth(self.date)\n\n\nclass GONGHalphaMap(GenericMap):\n \"\"\"\n GONG H-Alpha Map.\n\n The Global Oscillation Network Group (GONG) operates a six-station network of H-alpha\n imagers located around the Earth that observe the Sun nearly continuously.\n\n References\n ----------\n * `GONG H-Alpha Page <https://nso.edu/data/nisp-data/h-alpha/>`_\n * `GONG H-Alpha Observation Details <https://nispdata.nso.edu/webProdDesc2/presenter.php?file=halpha_fulldisk_images_overview.html&echoExact=0&name=Overview%20:%20GONG%20H-alpha%20Full-disk%20Images>`_\n * `GONG Header Keywords <https://gong.nso.edu/data/HEADER_KEY.html>`_\n * `DOI:/10.25668/as28-7p13 <https://doi.org/10.25668/as28-7p13>`_\n \"\"\"\n\n @classmethod\n def is_datasource_for(cls, data, header, **kwargs):\n return (str(header.get('TELESCOP', '')).endswith('GONG') and\n str(header.get('IMTYPE', '')).startswith('H-ALPHA'))\n\n\n @property\n def scale(self):\n solar_r = self.meta['SOLAR-R'] * u.arcsec\n return SpatialPair(solar_r / (self.meta['FNDLMBMI'] * u.pixel),\n solar_r/ (self.meta['FNDLMBMA'] * u.pixel))\n\n @property\n def coordinate_system(self):\n \"\"\"\n Coordinate system used\n\n Overrides the values in the header which are not understood by Astropy WCS\n \"\"\"\n return SpatialPair(\"HPLN-TAN\", \"HPLT-TAN\")\n\n @property\n def nickname(self):\n site = _SITE_NAMES.get(self.meta.get(\"sitename\", \"\"), \"UNKNOWN\")\n return f'{self.observatory}, {site}'\n\n @property\n def spatial_units(self):\n return SpatialPair(u.deg, u.deg)\n\n @property\n def _earth_location(self):\n \"\"\"Location of the observatory on Earth\"\"\"\n return EarthLocation.from_geodetic(lat=self.meta['site-lat'] * u.deg, lon=self.meta['site-lon'] * u.deg)\n\n @property\n def observer_coordinate(self):\n return SkyCoord(self._earth_location.get_itrs(self.date)).heliographic_stonyhurst\n", "path": 
"sunpy/map/sources/gong.py"}]} | 2,131 | 954 |
gh_patches_debug_35167 | rasdani/github-patches | git_diff | translate__pootle-4148 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Translation of the Report Email
I would like to translate the wording of the report email. If you could integrate this kind of template into the po file, it would be amazing... naturally with the title of the email included, which would be `[(name-site)] Unit #(num) ((lang))`
```
Username: (username)
Current URL: (url)
IP address: (ip_address)
User-Agent: (user_agent)
Unit: (url_string)
Source: (source_string)
Current translation:
Your question or comment:
```
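For example, something along these lines on the Django side (a rough sketch; the template names and the idea of moving the strings into translatable templates are a suggestion, not existing Pootle code):
```python
from django.template.loader import render_to_string

# Illustrative only: the template paths are assumptions.
subject = render_to_string('contact_form/report_form_subject.txt', {
    'unit': unit,
    'language': unit.store.translation_project.language.code,
})
body = render_to_string('contact_form/report_form_body.txt', {
    'unit': unit,
    'unit_absolute_url': unit_absolute_url,
})
```
The templates could then use `{% blocktrans %}` tags so `makemessages` picks the strings up into the po files.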
Thx in advance ;)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/contact/views.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) Pootle contributors.
5 #
6 # This file is a part of the Pootle project. It is distributed under the GPL3
7 # or later license. See the LICENSE file for a copy of the license and the
8 # AUTHORS file for copyright and authorship information.
9
10 from django.core.urlresolvers import reverse
11 from django.views.generic import TemplateView
12
13 from contact_form.views import ContactFormView as OriginalContactFormView
14
15 from pootle.core.views import AjaxResponseMixin
16
17 from .forms import ContactForm, ReportForm
18
19
20 SUBJECT_TEMPLATE = 'Unit #%d (%s)'
21 BODY_TEMPLATE = '''
22 Unit: %s
23
24 Source: %s
25
26 Current translation: %s
27
28 Your question or comment:
29 '''
30
31
32 class ContactFormTemplateView(TemplateView):
33 template_name = 'contact_form/contact_form.html'
34
35
36 class ContactFormView(AjaxResponseMixin, OriginalContactFormView):
37 form_class = ContactForm
38 template_name = 'contact_form/xhr_contact_form.html'
39
40 def get_context_data(self, **kwargs):
41 ctx = super(ContactFormView, self).get_context_data(**kwargs)
42 # Provide the form action URL to use in the template that renders the
43 # contact dialog.
44 ctx.update({
45 'contact_form_url': reverse('pootle-contact-xhr'),
46 })
47 return ctx
48
49 def get_initial(self):
50 initial = super(ContactFormView, self).get_initial()
51
52 user = self.request.user
53 if user.is_authenticated():
54 initial.update({
55 'name': user.full_name,
56 'email': user.email,
57 })
58
59 return initial
60
61 def get_success_url(self):
62 # XXX: This is unused. We don't need a `/contact/sent/` URL, but
63 # the parent :cls:`ContactView` enforces us to set some value here
64 return reverse('pootle-contact')
65
66
67 class ReportFormView(ContactFormView):
68 form_class = ReportForm
69
70 def get_context_data(self, **kwargs):
71 ctx = super(ReportFormView, self).get_context_data(**kwargs)
72 # Provide the form action URL to use in the template that renders the
73 # contact dialog.
74 ctx.update({
75 'contact_form_url': reverse('pootle-contact-report-error'),
76 })
77 return ctx
78
79 def get_initial(self):
80 initial = super(ReportFormView, self).get_initial()
81
82 report = self.request.GET.get('report', False)
83 if report:
84 try:
85 from pootle_store.models import Unit
86 uid = int(report)
87 try:
88 unit = Unit.objects.select_related(
89 'store__translation_project__project',
90 ).get(id=uid)
91 if unit.is_accessible_by(self.request.user):
92 unit_absolute_url = self.request.build_absolute_uri(
93 unit.get_translate_url()
94 )
95 initial.update({
96 'subject': SUBJECT_TEMPLATE % (
97 unit.id,
98 unit.store.translation_project.language.code
99 ),
100 'body': BODY_TEMPLATE % (
101 unit_absolute_url,
102 unit.source,
103 unit.target
104 ),
105 'report_email': unit.store.translation_project \
106 .project.report_email,
107 })
108 except Unit.DoesNotExist:
109 pass
110 except ValueError:
111 pass
112
113 return initial
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/contact/views.py b/pootle/apps/contact/views.py
--- a/pootle/apps/contact/views.py
+++ b/pootle/apps/contact/views.py
@@ -8,6 +8,7 @@
# AUTHORS file for copyright and authorship information.
from django.core.urlresolvers import reverse
+from django.template.loader import render_to_string
from django.views.generic import TemplateView
from contact_form.views import ContactFormView as OriginalContactFormView
@@ -17,18 +18,6 @@
from .forms import ContactForm, ReportForm
-SUBJECT_TEMPLATE = 'Unit #%d (%s)'
-BODY_TEMPLATE = '''
-Unit: %s
-
-Source: %s
-
-Current translation: %s
-
-Your question or comment:
-'''
-
-
class ContactFormTemplateView(TemplateView):
template_name = 'contact_form/contact_form.html'
@@ -93,15 +82,18 @@
unit.get_translate_url()
)
initial.update({
- 'subject': SUBJECT_TEMPLATE % (
- unit.id,
- unit.store.translation_project.language.code
- ),
- 'body': BODY_TEMPLATE % (
- unit_absolute_url,
- unit.source,
- unit.target
- ),
+ 'subject': render_to_string(
+ 'contact_form/report_form_subject.txt', {
+ 'unit': unit,
+ 'language': unit.store \
+ .translation_project \
+ .language.code,
+ }),
+ 'body': render_to_string(
+ 'contact_form/report_form_body.txt', {
+ 'unit': unit,
+ 'unit_absolute_url': unit_absolute_url,
+ }),
'report_email': unit.store.translation_project \
.project.report_email,
})
| {"golden_diff": "diff --git a/pootle/apps/contact/views.py b/pootle/apps/contact/views.py\n--- a/pootle/apps/contact/views.py\n+++ b/pootle/apps/contact/views.py\n@@ -8,6 +8,7 @@\n # AUTHORS file for copyright and authorship information.\n \n from django.core.urlresolvers import reverse\n+from django.template.loader import render_to_string\n from django.views.generic import TemplateView\n \n from contact_form.views import ContactFormView as OriginalContactFormView\n@@ -17,18 +18,6 @@\n from .forms import ContactForm, ReportForm\n \n \n-SUBJECT_TEMPLATE = 'Unit #%d (%s)'\n-BODY_TEMPLATE = '''\n-Unit: %s\n-\n-Source: %s\n-\n-Current translation: %s\n-\n-Your question or comment:\n-'''\n-\n-\n class ContactFormTemplateView(TemplateView):\n template_name = 'contact_form/contact_form.html'\n \n@@ -93,15 +82,18 @@\n unit.get_translate_url()\n )\n initial.update({\n- 'subject': SUBJECT_TEMPLATE % (\n- unit.id,\n- unit.store.translation_project.language.code\n- ),\n- 'body': BODY_TEMPLATE % (\n- unit_absolute_url,\n- unit.source,\n- unit.target\n- ),\n+ 'subject': render_to_string(\n+ 'contact_form/report_form_subject.txt', {\n+ 'unit': unit,\n+ 'language': unit.store \\\n+ .translation_project \\\n+ .language.code,\n+ }),\n+ 'body': render_to_string(\n+ 'contact_form/report_form_body.txt', {\n+ 'unit': unit,\n+ 'unit_absolute_url': unit_absolute_url,\n+ }),\n 'report_email': unit.store.translation_project \\\n .project.report_email,\n })\n", "issue": "Translation of the Report Email\nI would like to translate the words of the report email, if you could integrate this kind of template on the po file, it would be amazing... naturally title of the email included, which it would be `[(name-site)] Unit #(num) ((lang))`\n\n```\nUsername: (username)\nCurrent URL: (url)\nIP address: (ip_address)\nUser-Agent: (user_agent)\n\nUnit: (url_string)\n\nSource: (source_string)\n\nCurrent translation: \n\nYour question or comment:\n```\n\nThx in advance ;)\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.core.urlresolvers import reverse\nfrom django.views.generic import TemplateView\n\nfrom contact_form.views import ContactFormView as OriginalContactFormView\n\nfrom pootle.core.views import AjaxResponseMixin\n\nfrom .forms import ContactForm, ReportForm\n\n\nSUBJECT_TEMPLATE = 'Unit #%d (%s)'\nBODY_TEMPLATE = '''\nUnit: %s\n\nSource: %s\n\nCurrent translation: %s\n\nYour question or comment:\n'''\n\n\nclass ContactFormTemplateView(TemplateView):\n template_name = 'contact_form/contact_form.html'\n\n\nclass ContactFormView(AjaxResponseMixin, OriginalContactFormView):\n form_class = ContactForm\n template_name = 'contact_form/xhr_contact_form.html'\n\n def get_context_data(self, **kwargs):\n ctx = super(ContactFormView, self).get_context_data(**kwargs)\n # Provide the form action URL to use in the template that renders the\n # contact dialog.\n ctx.update({\n 'contact_form_url': reverse('pootle-contact-xhr'),\n })\n return ctx\n\n def get_initial(self):\n initial = super(ContactFormView, self).get_initial()\n\n user = self.request.user\n if user.is_authenticated():\n initial.update({\n 'name': user.full_name,\n 'email': user.email,\n })\n\n return initial\n\n def get_success_url(self):\n # XXX: This is unused. 
We don't need a `/contact/sent/` URL, but\n # the parent :cls:`ContactView` enforces us to set some value here\n return reverse('pootle-contact')\n\n\nclass ReportFormView(ContactFormView):\n form_class = ReportForm\n\n def get_context_data(self, **kwargs):\n ctx = super(ReportFormView, self).get_context_data(**kwargs)\n # Provide the form action URL to use in the template that renders the\n # contact dialog.\n ctx.update({\n 'contact_form_url': reverse('pootle-contact-report-error'),\n })\n return ctx\n\n def get_initial(self):\n initial = super(ReportFormView, self).get_initial()\n\n report = self.request.GET.get('report', False)\n if report:\n try:\n from pootle_store.models import Unit\n uid = int(report)\n try:\n unit = Unit.objects.select_related(\n 'store__translation_project__project',\n ).get(id=uid)\n if unit.is_accessible_by(self.request.user):\n unit_absolute_url = self.request.build_absolute_uri(\n unit.get_translate_url()\n )\n initial.update({\n 'subject': SUBJECT_TEMPLATE % (\n unit.id,\n unit.store.translation_project.language.code\n ),\n 'body': BODY_TEMPLATE % (\n unit_absolute_url,\n unit.source,\n unit.target\n ),\n 'report_email': unit.store.translation_project \\\n .project.report_email,\n })\n except Unit.DoesNotExist:\n pass\n except ValueError:\n pass\n\n return initial\n", "path": "pootle/apps/contact/views.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.core.urlresolvers import reverse\nfrom django.template.loader import render_to_string\nfrom django.views.generic import TemplateView\n\nfrom contact_form.views import ContactFormView as OriginalContactFormView\n\nfrom pootle.core.views import AjaxResponseMixin\n\nfrom .forms import ContactForm, ReportForm\n\n\nclass ContactFormTemplateView(TemplateView):\n template_name = 'contact_form/contact_form.html'\n\n\nclass ContactFormView(AjaxResponseMixin, OriginalContactFormView):\n form_class = ContactForm\n template_name = 'contact_form/xhr_contact_form.html'\n\n def get_context_data(self, **kwargs):\n ctx = super(ContactFormView, self).get_context_data(**kwargs)\n # Provide the form action URL to use in the template that renders the\n # contact dialog.\n ctx.update({\n 'contact_form_url': reverse('pootle-contact-xhr'),\n })\n return ctx\n\n def get_initial(self):\n initial = super(ContactFormView, self).get_initial()\n\n user = self.request.user\n if user.is_authenticated():\n initial.update({\n 'name': user.full_name,\n 'email': user.email,\n })\n\n return initial\n\n def get_success_url(self):\n # XXX: This is unused. 
We don't need a `/contact/sent/` URL, but\n # the parent :cls:`ContactView` enforces us to set some value here\n return reverse('pootle-contact')\n\n\nclass ReportFormView(ContactFormView):\n form_class = ReportForm\n\n def get_context_data(self, **kwargs):\n ctx = super(ReportFormView, self).get_context_data(**kwargs)\n # Provide the form action URL to use in the template that renders the\n # contact dialog.\n ctx.update({\n 'contact_form_url': reverse('pootle-contact-report-error'),\n })\n return ctx\n\n def get_initial(self):\n initial = super(ReportFormView, self).get_initial()\n\n report = self.request.GET.get('report', False)\n if report:\n try:\n from pootle_store.models import Unit\n uid = int(report)\n try:\n unit = Unit.objects.select_related(\n 'store__translation_project__project',\n ).get(id=uid)\n if unit.is_accessible_by(self.request.user):\n unit_absolute_url = self.request.build_absolute_uri(\n unit.get_translate_url()\n )\n initial.update({\n 'subject': render_to_string(\n 'contact_form/report_form_subject.txt', {\n 'unit': unit,\n 'language': unit.store \\\n .translation_project \\\n .language.code,\n }),\n 'body': render_to_string(\n 'contact_form/report_form_body.txt', {\n 'unit': unit,\n 'unit_absolute_url': unit_absolute_url,\n }),\n 'report_email': unit.store.translation_project \\\n .project.report_email,\n })\n except Unit.DoesNotExist:\n pass\n except ValueError:\n pass\n\n return initial\n", "path": "pootle/apps/contact/views.py"}]} | 1,325 | 385 |
gh_patches_debug_16638 | rasdani/github-patches | git_diff | python-poetry__poetry-6338 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`poetry cache clear` no longer respects `--no-interaction` flag
<!--
Hi there! Thank you for discovering and submitting an issue.
Before you submit this; let's make sure of a few things.
Please make sure the following boxes are ticked if they are correct.
If not, please try and fulfill these first.
-->
<!-- Checked checkbox should look like this: [x] -->
- [x] I am on the [latest](https://github.com/python-poetry/poetry/releases/latest) Poetry version.
- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
<!--
Once those are done, if you're able to fill in the following list with your information,
it'd be very helpful to whoever handles the issue.
-->
- **OS version and name**: Ubuntu 22.04
- **Poetry version**: 1.2.0
- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: <!-- Gist Link Here -->
## Issue
<!-- Now feel free to write your issue, but please be descriptive! Thanks again 🙌 ❤️ -->
Since poetry version 1.2.0, the `poetry cache clear` command no longer respects the `--no-interaction` flag:
```
$ poetry cache clear --all --no-interaction .
Delete 1882 entries? (yes/no) [no] ^C
```
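The prompt comes from the `self.confirm(...)` calls in `cache/clear.py` (shown below), which are invoked without an explicit default answer. A hedged sketch of the kind of change that would give a non-interactive run something to fall back to, assuming cleo's `confirm(question, default)` signature:
```python
# Sketch: pass an explicit default so --no-interaction has an answer
# to fall back to instead of prompting.
delete = self.confirm(f"<question>Delete {entries_count} entries?</>", True)
if not delete:
    return 0
```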
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/poetry/console/commands/cache/clear.py`
Content:
```
1 from __future__ import annotations
2
3 import os
4
5 from cleo.helpers import argument
6 from cleo.helpers import option
7
8 from poetry.config.config import Config
9 from poetry.console.commands.command import Command
10
11
12 class CacheClearCommand(Command):
13 name = "cache clear"
14 description = "Clears Poetry's cache."
15
16 arguments = [argument("cache", description="The name of the cache to clear.")]
17 options = [option("all", description="Clear all entries in the cache.")]
18
19 def handle(self) -> int:
20 from cachy import CacheManager
21
22 cache = self.argument("cache")
23
24 parts = cache.split(":")
25 root = parts[0]
26
27 config = Config.create()
28 cache_dir = config.repository_cache_directory / root
29
30 try:
31 cache_dir.relative_to(config.repository_cache_directory)
32 except ValueError:
33 raise ValueError(f"{root} is not a valid repository cache")
34
35 cache = CacheManager(
36 {
37 "default": parts[0],
38 "serializer": "json",
39 "stores": {parts[0]: {"driver": "file", "path": str(cache_dir)}},
40 }
41 )
42
43 if len(parts) == 1:
44 if not self.option("all"):
45 raise RuntimeError(
46 f"Add the --all option if you want to clear all {parts[0]} caches"
47 )
48
49 if not cache_dir.exists():
50 self.line(f"No cache entries for {parts[0]}")
51 return 0
52
53 # Calculate number of entries
54 entries_count = sum(
55 len(files) for _path, _dirs, files in os.walk(str(cache_dir))
56 )
57
58 delete = self.confirm(f"<question>Delete {entries_count} entries?</>")
59 if not delete:
60 return 0
61
62 cache.flush()
63 elif len(parts) == 2:
64 raise RuntimeError(
65 "Only specifying the package name is not yet supported. "
66 "Add a specific version to clear"
67 )
68 elif len(parts) == 3:
69 package = parts[1]
70 version = parts[2]
71
72 if not cache.has(f"{package}:{version}"):
73 self.line(f"No cache entries for {package}:{version}")
74 return 0
75
76 delete = self.confirm(f"Delete cache entry {package}:{version}")
77 if not delete:
78 return 0
79
80 cache.forget(f"{package}:{version}")
81 else:
82 raise ValueError("Invalid cache key")
83
84 return 0
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/poetry/console/commands/cache/clear.py b/src/poetry/console/commands/cache/clear.py
--- a/src/poetry/console/commands/cache/clear.py
+++ b/src/poetry/console/commands/cache/clear.py
@@ -55,7 +55,7 @@
len(files) for _path, _dirs, files in os.walk(str(cache_dir))
)
- delete = self.confirm(f"<question>Delete {entries_count} entries?</>")
+ delete = self.confirm(f"<question>Delete {entries_count} entries?</>", True)
if not delete:
return 0
@@ -73,7 +73,7 @@
self.line(f"No cache entries for {package}:{version}")
return 0
- delete = self.confirm(f"Delete cache entry {package}:{version}")
+ delete = self.confirm(f"Delete cache entry {package}:{version}", True)
if not delete:
return 0
| {"golden_diff": "diff --git a/src/poetry/console/commands/cache/clear.py b/src/poetry/console/commands/cache/clear.py\n--- a/src/poetry/console/commands/cache/clear.py\n+++ b/src/poetry/console/commands/cache/clear.py\n@@ -55,7 +55,7 @@\n len(files) for _path, _dirs, files in os.walk(str(cache_dir))\n )\n \n- delete = self.confirm(f\"<question>Delete {entries_count} entries?</>\")\n+ delete = self.confirm(f\"<question>Delete {entries_count} entries?</>\", True)\n if not delete:\n return 0\n \n@@ -73,7 +73,7 @@\n self.line(f\"No cache entries for {package}:{version}\")\n return 0\n \n- delete = self.confirm(f\"Delete cache entry {package}:{version}\")\n+ delete = self.confirm(f\"Delete cache entry {package}:{version}\", True)\n if not delete:\n return 0\n", "issue": "`poetry cache clear` no longer respects `--no-interaction` flag\n<!--\r\n Hi there! Thank you for discovering and submitting an issue.\r\n\r\n Before you submit this; let's make sure of a few things.\r\n Please make sure the following boxes are ticked if they are correct.\r\n If not, please try and fulfill these first.\r\n-->\r\n\r\n<!-- Checked checkbox should look like this: [x] -->\r\n- [x] I am on the [latest](https://github.com/python-poetry/poetry/releases/latest) Poetry version.\r\n- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).\r\n\r\n<!--\r\n Once those are done, if you're able to fill in the following list with your information,\r\n it'd be very helpful to whoever handles the issue.\r\n-->\r\n\r\n- **OS version and name**: Ubuntu 22.04\r\n- **Poetry version**: 1.2.0\r\n- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: <!-- Gist Link Here -->\r\n\r\n## Issue\r\n<!-- Now feel free to write your issue, but please be descriptive! Thanks again \ud83d\ude4c \u2764\ufe0f -->\r\nSince poetry version 1.2.0, the `poetry cache clear` command no longer respects the `--no-interaction` flag:\r\n\r\n```\r\n$ poetry cache clear --all --no-interaction .\r\nDelete 1882 entries? 
(yes/no) [no] ^C\r\n```\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport os\n\nfrom cleo.helpers import argument\nfrom cleo.helpers import option\n\nfrom poetry.config.config import Config\nfrom poetry.console.commands.command import Command\n\n\nclass CacheClearCommand(Command):\n name = \"cache clear\"\n description = \"Clears Poetry's cache.\"\n\n arguments = [argument(\"cache\", description=\"The name of the cache to clear.\")]\n options = [option(\"all\", description=\"Clear all entries in the cache.\")]\n\n def handle(self) -> int:\n from cachy import CacheManager\n\n cache = self.argument(\"cache\")\n\n parts = cache.split(\":\")\n root = parts[0]\n\n config = Config.create()\n cache_dir = config.repository_cache_directory / root\n\n try:\n cache_dir.relative_to(config.repository_cache_directory)\n except ValueError:\n raise ValueError(f\"{root} is not a valid repository cache\")\n\n cache = CacheManager(\n {\n \"default\": parts[0],\n \"serializer\": \"json\",\n \"stores\": {parts[0]: {\"driver\": \"file\", \"path\": str(cache_dir)}},\n }\n )\n\n if len(parts) == 1:\n if not self.option(\"all\"):\n raise RuntimeError(\n f\"Add the --all option if you want to clear all {parts[0]} caches\"\n )\n\n if not cache_dir.exists():\n self.line(f\"No cache entries for {parts[0]}\")\n return 0\n\n # Calculate number of entries\n entries_count = sum(\n len(files) for _path, _dirs, files in os.walk(str(cache_dir))\n )\n\n delete = self.confirm(f\"<question>Delete {entries_count} entries?</>\")\n if not delete:\n return 0\n\n cache.flush()\n elif len(parts) == 2:\n raise RuntimeError(\n \"Only specifying the package name is not yet supported. \"\n \"Add a specific version to clear\"\n )\n elif len(parts) == 3:\n package = parts[1]\n version = parts[2]\n\n if not cache.has(f\"{package}:{version}\"):\n self.line(f\"No cache entries for {package}:{version}\")\n return 0\n\n delete = self.confirm(f\"Delete cache entry {package}:{version}\")\n if not delete:\n return 0\n\n cache.forget(f\"{package}:{version}\")\n else:\n raise ValueError(\"Invalid cache key\")\n\n return 0\n", "path": "src/poetry/console/commands/cache/clear.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport os\n\nfrom cleo.helpers import argument\nfrom cleo.helpers import option\n\nfrom poetry.config.config import Config\nfrom poetry.console.commands.command import Command\n\n\nclass CacheClearCommand(Command):\n name = \"cache clear\"\n description = \"Clears Poetry's cache.\"\n\n arguments = [argument(\"cache\", description=\"The name of the cache to clear.\")]\n options = [option(\"all\", description=\"Clear all entries in the cache.\")]\n\n def handle(self) -> int:\n from cachy import CacheManager\n\n cache = self.argument(\"cache\")\n\n parts = cache.split(\":\")\n root = parts[0]\n\n config = Config.create()\n cache_dir = config.repository_cache_directory / root\n\n try:\n cache_dir.relative_to(config.repository_cache_directory)\n except ValueError:\n raise ValueError(f\"{root} is not a valid repository cache\")\n\n cache = CacheManager(\n {\n \"default\": parts[0],\n \"serializer\": \"json\",\n \"stores\": {parts[0]: {\"driver\": \"file\", \"path\": str(cache_dir)}},\n }\n )\n\n if len(parts) == 1:\n if not self.option(\"all\"):\n raise RuntimeError(\n f\"Add the --all option if you want to clear all {parts[0]} caches\"\n )\n\n if not cache_dir.exists():\n self.line(f\"No cache entries for {parts[0]}\")\n return 0\n\n # Calculate number of entries\n 
entries_count = sum(\n len(files) for _path, _dirs, files in os.walk(str(cache_dir))\n )\n\n delete = self.confirm(f\"<question>Delete {entries_count} entries?</>\", True)\n if not delete:\n return 0\n\n cache.flush()\n elif len(parts) == 2:\n raise RuntimeError(\n \"Only specifying the package name is not yet supported. \"\n \"Add a specific version to clear\"\n )\n elif len(parts) == 3:\n package = parts[1]\n version = parts[2]\n\n if not cache.has(f\"{package}:{version}\"):\n self.line(f\"No cache entries for {package}:{version}\")\n return 0\n\n delete = self.confirm(f\"Delete cache entry {package}:{version}\", True)\n if not delete:\n return 0\n\n cache.forget(f\"{package}:{version}\")\n else:\n raise ValueError(\"Invalid cache key\")\n\n return 0\n", "path": "src/poetry/console/commands/cache/clear.py"}]} | 1,318 | 210 |
gh_patches_debug_63509 | rasdani/github-patches | git_diff | MongoEngine__mongoengine-879 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`BaseDict` does not follow `setdefault`
`mongoengine.base.datastructures.BaseDict` does not follow changes made through `setdefault`.
I have a DictField in a model:
``` python
success_rates = DictField()
```
I update the field using `setdefault` and the changes are not saved:
``` python
user_state.success_rates.setdefault(topic_area_group_id, {}).setdefault(
task_date, {})[str(task_id)] = action.data['is_solved']
```
I currently do this:
``` python
user_state._changed_fields.append('success_rates')
```
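Looking at `BaseDict` (shown below), the other mutating methods (`__setitem__`, `pop`, `update`, `clear`, ...) all call `_mark_as_changed`, but there is no `setdefault` override. A minimal sketch of the kind of method that would close the gap, assuming it should mirror the existing mutators:
```python
def setdefault(self, *args, **kwargs):
    # Mirror the other mutators: flag the field as changed before delegating.
    self._mark_as_changed()
    return super(BaseDict, self).setdefault(*args, **kwargs)
```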
--- END ISSUE ---
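The root cause is worth spelling out: `dict.setdefault` is implemented in C and never routes through a subclass's `__setitem__`, so `BaseDict` only notices mutations made via the methods it explicitly overrides. A minimal standalone illustration (not MongoEngine code):
```python
# Standalone illustration of why setdefault slips past change tracking.
class TrackingDict(dict):
    def __setitem__(self, key, value):
        print('tracked:', key)
        super(TrackingDict, self).__setitem__(key, value)

d = TrackingDict()
d['a'] = 1            # prints "tracked: a"
d.setdefault('b', 2)  # prints nothing -- CPython's setdefault bypasses __setitem__
```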
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mongoengine/base/datastructures.py`
Content:
```
1 import weakref
2 import functools
3 import itertools
4 from mongoengine.common import _import_class
5
6 __all__ = ("BaseDict", "BaseList")
7
8
9 class BaseDict(dict):
10 """A special dict so we can watch any changes"""
11
12 _dereferenced = False
13 _instance = None
14 _name = None
15
16 def __init__(self, dict_items, instance, name):
17 Document = _import_class('Document')
18 EmbeddedDocument = _import_class('EmbeddedDocument')
19
20 if isinstance(instance, (Document, EmbeddedDocument)):
21 self._instance = weakref.proxy(instance)
22 self._name = name
23 return super(BaseDict, self).__init__(dict_items)
24
25 def __getitem__(self, key, *args, **kwargs):
26 value = super(BaseDict, self).__getitem__(key)
27
28 EmbeddedDocument = _import_class('EmbeddedDocument')
29 if isinstance(value, EmbeddedDocument) and value._instance is None:
30 value._instance = self._instance
31 elif not isinstance(value, BaseDict) and isinstance(value, dict):
32 value = BaseDict(value, None, '%s.%s' % (self._name, key))
33 super(BaseDict, self).__setitem__(key, value)
34 value._instance = self._instance
35 elif not isinstance(value, BaseList) and isinstance(value, list):
36 value = BaseList(value, None, '%s.%s' % (self._name, key))
37 super(BaseDict, self).__setitem__(key, value)
38 value._instance = self._instance
39 return value
40
41 def __setitem__(self, key, value, *args, **kwargs):
42 self._mark_as_changed(key)
43 return super(BaseDict, self).__setitem__(key, value)
44
45 def __delete__(self, *args, **kwargs):
46 self._mark_as_changed()
47 return super(BaseDict, self).__delete__(*args, **kwargs)
48
49 def __delitem__(self, key, *args, **kwargs):
50 self._mark_as_changed(key)
51 return super(BaseDict, self).__delitem__(key)
52
53 def __delattr__(self, key, *args, **kwargs):
54 self._mark_as_changed(key)
55 return super(BaseDict, self).__delattr__(key)
56
57 def __getstate__(self):
58 self.instance = None
59 self._dereferenced = False
60 return self
61
62 def __setstate__(self, state):
63 self = state
64 return self
65
66 def clear(self, *args, **kwargs):
67 self._mark_as_changed()
68 return super(BaseDict, self).clear(*args, **kwargs)
69
70 def pop(self, *args, **kwargs):
71 self._mark_as_changed()
72 return super(BaseDict, self).pop(*args, **kwargs)
73
74 def popitem(self, *args, **kwargs):
75 self._mark_as_changed()
76 return super(BaseDict, self).popitem(*args, **kwargs)
77
78 def update(self, *args, **kwargs):
79 self._mark_as_changed()
80 return super(BaseDict, self).update(*args, **kwargs)
81
82 def _mark_as_changed(self, key=None):
83 if hasattr(self._instance, '_mark_as_changed'):
84 if key:
85 self._instance._mark_as_changed('%s.%s' % (self._name, key))
86 else:
87 self._instance._mark_as_changed(self._name)
88
89
90 class BaseList(list):
91 """A special list so we can watch any changes
92 """
93
94 _dereferenced = False
95 _instance = None
96 _name = None
97
98 def __init__(self, list_items, instance, name):
99 Document = _import_class('Document')
100 EmbeddedDocument = _import_class('EmbeddedDocument')
101
102 if isinstance(instance, (Document, EmbeddedDocument)):
103 self._instance = weakref.proxy(instance)
104 self._name = name
105 return super(BaseList, self).__init__(list_items)
106
107 def __getitem__(self, key, *args, **kwargs):
108 value = super(BaseList, self).__getitem__(key)
109
110 EmbeddedDocument = _import_class('EmbeddedDocument')
111 if isinstance(value, EmbeddedDocument) and value._instance is None:
112 value._instance = self._instance
113 elif not isinstance(value, BaseDict) and isinstance(value, dict):
114 value = BaseDict(value, None, '%s.%s' % (self._name, key))
115 super(BaseList, self).__setitem__(key, value)
116 value._instance = self._instance
117 elif not isinstance(value, BaseList) and isinstance(value, list):
118 value = BaseList(value, None, '%s.%s' % (self._name, key))
119 super(BaseList, self).__setitem__(key, value)
120 value._instance = self._instance
121 return value
122
123 def __setitem__(self, key, value, *args, **kwargs):
124 if isinstance(key, slice):
125 self._mark_as_changed()
126 else:
127 self._mark_as_changed(key)
128 return super(BaseList, self).__setitem__(key, value)
129
130 def __delitem__(self, key, *args, **kwargs):
131 if isinstance(key, slice):
132 self._mark_as_changed()
133 else:
134 self._mark_as_changed(key)
135 return super(BaseList, self).__delitem__(key)
136
137 def __setslice__(self, *args, **kwargs):
138 self._mark_as_changed()
139 return super(BaseList, self).__setslice__(*args, **kwargs)
140
141 def __delslice__(self, *args, **kwargs):
142 self._mark_as_changed()
143 return super(BaseList, self).__delslice__(*args, **kwargs)
144
145 def __getstate__(self):
146 self.instance = None
147 self._dereferenced = False
148 return self
149
150 def __setstate__(self, state):
151 self = state
152 return self
153
154 def append(self, *args, **kwargs):
155 self._mark_as_changed()
156 return super(BaseList, self).append(*args, **kwargs)
157
158 def extend(self, *args, **kwargs):
159 self._mark_as_changed()
160 return super(BaseList, self).extend(*args, **kwargs)
161
162 def insert(self, *args, **kwargs):
163 self._mark_as_changed()
164 return super(BaseList, self).insert(*args, **kwargs)
165
166 def pop(self, *args, **kwargs):
167 self._mark_as_changed()
168 return super(BaseList, self).pop(*args, **kwargs)
169
170 def remove(self, *args, **kwargs):
171 self._mark_as_changed()
172 return super(BaseList, self).remove(*args, **kwargs)
173
174 def reverse(self, *args, **kwargs):
175 self._mark_as_changed()
176 return super(BaseList, self).reverse(*args, **kwargs)
177
178 def sort(self, *args, **kwargs):
179 self._mark_as_changed()
180 return super(BaseList, self).sort(*args, **kwargs)
181
182 def _mark_as_changed(self, key=None):
183 if hasattr(self._instance, '_mark_as_changed'):
184 if key:
185 self._instance._mark_as_changed('%s.%s' % (self._name, key))
186 else:
187 self._instance._mark_as_changed(self._name)
188
189
190 class StrictDict(object):
191 __slots__ = ()
192 _special_fields = set(['get', 'pop', 'iteritems', 'items', 'keys', 'create'])
193 _classes = {}
194 def __init__(self, **kwargs):
195 for k,v in kwargs.iteritems():
196 setattr(self, k, v)
197 def __getitem__(self, key):
198 key = '_reserved_' + key if key in self._special_fields else key
199 try:
200 return getattr(self, key)
201 except AttributeError:
202 raise KeyError(key)
203 def __setitem__(self, key, value):
204 key = '_reserved_' + key if key in self._special_fields else key
205 return setattr(self, key, value)
206 def __contains__(self, key):
207 return hasattr(self, key)
208 def get(self, key, default=None):
209 try:
210 return self[key]
211 except KeyError:
212 return default
213 def pop(self, key, default=None):
214 v = self.get(key, default)
215 try:
216 delattr(self, key)
217 except AttributeError:
218 pass
219 return v
220 def iteritems(self):
221 for key in self:
222 yield key, self[key]
223 def items(self):
224 return [(k, self[k]) for k in iter(self)]
225 def keys(self):
226 return list(iter(self))
227 def __iter__(self):
228 return (key for key in self.__slots__ if hasattr(self, key))
229 def __len__(self):
230 return len(list(self.iteritems()))
231 def __eq__(self, other):
232 return self.items() == other.items()
233 def __neq__(self, other):
234 return self.items() != other.items()
235
236 @classmethod
237 def create(cls, allowed_keys):
238 allowed_keys_tuple = tuple(('_reserved_' + k if k in cls._special_fields else k) for k in allowed_keys)
239 allowed_keys = frozenset(allowed_keys_tuple)
240 if allowed_keys not in cls._classes:
241 class SpecificStrictDict(cls):
242 __slots__ = allowed_keys_tuple
243 def __repr__(self):
244 return "{%s}" % ', '.join('"{0!s}": {0!r}'.format(k,v) for (k,v) in self.iteritems())
245 cls._classes[allowed_keys] = SpecificStrictDict
246 return cls._classes[allowed_keys]
247
248
249 class SemiStrictDict(StrictDict):
250 __slots__ = ('_extras')
251 _classes = {}
252 def __getattr__(self, attr):
253 try:
254 super(SemiStrictDict, self).__getattr__(attr)
255 except AttributeError:
256 try:
257 return self.__getattribute__('_extras')[attr]
258 except KeyError as e:
259 raise AttributeError(e)
260 def __setattr__(self, attr, value):
261 try:
262 super(SemiStrictDict, self).__setattr__(attr, value)
263 except AttributeError:
264 try:
265 self._extras[attr] = value
266 except AttributeError:
267 self._extras = {attr: value}
268
269 def __delattr__(self, attr):
270 try:
271 super(SemiStrictDict, self).__delattr__(attr)
272 except AttributeError:
273 try:
274 del self._extras[attr]
275 except KeyError as e:
276 raise AttributeError(e)
277
278 def __iter__(self):
279 try:
280 extras_iter = iter(self.__getattribute__('_extras'))
281 except AttributeError:
282 extras_iter = ()
283 return itertools.chain(super(SemiStrictDict, self).__iter__(), extras_iter)
284
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mongoengine/base/datastructures.py b/mongoengine/base/datastructures.py
--- a/mongoengine/base/datastructures.py
+++ b/mongoengine/base/datastructures.py
@@ -75,6 +75,10 @@
self._mark_as_changed()
return super(BaseDict, self).popitem(*args, **kwargs)
+ def setdefault(self, *args, **kwargs):
+ self._mark_as_changed()
+ return super(BaseDict, self).setdefault(*args, **kwargs)
+
def update(self, *args, **kwargs):
self._mark_as_changed()
return super(BaseDict, self).update(*args, **kwargs)
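With `setdefault` overridden as above, the field from the report is tracked without the manual `_changed_fields` workaround. A rough end-to-end check, assuming a local MongoDB instance and the patch applied; the document class and database name are made up for illustration:
```python
# Sketch only: requires a running MongoDB and the patched BaseDict.setdefault.
from mongoengine import Document, DictField, connect

connect('setdefault_check')  # illustrative database name

class UserState(Document):
    success_rates = DictField()

state = UserState(success_rates={}).save()
state.success_rates.setdefault('group-1', {})['2015-01-01'] = {'42': True}
print('success_rates' in state._changed_fields)  # expected: True after the patch
state.save()  # nested value is persisted without appending to _changed_fields by hand
```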
| {"golden_diff": "diff --git a/mongoengine/base/datastructures.py b/mongoengine/base/datastructures.py\n--- a/mongoengine/base/datastructures.py\n+++ b/mongoengine/base/datastructures.py\n@@ -75,6 +75,10 @@\n self._mark_as_changed()\n return super(BaseDict, self).popitem(*args, **kwargs)\n \n+ def setdefault(self, *args, **kwargs):\n+ self._mark_as_changed()\n+ return super(BaseDict, self).setdefault(*args, **kwargs)\n+\n def update(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseDict, self).update(*args, **kwargs)\n", "issue": "`BaseDict` does not follow `setdefault`\n`mongoengine.base.datastructures.BaseDict` does not follow changes made through `setdefault`.\n\nI have a DictField in a model:\n\n``` python\n success_rates = DictField()\n```\n\nI update the field using `serdefault` and the changes are not saved:\n\n``` python\n user_state.success_rates.setdefault(topic_area_group_id, {}).setdefault(\n task_date, {})[str(task_id)] = action.data['is_solved']\n```\n\nI currently do this:\n\n``` python\n user_state._changed_fields.append('success_rates')\n```\n\n`BaseDict` does not follow `setdefault`\n`mongoengine.base.datastructures.BaseDict` does not follow changes made through `setdefault`.\n\nI have a DictField in a model:\n\n``` python\n success_rates = DictField()\n```\n\nI update the field using `serdefault` and the changes are not saved:\n\n``` python\n user_state.success_rates.setdefault(topic_area_group_id, {}).setdefault(\n task_date, {})[str(task_id)] = action.data['is_solved']\n```\n\nI currently do this:\n\n``` python\n user_state._changed_fields.append('success_rates')\n```\n\n", "before_files": [{"content": "import weakref\nimport functools\nimport itertools\nfrom mongoengine.common import _import_class\n\n__all__ = (\"BaseDict\", \"BaseList\")\n\n\nclass BaseDict(dict):\n \"\"\"A special dict so we can watch any changes\"\"\"\n\n _dereferenced = False\n _instance = None\n _name = None\n\n def __init__(self, dict_items, instance, name):\n Document = _import_class('Document')\n EmbeddedDocument = _import_class('EmbeddedDocument')\n\n if isinstance(instance, (Document, EmbeddedDocument)):\n self._instance = weakref.proxy(instance)\n self._name = name\n return super(BaseDict, self).__init__(dict_items)\n\n def __getitem__(self, key, *args, **kwargs):\n value = super(BaseDict, self).__getitem__(key)\n\n EmbeddedDocument = _import_class('EmbeddedDocument')\n if isinstance(value, EmbeddedDocument) and value._instance is None:\n value._instance = self._instance\n elif not isinstance(value, BaseDict) and isinstance(value, dict):\n value = BaseDict(value, None, '%s.%s' % (self._name, key))\n super(BaseDict, self).__setitem__(key, value)\n value._instance = self._instance\n elif not isinstance(value, BaseList) and isinstance(value, list):\n value = BaseList(value, None, '%s.%s' % (self._name, key))\n super(BaseDict, self).__setitem__(key, value)\n value._instance = self._instance\n return value\n\n def __setitem__(self, key, value, *args, **kwargs):\n self._mark_as_changed(key)\n return super(BaseDict, self).__setitem__(key, value)\n\n def __delete__(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseDict, self).__delete__(*args, **kwargs)\n\n def __delitem__(self, key, *args, **kwargs):\n self._mark_as_changed(key)\n return super(BaseDict, self).__delitem__(key)\n\n def __delattr__(self, key, *args, **kwargs):\n self._mark_as_changed(key)\n return super(BaseDict, self).__delattr__(key)\n\n def __getstate__(self):\n self.instance = None\n 
self._dereferenced = False\n return self\n\n def __setstate__(self, state):\n self = state\n return self\n\n def clear(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseDict, self).clear(*args, **kwargs)\n\n def pop(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseDict, self).pop(*args, **kwargs)\n\n def popitem(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseDict, self).popitem(*args, **kwargs)\n\n def update(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseDict, self).update(*args, **kwargs)\n\n def _mark_as_changed(self, key=None):\n if hasattr(self._instance, '_mark_as_changed'):\n if key:\n self._instance._mark_as_changed('%s.%s' % (self._name, key))\n else:\n self._instance._mark_as_changed(self._name)\n\n\nclass BaseList(list):\n \"\"\"A special list so we can watch any changes\n \"\"\"\n\n _dereferenced = False\n _instance = None\n _name = None\n\n def __init__(self, list_items, instance, name):\n Document = _import_class('Document')\n EmbeddedDocument = _import_class('EmbeddedDocument')\n\n if isinstance(instance, (Document, EmbeddedDocument)):\n self._instance = weakref.proxy(instance)\n self._name = name\n return super(BaseList, self).__init__(list_items)\n\n def __getitem__(self, key, *args, **kwargs):\n value = super(BaseList, self).__getitem__(key)\n\n EmbeddedDocument = _import_class('EmbeddedDocument')\n if isinstance(value, EmbeddedDocument) and value._instance is None:\n value._instance = self._instance\n elif not isinstance(value, BaseDict) and isinstance(value, dict):\n value = BaseDict(value, None, '%s.%s' % (self._name, key))\n super(BaseList, self).__setitem__(key, value)\n value._instance = self._instance\n elif not isinstance(value, BaseList) and isinstance(value, list):\n value = BaseList(value, None, '%s.%s' % (self._name, key))\n super(BaseList, self).__setitem__(key, value)\n value._instance = self._instance\n return value\n\n def __setitem__(self, key, value, *args, **kwargs):\n if isinstance(key, slice):\n self._mark_as_changed()\n else:\n self._mark_as_changed(key)\n return super(BaseList, self).__setitem__(key, value)\n\n def __delitem__(self, key, *args, **kwargs):\n if isinstance(key, slice):\n self._mark_as_changed()\n else:\n self._mark_as_changed(key)\n return super(BaseList, self).__delitem__(key)\n\n def __setslice__(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).__setslice__(*args, **kwargs)\n\n def __delslice__(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).__delslice__(*args, **kwargs)\n\n def __getstate__(self):\n self.instance = None\n self._dereferenced = False\n return self\n\n def __setstate__(self, state):\n self = state\n return self\n\n def append(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).append(*args, **kwargs)\n\n def extend(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).extend(*args, **kwargs)\n\n def insert(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).insert(*args, **kwargs)\n\n def pop(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).pop(*args, **kwargs)\n\n def remove(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).remove(*args, **kwargs)\n\n def reverse(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).reverse(*args, **kwargs)\n\n def sort(self, *args, **kwargs):\n 
self._mark_as_changed()\n return super(BaseList, self).sort(*args, **kwargs)\n\n def _mark_as_changed(self, key=None):\n if hasattr(self._instance, '_mark_as_changed'):\n if key:\n self._instance._mark_as_changed('%s.%s' % (self._name, key))\n else:\n self._instance._mark_as_changed(self._name)\n\n\nclass StrictDict(object):\n __slots__ = ()\n _special_fields = set(['get', 'pop', 'iteritems', 'items', 'keys', 'create'])\n _classes = {}\n def __init__(self, **kwargs):\n for k,v in kwargs.iteritems():\n setattr(self, k, v)\n def __getitem__(self, key):\n key = '_reserved_' + key if key in self._special_fields else key\n try:\n return getattr(self, key)\n except AttributeError:\n raise KeyError(key)\n def __setitem__(self, key, value):\n key = '_reserved_' + key if key in self._special_fields else key\n return setattr(self, key, value)\n def __contains__(self, key):\n return hasattr(self, key)\n def get(self, key, default=None):\n try:\n return self[key]\n except KeyError:\n return default\n def pop(self, key, default=None):\n v = self.get(key, default)\n try:\n delattr(self, key)\n except AttributeError:\n pass\n return v\n def iteritems(self):\n for key in self:\n yield key, self[key]\n def items(self):\n return [(k, self[k]) for k in iter(self)]\n def keys(self):\n return list(iter(self))\n def __iter__(self):\n return (key for key in self.__slots__ if hasattr(self, key))\n def __len__(self):\n return len(list(self.iteritems()))\n def __eq__(self, other):\n return self.items() == other.items()\n def __neq__(self, other):\n return self.items() != other.items()\n\n @classmethod\n def create(cls, allowed_keys):\n allowed_keys_tuple = tuple(('_reserved_' + k if k in cls._special_fields else k) for k in allowed_keys)\n allowed_keys = frozenset(allowed_keys_tuple)\n if allowed_keys not in cls._classes:\n class SpecificStrictDict(cls):\n __slots__ = allowed_keys_tuple\n def __repr__(self):\n return \"{%s}\" % ', '.join('\"{0!s}\": {0!r}'.format(k,v) for (k,v) in self.iteritems())\n cls._classes[allowed_keys] = SpecificStrictDict\n return cls._classes[allowed_keys]\n\n\nclass SemiStrictDict(StrictDict):\n __slots__ = ('_extras')\n _classes = {}\n def __getattr__(self, attr):\n try:\n super(SemiStrictDict, self).__getattr__(attr)\n except AttributeError:\n try:\n return self.__getattribute__('_extras')[attr]\n except KeyError as e:\n raise AttributeError(e)\n def __setattr__(self, attr, value):\n try:\n super(SemiStrictDict, self).__setattr__(attr, value)\n except AttributeError:\n try:\n self._extras[attr] = value\n except AttributeError:\n self._extras = {attr: value}\n\n def __delattr__(self, attr):\n try:\n super(SemiStrictDict, self).__delattr__(attr)\n except AttributeError:\n try:\n del self._extras[attr]\n except KeyError as e:\n raise AttributeError(e)\n\n def __iter__(self):\n try:\n extras_iter = iter(self.__getattribute__('_extras'))\n except AttributeError:\n extras_iter = ()\n return itertools.chain(super(SemiStrictDict, self).__iter__(), extras_iter)\n", "path": "mongoengine/base/datastructures.py"}], "after_files": [{"content": "import weakref\nimport functools\nimport itertools\nfrom mongoengine.common import _import_class\n\n__all__ = (\"BaseDict\", \"BaseList\")\n\n\nclass BaseDict(dict):\n \"\"\"A special dict so we can watch any changes\"\"\"\n\n _dereferenced = False\n _instance = None\n _name = None\n\n def __init__(self, dict_items, instance, name):\n Document = _import_class('Document')\n EmbeddedDocument = _import_class('EmbeddedDocument')\n\n if isinstance(instance, 
(Document, EmbeddedDocument)):\n self._instance = weakref.proxy(instance)\n self._name = name\n return super(BaseDict, self).__init__(dict_items)\n\n def __getitem__(self, key, *args, **kwargs):\n value = super(BaseDict, self).__getitem__(key)\n\n EmbeddedDocument = _import_class('EmbeddedDocument')\n if isinstance(value, EmbeddedDocument) and value._instance is None:\n value._instance = self._instance\n elif not isinstance(value, BaseDict) and isinstance(value, dict):\n value = BaseDict(value, None, '%s.%s' % (self._name, key))\n super(BaseDict, self).__setitem__(key, value)\n value._instance = self._instance\n elif not isinstance(value, BaseList) and isinstance(value, list):\n value = BaseList(value, None, '%s.%s' % (self._name, key))\n super(BaseDict, self).__setitem__(key, value)\n value._instance = self._instance\n return value\n\n def __setitem__(self, key, value, *args, **kwargs):\n self._mark_as_changed(key)\n return super(BaseDict, self).__setitem__(key, value)\n\n def __delete__(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseDict, self).__delete__(*args, **kwargs)\n\n def __delitem__(self, key, *args, **kwargs):\n self._mark_as_changed(key)\n return super(BaseDict, self).__delitem__(key)\n\n def __delattr__(self, key, *args, **kwargs):\n self._mark_as_changed(key)\n return super(BaseDict, self).__delattr__(key)\n\n def __getstate__(self):\n self.instance = None\n self._dereferenced = False\n return self\n\n def __setstate__(self, state):\n self = state\n return self\n\n def clear(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseDict, self).clear(*args, **kwargs)\n\n def pop(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseDict, self).pop(*args, **kwargs)\n\n def popitem(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseDict, self).popitem(*args, **kwargs)\n\n def setdefault(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseDict, self).setdefault(*args, **kwargs)\n\n def update(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseDict, self).update(*args, **kwargs)\n\n def _mark_as_changed(self, key=None):\n if hasattr(self._instance, '_mark_as_changed'):\n if key:\n self._instance._mark_as_changed('%s.%s' % (self._name, key))\n else:\n self._instance._mark_as_changed(self._name)\n\n\nclass BaseList(list):\n \"\"\"A special list so we can watch any changes\n \"\"\"\n\n _dereferenced = False\n _instance = None\n _name = None\n\n def __init__(self, list_items, instance, name):\n Document = _import_class('Document')\n EmbeddedDocument = _import_class('EmbeddedDocument')\n\n if isinstance(instance, (Document, EmbeddedDocument)):\n self._instance = weakref.proxy(instance)\n self._name = name\n return super(BaseList, self).__init__(list_items)\n\n def __getitem__(self, key, *args, **kwargs):\n value = super(BaseList, self).__getitem__(key)\n\n EmbeddedDocument = _import_class('EmbeddedDocument')\n if isinstance(value, EmbeddedDocument) and value._instance is None:\n value._instance = self._instance\n elif not isinstance(value, BaseDict) and isinstance(value, dict):\n value = BaseDict(value, None, '%s.%s' % (self._name, key))\n super(BaseList, self).__setitem__(key, value)\n value._instance = self._instance\n elif not isinstance(value, BaseList) and isinstance(value, list):\n value = BaseList(value, None, '%s.%s' % (self._name, key))\n super(BaseList, self).__setitem__(key, value)\n value._instance = self._instance\n return value\n\n def __setitem__(self, key, value, *args, 
**kwargs):\n if isinstance(key, slice):\n self._mark_as_changed()\n else:\n self._mark_as_changed(key)\n return super(BaseList, self).__setitem__(key, value)\n\n def __delitem__(self, key, *args, **kwargs):\n if isinstance(key, slice):\n self._mark_as_changed()\n else:\n self._mark_as_changed(key)\n return super(BaseList, self).__delitem__(key)\n\n def __setslice__(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).__setslice__(*args, **kwargs)\n\n def __delslice__(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).__delslice__(*args, **kwargs)\n\n def __getstate__(self):\n self.instance = None\n self._dereferenced = False\n return self\n\n def __setstate__(self, state):\n self = state\n return self\n\n def append(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).append(*args, **kwargs)\n\n def extend(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).extend(*args, **kwargs)\n\n def insert(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).insert(*args, **kwargs)\n\n def pop(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).pop(*args, **kwargs)\n\n def remove(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).remove(*args, **kwargs)\n\n def reverse(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).reverse(*args, **kwargs)\n\n def sort(self, *args, **kwargs):\n self._mark_as_changed()\n return super(BaseList, self).sort(*args, **kwargs)\n\n def _mark_as_changed(self, key=None):\n if hasattr(self._instance, '_mark_as_changed'):\n if key:\n self._instance._mark_as_changed('%s.%s' % (self._name, key))\n else:\n self._instance._mark_as_changed(self._name)\n\n\nclass StrictDict(object):\n __slots__ = ()\n _special_fields = set(['get', 'pop', 'iteritems', 'items', 'keys', 'create'])\n _classes = {}\n def __init__(self, **kwargs):\n for k,v in kwargs.iteritems():\n setattr(self, k, v)\n def __getitem__(self, key):\n key = '_reserved_' + key if key in self._special_fields else key\n try:\n return getattr(self, key)\n except AttributeError:\n raise KeyError(key)\n def __setitem__(self, key, value):\n key = '_reserved_' + key if key in self._special_fields else key\n return setattr(self, key, value)\n def __contains__(self, key):\n return hasattr(self, key)\n def get(self, key, default=None):\n try:\n return self[key]\n except KeyError:\n return default\n def pop(self, key, default=None):\n v = self.get(key, default)\n try:\n delattr(self, key)\n except AttributeError:\n pass\n return v\n def iteritems(self):\n for key in self:\n yield key, self[key]\n def items(self):\n return [(k, self[k]) for k in iter(self)]\n def keys(self):\n return list(iter(self))\n def __iter__(self):\n return (key for key in self.__slots__ if hasattr(self, key))\n def __len__(self):\n return len(list(self.iteritems()))\n def __eq__(self, other):\n return self.items() == other.items()\n def __neq__(self, other):\n return self.items() != other.items()\n\n @classmethod\n def create(cls, allowed_keys):\n allowed_keys_tuple = tuple(('_reserved_' + k if k in cls._special_fields else k) for k in allowed_keys)\n allowed_keys = frozenset(allowed_keys_tuple)\n if allowed_keys not in cls._classes:\n class SpecificStrictDict(cls):\n __slots__ = allowed_keys_tuple\n def __repr__(self):\n return \"{%s}\" % ', '.join('\"{0!s}\": {0!r}'.format(k,v) for (k,v) in self.iteritems())\n 
cls._classes[allowed_keys] = SpecificStrictDict\n return cls._classes[allowed_keys]\n\n\nclass SemiStrictDict(StrictDict):\n __slots__ = ('_extras')\n _classes = {}\n def __getattr__(self, attr):\n try:\n super(SemiStrictDict, self).__getattr__(attr)\n except AttributeError:\n try:\n return self.__getattribute__('_extras')[attr]\n except KeyError as e:\n raise AttributeError(e)\n def __setattr__(self, attr, value):\n try:\n super(SemiStrictDict, self).__setattr__(attr, value)\n except AttributeError:\n try:\n self._extras[attr] = value\n except AttributeError:\n self._extras = {attr: value}\n\n def __delattr__(self, attr):\n try:\n super(SemiStrictDict, self).__delattr__(attr)\n except AttributeError:\n try:\n del self._extras[attr]\n except KeyError as e:\n raise AttributeError(e)\n\n def __iter__(self):\n try:\n extras_iter = iter(self.__getattribute__('_extras'))\n except AttributeError:\n extras_iter = ()\n return itertools.chain(super(SemiStrictDict, self).__iter__(), extras_iter)\n", "path": "mongoengine/base/datastructures.py"}]} | 3,593 | 149 |
gh_patches_debug_890 | rasdani/github-patches | git_diff | falconry__falcon-801 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Default OPTIONS responder does not set Content-Length to "0"
Per RFC 7231:
> A server MUST generate a Content-Length field with a value of "0" if no payload body is to be sent in the response.
--- END ISSUE ---
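Before diving into the code below, the behaviour is easy to probe with Falcon's test client; a rough sketch (the resource and route are invented, and this assumes a Falcon version that ships `falcon.testing`):
```python
# Sketch: exercise the auto-generated OPTIONS responder for a resource
# that only defines on_get (names here are illustrative).
import falcon
from falcon import testing

class Thing(object):
    def on_get(self, req, resp):
        resp.body = 'ok'

app = falcon.API()
app.add_route('/things', Thing())

result = testing.TestClient(app).simulate_options('/things')
print(result.headers.get('allow'))           # e.g. 'GET'
print(result.headers.get('content-length'))  # expected: '0' once the patch lands
```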
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `falcon/responders.py`
Content:
```
1 # Copyright 2013 by Rackspace Hosting, Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from falcon.errors import HTTPBadRequest
16 from falcon.errors import HTTPNotFound
17 from falcon.status_codes import HTTP_204
18 from falcon.status_codes import HTTP_405
19
20
21 def path_not_found(req, resp, **kwargs):
22 """Raise 404 HTTPNotFound error"""
23 raise HTTPNotFound()
24
25
26 def bad_request(req, resp, **kwargs):
27 """Raise 400 HTTPBadRequest error"""
28 raise HTTPBadRequest('Bad request', 'Invalid HTTP method')
29
30
31 def create_method_not_allowed(allowed_methods):
32 """Creates a responder for "405 Method Not Allowed"
33
34 Args:
35 allowed_methods: A list of HTTP methods (uppercase) that should be
36 returned in the Allow header.
37
38 """
39 allowed = ', '.join(allowed_methods)
40
41 def method_not_allowed(req, resp, **kwargs):
42 resp.status = HTTP_405
43 resp.set_header('Allow', allowed)
44
45 return method_not_allowed
46
47
48 def create_default_options(allowed_methods):
49 """Creates a default responder for the OPTIONS method
50
51 Args:
52 allowed_methods: A list of HTTP methods (uppercase) that should be
53 returned in the Allow header.
54
55 """
56 allowed = ', '.join(allowed_methods)
57
58 def on_options(req, resp, **kwargs):
59 resp.status = HTTP_204
60 resp.set_header('Allow', allowed)
61
62 return on_options
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/falcon/responders.py b/falcon/responders.py
--- a/falcon/responders.py
+++ b/falcon/responders.py
@@ -58,5 +58,6 @@
def on_options(req, resp, **kwargs):
resp.status = HTTP_204
resp.set_header('Allow', allowed)
+ resp.set_header('Content-Length', '0')
return on_options
| {"golden_diff": "diff --git a/falcon/responders.py b/falcon/responders.py\n--- a/falcon/responders.py\n+++ b/falcon/responders.py\n@@ -58,5 +58,6 @@\n def on_options(req, resp, **kwargs):\n resp.status = HTTP_204\n resp.set_header('Allow', allowed)\n+ resp.set_header('Content-Length', '0')\n \n return on_options\n", "issue": "Default OPTIONS responder does not set Content-Length to \"0\"\nPer RFC 7231:\n\n> A server MUST generate a Content-Length field with a value of \"0\" if no payload body is to be sent in the response.\n\n", "before_files": [{"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom falcon.errors import HTTPBadRequest\nfrom falcon.errors import HTTPNotFound\nfrom falcon.status_codes import HTTP_204\nfrom falcon.status_codes import HTTP_405\n\n\ndef path_not_found(req, resp, **kwargs):\n \"\"\"Raise 404 HTTPNotFound error\"\"\"\n raise HTTPNotFound()\n\n\ndef bad_request(req, resp, **kwargs):\n \"\"\"Raise 400 HTTPBadRequest error\"\"\"\n raise HTTPBadRequest('Bad request', 'Invalid HTTP method')\n\n\ndef create_method_not_allowed(allowed_methods):\n \"\"\"Creates a responder for \"405 Method Not Allowed\"\n\n Args:\n allowed_methods: A list of HTTP methods (uppercase) that should be\n returned in the Allow header.\n\n \"\"\"\n allowed = ', '.join(allowed_methods)\n\n def method_not_allowed(req, resp, **kwargs):\n resp.status = HTTP_405\n resp.set_header('Allow', allowed)\n\n return method_not_allowed\n\n\ndef create_default_options(allowed_methods):\n \"\"\"Creates a default responder for the OPTIONS method\n\n Args:\n allowed_methods: A list of HTTP methods (uppercase) that should be\n returned in the Allow header.\n\n \"\"\"\n allowed = ', '.join(allowed_methods)\n\n def on_options(req, resp, **kwargs):\n resp.status = HTTP_204\n resp.set_header('Allow', allowed)\n\n return on_options\n", "path": "falcon/responders.py"}], "after_files": [{"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom falcon.errors import HTTPBadRequest\nfrom falcon.errors import HTTPNotFound\nfrom falcon.status_codes import HTTP_204\nfrom falcon.status_codes import HTTP_405\n\n\ndef path_not_found(req, resp, **kwargs):\n \"\"\"Raise 404 HTTPNotFound error\"\"\"\n raise HTTPNotFound()\n\n\ndef bad_request(req, resp, **kwargs):\n \"\"\"Raise 400 HTTPBadRequest error\"\"\"\n raise HTTPBadRequest('Bad request', 'Invalid HTTP method')\n\n\ndef 
create_method_not_allowed(allowed_methods):\n \"\"\"Creates a responder for \"405 Method Not Allowed\"\n\n Args:\n allowed_methods: A list of HTTP methods (uppercase) that should be\n returned in the Allow header.\n\n \"\"\"\n allowed = ', '.join(allowed_methods)\n\n def method_not_allowed(req, resp, **kwargs):\n resp.status = HTTP_405\n resp.set_header('Allow', allowed)\n\n return method_not_allowed\n\n\ndef create_default_options(allowed_methods):\n \"\"\"Creates a default responder for the OPTIONS method\n\n Args:\n allowed_methods: A list of HTTP methods (uppercase) that should be\n returned in the Allow header.\n\n \"\"\"\n allowed = ', '.join(allowed_methods)\n\n def on_options(req, resp, **kwargs):\n resp.status = HTTP_204\n resp.set_header('Allow', allowed)\n resp.set_header('Content-Length', '0')\n\n return on_options\n", "path": "falcon/responders.py"}]} | 867 | 92 |
gh_patches_debug_8506 | rasdani/github-patches | git_diff | nipy__nipype-2422 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ICC: Cannot set the undefined 'sessions_F_map' attribute of a 'ICCOutputSpec' object
### Summary
Trying to run ICC raises a `TraitError` with the following description: "TraitError: Cannot set the undefined 'sessions_F_map' attribute of a 'ICCOutputSpec' object."
I may be doing things wrong, in which case a more informative error message would be welcome. Please look at the code snippet under _Script/Workflow Details_.
_Note:_ I do get maps (icc_map.nii, session_var_map.nii, subject_var_map.nii) as an output in my `os.getcwd()`.
### Actual behavior
ICC triggers an exception.
### Expected behavior
ICC runs smoothly.
### How to replicate the behavior
```bash
mkdir /Users/user/test
mv mask.nii.gz sub-{01,02}_ses-{01,02}.nii.gz /Users/user/test
# Where these niftis are sensible "ICC compatible" files
# And now run code pasted below
```
### Script/Workflow details
Please put URL to code or code here (if not too long).
```python
import os.path
from nipype.algorithms import icc
project_dir = '/Users/user/test/'
def fname(sub, ses):
return os.path.join(project_dir, f'sub-{sub}_ses-{ses}.nii.gz')
lst = [[fname(1, 1), fname(1, 2)],
[fname(2, 1), fname(2, 2)]]
mask = os.path.join(project_dir, 'mask.nii.gz')
x = icc.ICC(subjects_sessions=lst, mask=mask)
x.run()
```
### Platform details:
Please paste the output of: `python -c "import nipype; print(nipype.get_info()); print(nipype.__version__)"`
```python
{'commit_hash': '%h',
'commit_source': 'archive substitution',
'networkx_version': '2.0',
'nibabel_version': '2.2.1',
'nipype_version': '1.0.0',
'numpy_version': '1.13.3',
'pkg_path': '/Users/user/anaconda3/lib/python3.6/site-packages/nipype',
'scipy_version': '1.0.0',
'sys_executable': '/Users/user/anaconda3/bin/python',
'sys_platform': 'darwin',
'sys_version': '3.6.4 | packaged by conda-forge | (default, Dec 23 2017, 16:54:01) \n[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)]',
'traits_version': '4.6.0'}
```
### Execution environment
Choose one
- My python environment outside container
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nipype/algorithms/icc.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import (print_function, division, unicode_literals,
3 absolute_import)
4 from builtins import range
5 import os
6 import numpy as np
7 from numpy import ones, kron, mean, eye, hstack, dot, tile
8 import nibabel as nb
9 from scipy.linalg import pinv
10 from ..interfaces.base import BaseInterfaceInputSpec, TraitedSpec, \
11 BaseInterface, traits, File
12 from ..utils import NUMPY_MMAP
13
14
15 class ICCInputSpec(BaseInterfaceInputSpec):
16 subjects_sessions = traits.List(
17 traits.List(File(exists=True)),
18 desc="n subjects m sessions 3D stat files",
19 mandatory=True)
20 mask = File(exists=True, mandatory=True)
21
22
23 class ICCOutputSpec(TraitedSpec):
24 icc_map = File(exists=True)
25 session_var_map = File(exists=True, desc="variance between sessions")
26 subject_var_map = File(exists=True, desc="variance between subjects")
27
28
29 class ICC(BaseInterface):
30 '''
31 Calculates Interclass Correlation Coefficient (3,1) as defined in
32 P. E. Shrout & Joseph L. Fleiss (1979). "Intraclass Correlations: Uses in
33 Assessing Rater Reliability". Psychological Bulletin 86 (2): 420-428. This
34 particular implementation is aimed at relaibility (test-retest) studies.
35 '''
36 input_spec = ICCInputSpec
37 output_spec = ICCOutputSpec
38
39 def _run_interface(self, runtime):
40 maskdata = nb.load(self.inputs.mask).get_data()
41 maskdata = np.logical_not(
42 np.logical_or(maskdata == 0, np.isnan(maskdata)))
43
44 session_datas = [[
45 nb.load(fname, mmap=NUMPY_MMAP).get_data()[maskdata].reshape(
46 -1, 1) for fname in sessions
47 ] for sessions in self.inputs.subjects_sessions]
48 list_of_sessions = [
49 np.dstack(session_data) for session_data in session_datas
50 ]
51 all_data = np.hstack(list_of_sessions)
52 icc = np.zeros(session_datas[0][0].shape)
53 session_F = np.zeros(session_datas[0][0].shape)
54 session_var = np.zeros(session_datas[0][0].shape)
55 subject_var = np.zeros(session_datas[0][0].shape)
56
57 for x in range(icc.shape[0]):
58 Y = all_data[x, :, :]
59 icc[x], subject_var[x], session_var[x], session_F[
60 x], _, _ = ICC_rep_anova(Y)
61
62 nim = nb.load(self.inputs.subjects_sessions[0][0])
63 new_data = np.zeros(nim.shape)
64 new_data[maskdata] = icc.reshape(-1, )
65 new_img = nb.Nifti1Image(new_data, nim.affine, nim.header)
66 nb.save(new_img, 'icc_map.nii')
67
68 new_data = np.zeros(nim.shape)
69 new_data[maskdata] = session_var.reshape(-1, )
70 new_img = nb.Nifti1Image(new_data, nim.affine, nim.header)
71 nb.save(new_img, 'session_var_map.nii')
72
73 new_data = np.zeros(nim.shape)
74 new_data[maskdata] = subject_var.reshape(-1, )
75 new_img = nb.Nifti1Image(new_data, nim.affine, nim.header)
76 nb.save(new_img, 'subject_var_map.nii')
77
78 return runtime
79
80 def _list_outputs(self):
81 outputs = self._outputs().get()
82 outputs['icc_map'] = os.path.abspath('icc_map.nii')
83 outputs['sessions_F_map'] = os.path.abspath('sessions_F_map.nii')
84 outputs['session_var_map'] = os.path.abspath('session_var_map.nii')
85 outputs['subject_var_map'] = os.path.abspath('subject_var_map.nii')
86 return outputs
87
88
89 def ICC_rep_anova(Y):
90 '''
91 the data Y are entered as a 'table' ie subjects are in rows and repeated
92 measures in columns
93
94 One Sample Repeated measure ANOVA
95
96 Y = XB + E with X = [FaTor / Subjects]
97 '''
98
99 [nb_subjects, nb_conditions] = Y.shape
100 dfc = nb_conditions - 1
101 dfe = (nb_subjects - 1) * dfc
102 dfr = nb_subjects - 1
103
104 # Compute the repeated measure effect
105 # ------------------------------------
106
107 # Sum Square Total
108 mean_Y = mean(Y)
109 SST = ((Y - mean_Y)**2).sum()
110
111 # create the design matrix for the different levels
112 x = kron(eye(nb_conditions), ones((nb_subjects, 1))) # sessions
113 x0 = tile(eye(nb_subjects), (nb_conditions, 1)) # subjects
114 X = hstack([x, x0])
115
116 # Sum Square Error
117 predicted_Y = dot(dot(dot(X, pinv(dot(X.T, X))), X.T), Y.flatten('F'))
118 residuals = Y.flatten('F') - predicted_Y
119 SSE = (residuals**2).sum()
120
121 residuals.shape = Y.shape
122
123 MSE = SSE / dfe
124
125 # Sum square session effect - between colums/sessions
126 SSC = ((mean(Y, 0) - mean_Y)**2).sum() * nb_subjects
127 MSC = SSC / dfc / nb_subjects
128
129 session_effect_F = MSC / MSE
130
131 # Sum Square subject effect - between rows/subjects
132 SSR = SST - SSC - SSE
133 MSR = SSR / dfr
134
135 # ICC(3,1) = (mean square subjeT - mean square error) /
136 # (mean square subjeT + (k-1)*-mean square error)
137 ICC = (MSR - MSE) / (MSR + dfc * MSE)
138
139 e_var = MSE # variance of error
140 r_var = (MSR - MSE) / nb_conditions # variance between subjects
141
142 return ICC, r_var, e_var, session_effect_F, dfc, dfe
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nipype/algorithms/icc.py b/nipype/algorithms/icc.py
--- a/nipype/algorithms/icc.py
+++ b/nipype/algorithms/icc.py
@@ -80,7 +80,6 @@
def _list_outputs(self):
outputs = self._outputs().get()
outputs['icc_map'] = os.path.abspath('icc_map.nii')
- outputs['sessions_F_map'] = os.path.abspath('sessions_F_map.nii')
outputs['session_var_map'] = os.path.abspath('session_var_map.nii')
outputs['subject_var_map'] = os.path.abspath('subject_var_map.nii')
return outputs
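With the stray `sessions_F_map` line gone, `_list_outputs` only returns keys that exist on `ICCOutputSpec`, so collecting results no longer raises the `TraitError`. A rough re-run of the snippet from the report (file paths are placeholders for real, mask-compatible NIfTI images):
```python
# Sketch only: the NIfTI paths are placeholders and must point at real files.
from nipype.algorithms.icc import ICC

sessions = [['sub-01_ses-01.nii.gz', 'sub-01_ses-02.nii.gz'],
            ['sub-02_ses-01.nii.gz', 'sub-02_ses-02.nii.gz']]
result = ICC(subjects_sessions=sessions, mask='mask.nii.gz').run()

# All three maps are defined traits, so this prints paths instead of raising:
print(result.outputs.icc_map)
print(result.outputs.session_var_map)
print(result.outputs.subject_var_map)
```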
| {"golden_diff": "diff --git a/nipype/algorithms/icc.py b/nipype/algorithms/icc.py\n--- a/nipype/algorithms/icc.py\n+++ b/nipype/algorithms/icc.py\n@@ -80,7 +80,6 @@\n def _list_outputs(self):\n outputs = self._outputs().get()\n outputs['icc_map'] = os.path.abspath('icc_map.nii')\n- outputs['sessions_F_map'] = os.path.abspath('sessions_F_map.nii')\n outputs['session_var_map'] = os.path.abspath('session_var_map.nii')\n outputs['subject_var_map'] = os.path.abspath('subject_var_map.nii')\n return outputs\n", "issue": "ICC: Cannot set the undefined 'sessions_F_map' attribute of a 'ICCOutputSpec' object\n### Summary\r\n\r\nTrying to run ICC raises a `TraitError` with the following description: \"TraitError: Cannot set the undefined 'sessions_F_map' attribute of a 'ICCOutputSpec' object.\"\r\nI may be doing things wrong, in which case a more informative error message would be welcome. Please look at the code snippet under _Script/Workflow Details_.\r\n\r\n_Note:_ I do get maps (icc_map.nii, session_var_map.nii, subject_var_map.nii) as an output in my `os.getcwd()`.\r\n\r\n### Actual behavior\r\nICC triggers an exception.\r\n\r\n### Expected behavior\r\nICC runs smoothly.\r\n\r\n### How to replicate the behavior\r\n```bash\r\nmkdir /Users/user/test\r\nmv mask.nii.gz sub-{01,02}_ses-{01,02}.nii.gz /Users/user/test\r\n# Where these niftis are sensible \"ICC compatible\" files\r\n# And now run code pasted below\r\n```\r\n### Script/Workflow details\r\n\r\nPlease put URL to code or code here (if not too long).\r\n```python\r\nimport os.path\r\nfrom nipype.algorithms import icc\r\n\r\nproject_dir = '/Users/user/test/'\r\n\r\ndef fname(sub, ses):\r\n return os.path.join(project_dir, f'sub-{sub}_ses-{ses}.nii.gz')\r\n\r\nlst = [[fname(1, 1), fname(1, 2)], \r\n [fname(2, 1), fname(2, 2)]]\r\nmask = os.path.join(project_dir, 'mask.nii.gz')\r\n\r\nx = icc.ICC(subjects_sessions=lst, mask=mask)\r\nx.run()\r\n```\r\n\r\n### Platform details:\r\n\r\nPlease paste the output of: `python -c \"import nipype; print(nipype.get_info()); print(nipype.__version__)\"`\r\n\r\n```python\r\n{'commit_hash': '%h',\r\n 'commit_source': 'archive substitution',\r\n 'networkx_version': '2.0',\r\n 'nibabel_version': '2.2.1',\r\n 'nipype_version': '1.0.0',\r\n 'numpy_version': '1.13.3',\r\n 'pkg_path': '/Users/user/anaconda3/lib/python3.6/site-packages/nipype',\r\n 'scipy_version': '1.0.0',\r\n 'sys_executable': '/Users/user/anaconda3/bin/python',\r\n 'sys_platform': 'darwin',\r\n 'sys_version': '3.6.4 | packaged by conda-forge | (default, Dec 23 2017, 16:54:01) \\n[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)]',\r\n 'traits_version': '4.6.0'}\r\n\r\n```\r\n\r\n### Execution environment\r\n\r\nChoose one\r\n- My python environment outside container\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import (print_function, division, unicode_literals,\n absolute_import)\nfrom builtins import range\nimport os\nimport numpy as np\nfrom numpy import ones, kron, mean, eye, hstack, dot, tile\nimport nibabel as nb\nfrom scipy.linalg import pinv\nfrom ..interfaces.base import BaseInterfaceInputSpec, TraitedSpec, \\\n BaseInterface, traits, File\nfrom ..utils import NUMPY_MMAP\n\n\nclass ICCInputSpec(BaseInterfaceInputSpec):\n subjects_sessions = traits.List(\n traits.List(File(exists=True)),\n desc=\"n subjects m sessions 3D stat files\",\n mandatory=True)\n mask = File(exists=True, mandatory=True)\n\n\nclass ICCOutputSpec(TraitedSpec):\n icc_map = File(exists=True)\n session_var_map = 
File(exists=True, desc=\"variance between sessions\")\n subject_var_map = File(exists=True, desc=\"variance between subjects\")\n\n\nclass ICC(BaseInterface):\n '''\n Calculates Interclass Correlation Coefficient (3,1) as defined in\n P. E. Shrout & Joseph L. Fleiss (1979). \"Intraclass Correlations: Uses in\n Assessing Rater Reliability\". Psychological Bulletin 86 (2): 420-428. This\n particular implementation is aimed at relaibility (test-retest) studies.\n '''\n input_spec = ICCInputSpec\n output_spec = ICCOutputSpec\n\n def _run_interface(self, runtime):\n maskdata = nb.load(self.inputs.mask).get_data()\n maskdata = np.logical_not(\n np.logical_or(maskdata == 0, np.isnan(maskdata)))\n\n session_datas = [[\n nb.load(fname, mmap=NUMPY_MMAP).get_data()[maskdata].reshape(\n -1, 1) for fname in sessions\n ] for sessions in self.inputs.subjects_sessions]\n list_of_sessions = [\n np.dstack(session_data) for session_data in session_datas\n ]\n all_data = np.hstack(list_of_sessions)\n icc = np.zeros(session_datas[0][0].shape)\n session_F = np.zeros(session_datas[0][0].shape)\n session_var = np.zeros(session_datas[0][0].shape)\n subject_var = np.zeros(session_datas[0][0].shape)\n\n for x in range(icc.shape[0]):\n Y = all_data[x, :, :]\n icc[x], subject_var[x], session_var[x], session_F[\n x], _, _ = ICC_rep_anova(Y)\n\n nim = nb.load(self.inputs.subjects_sessions[0][0])\n new_data = np.zeros(nim.shape)\n new_data[maskdata] = icc.reshape(-1, )\n new_img = nb.Nifti1Image(new_data, nim.affine, nim.header)\n nb.save(new_img, 'icc_map.nii')\n\n new_data = np.zeros(nim.shape)\n new_data[maskdata] = session_var.reshape(-1, )\n new_img = nb.Nifti1Image(new_data, nim.affine, nim.header)\n nb.save(new_img, 'session_var_map.nii')\n\n new_data = np.zeros(nim.shape)\n new_data[maskdata] = subject_var.reshape(-1, )\n new_img = nb.Nifti1Image(new_data, nim.affine, nim.header)\n nb.save(new_img, 'subject_var_map.nii')\n\n return runtime\n\n def _list_outputs(self):\n outputs = self._outputs().get()\n outputs['icc_map'] = os.path.abspath('icc_map.nii')\n outputs['sessions_F_map'] = os.path.abspath('sessions_F_map.nii')\n outputs['session_var_map'] = os.path.abspath('session_var_map.nii')\n outputs['subject_var_map'] = os.path.abspath('subject_var_map.nii')\n return outputs\n\n\ndef ICC_rep_anova(Y):\n '''\n the data Y are entered as a 'table' ie subjects are in rows and repeated\n measures in columns\n\n One Sample Repeated measure ANOVA\n\n Y = XB + E with X = [FaTor / Subjects]\n '''\n\n [nb_subjects, nb_conditions] = Y.shape\n dfc = nb_conditions - 1\n dfe = (nb_subjects - 1) * dfc\n dfr = nb_subjects - 1\n\n # Compute the repeated measure effect\n # ------------------------------------\n\n # Sum Square Total\n mean_Y = mean(Y)\n SST = ((Y - mean_Y)**2).sum()\n\n # create the design matrix for the different levels\n x = kron(eye(nb_conditions), ones((nb_subjects, 1))) # sessions\n x0 = tile(eye(nb_subjects), (nb_conditions, 1)) # subjects\n X = hstack([x, x0])\n\n # Sum Square Error\n predicted_Y = dot(dot(dot(X, pinv(dot(X.T, X))), X.T), Y.flatten('F'))\n residuals = Y.flatten('F') - predicted_Y\n SSE = (residuals**2).sum()\n\n residuals.shape = Y.shape\n\n MSE = SSE / dfe\n\n # Sum square session effect - between colums/sessions\n SSC = ((mean(Y, 0) - mean_Y)**2).sum() * nb_subjects\n MSC = SSC / dfc / nb_subjects\n\n session_effect_F = MSC / MSE\n\n # Sum Square subject effect - between rows/subjects\n SSR = SST - SSC - SSE\n MSR = SSR / dfr\n\n # ICC(3,1) = (mean square subjeT - mean square error) /\n # 
(mean square subjeT + (k-1)*-mean square error)\n ICC = (MSR - MSE) / (MSR + dfc * MSE)\n\n e_var = MSE # variance of error\n r_var = (MSR - MSE) / nb_conditions # variance between subjects\n\n return ICC, r_var, e_var, session_effect_F, dfc, dfe\n", "path": "nipype/algorithms/icc.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import (print_function, division, unicode_literals,\n absolute_import)\nfrom builtins import range\nimport os\nimport numpy as np\nfrom numpy import ones, kron, mean, eye, hstack, dot, tile\nimport nibabel as nb\nfrom scipy.linalg import pinv\nfrom ..interfaces.base import BaseInterfaceInputSpec, TraitedSpec, \\\n BaseInterface, traits, File\nfrom ..utils import NUMPY_MMAP\n\n\nclass ICCInputSpec(BaseInterfaceInputSpec):\n subjects_sessions = traits.List(\n traits.List(File(exists=True)),\n desc=\"n subjects m sessions 3D stat files\",\n mandatory=True)\n mask = File(exists=True, mandatory=True)\n\n\nclass ICCOutputSpec(TraitedSpec):\n icc_map = File(exists=True)\n session_var_map = File(exists=True, desc=\"variance between sessions\")\n subject_var_map = File(exists=True, desc=\"variance between subjects\")\n\n\nclass ICC(BaseInterface):\n '''\n Calculates Interclass Correlation Coefficient (3,1) as defined in\n P. E. Shrout & Joseph L. Fleiss (1979). \"Intraclass Correlations: Uses in\n Assessing Rater Reliability\". Psychological Bulletin 86 (2): 420-428. This\n particular implementation is aimed at relaibility (test-retest) studies.\n '''\n input_spec = ICCInputSpec\n output_spec = ICCOutputSpec\n\n def _run_interface(self, runtime):\n maskdata = nb.load(self.inputs.mask).get_data()\n maskdata = np.logical_not(\n np.logical_or(maskdata == 0, np.isnan(maskdata)))\n\n session_datas = [[\n nb.load(fname, mmap=NUMPY_MMAP).get_data()[maskdata].reshape(\n -1, 1) for fname in sessions\n ] for sessions in self.inputs.subjects_sessions]\n list_of_sessions = [\n np.dstack(session_data) for session_data in session_datas\n ]\n all_data = np.hstack(list_of_sessions)\n icc = np.zeros(session_datas[0][0].shape)\n session_F = np.zeros(session_datas[0][0].shape)\n session_var = np.zeros(session_datas[0][0].shape)\n subject_var = np.zeros(session_datas[0][0].shape)\n\n for x in range(icc.shape[0]):\n Y = all_data[x, :, :]\n icc[x], subject_var[x], session_var[x], session_F[\n x], _, _ = ICC_rep_anova(Y)\n\n nim = nb.load(self.inputs.subjects_sessions[0][0])\n new_data = np.zeros(nim.shape)\n new_data[maskdata] = icc.reshape(-1, )\n new_img = nb.Nifti1Image(new_data, nim.affine, nim.header)\n nb.save(new_img, 'icc_map.nii')\n\n new_data = np.zeros(nim.shape)\n new_data[maskdata] = session_var.reshape(-1, )\n new_img = nb.Nifti1Image(new_data, nim.affine, nim.header)\n nb.save(new_img, 'session_var_map.nii')\n\n new_data = np.zeros(nim.shape)\n new_data[maskdata] = subject_var.reshape(-1, )\n new_img = nb.Nifti1Image(new_data, nim.affine, nim.header)\n nb.save(new_img, 'subject_var_map.nii')\n\n return runtime\n\n def _list_outputs(self):\n outputs = self._outputs().get()\n outputs['icc_map'] = os.path.abspath('icc_map.nii')\n outputs['session_var_map'] = os.path.abspath('session_var_map.nii')\n outputs['subject_var_map'] = os.path.abspath('subject_var_map.nii')\n return outputs\n\n\ndef ICC_rep_anova(Y):\n '''\n the data Y are entered as a 'table' ie subjects are in rows and repeated\n measures in columns\n\n One Sample Repeated measure ANOVA\n\n Y = XB + E with X = [FaTor / Subjects]\n '''\n\n [nb_subjects, nb_conditions] = Y.shape\n dfc = 
nb_conditions - 1\n dfe = (nb_subjects - 1) * dfc\n dfr = nb_subjects - 1\n\n # Compute the repeated measure effect\n # ------------------------------------\n\n # Sum Square Total\n mean_Y = mean(Y)\n SST = ((Y - mean_Y)**2).sum()\n\n # create the design matrix for the different levels\n x = kron(eye(nb_conditions), ones((nb_subjects, 1))) # sessions\n x0 = tile(eye(nb_subjects), (nb_conditions, 1)) # subjects\n X = hstack([x, x0])\n\n # Sum Square Error\n predicted_Y = dot(dot(dot(X, pinv(dot(X.T, X))), X.T), Y.flatten('F'))\n residuals = Y.flatten('F') - predicted_Y\n SSE = (residuals**2).sum()\n\n residuals.shape = Y.shape\n\n MSE = SSE / dfe\n\n # Sum square session effect - between colums/sessions\n SSC = ((mean(Y, 0) - mean_Y)**2).sum() * nb_subjects\n MSC = SSC / dfc / nb_subjects\n\n session_effect_F = MSC / MSE\n\n # Sum Square subject effect - between rows/subjects\n SSR = SST - SSC - SSE\n MSR = SSR / dfr\n\n # ICC(3,1) = (mean square subjeT - mean square error) /\n # (mean square subjeT + (k-1)*-mean square error)\n ICC = (MSR - MSE) / (MSR + dfc * MSE)\n\n e_var = MSE # variance of error\n r_var = (MSR - MSE) / nb_conditions # variance between subjects\n\n return ICC, r_var, e_var, session_effect_F, dfc, dfe\n", "path": "nipype/algorithms/icc.py"}]} | 2,538 | 143 |
gh_patches_debug_13547 | rasdani/github-patches | git_diff | kartoza__prj.app-263 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Disqus functionality is currently broken
There should be disqus inline chat widgets on each version page and each entry page. Currently these are not working - can we work to fix it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django_project/core/settings/project.py`
Content:
```
1 # coding=utf-8
2
3 """Project level settings.
4
5 Adjust these values as needed but don't commit passwords etc. to any public
6 repository!
7 """
8
9 import os # noqa
10 from django.utils.translation import ugettext_lazy as _
11 from .utils import absolute_path
12 from .contrib import * # noqa
13
14 # Project apps
15 INSTALLED_APPS += (
16 'base',
17 'changes',
18 'github_issue',
19 'vota',
20 'disqus',
21 )
22
23 # Due to profile page does not available, this will redirect to home page after login
24 LOGIN_REDIRECT_URL = '/'
25
26 # How many versions to list in each project box
27 PROJECT_VERSION_LIST_SIZE = 10
28
29 # Set debug to false for production
30 DEBUG = TEMPLATE_DEBUG = False
31
32 SOUTH_TESTS_MIGRATE = False
33
34
35 # Set languages which want to be translated
36 LANGUAGES = (
37 ('en', _('English')),
38 ('af', _('Afrikaans')),
39 ('id', _('Indonesian')),
40 ('ko', _('Korean')),
41 )
42
43 # Set storage path for the translation files
44 LOCALE_PATHS = (absolute_path('locale'),)
45
46
47 MIDDLEWARE_CLASSES = (
48 # For nav bar generation
49 'core.custom_middleware.NavContextMiddleware',
50 ) + MIDDLEWARE_CLASSES
51
52 # Project specific javascript files to be pipelined
53 # For third party libs like jquery should go in contrib.py
54 PIPELINE_JS['project'] = {
55 'source_filenames': (
56 'js/csrf-ajax.js',
57 'js/changelog.js',
58 'js/github-issue.js'
59 ),
60 'output_filename': 'js/project.js',
61 }
62
63 # Project specific css files to be pipelined
64 # For third party libs like bootstrap should go in contrib.py
65 PIPELINE_CSS['project'] = {
66 'source_filenames': (
67 'css/changelog.css',
68 ),
69 'output_filename': 'css/project.css',
70 'extra_context': {
71 'media': 'screen, projection',
72 },
73 }
74
```
Path: `django_project/core/settings/contrib.py`
Content:
```
1 # coding=utf-8
2 """
3 core.settings.contrib
4 """
5 from .base import * # noqa
6
7 # Extra installed apps - grapelli needs to be added before others
8 INSTALLED_APPS = (
9 'grappelli',
10 ) + INSTALLED_APPS
11
12 INSTALLED_APPS += (
13 'raven.contrib.django.raven_compat', # enable Raven plugin
14 'crispy_forms',
15 'widget_tweaks', # lets us add some bootstrap css to form elements
16 'easy_thumbnails',
17 'reversion',
18 'rosetta',
19 'embed_video',
20 'django_hashedfilenamestorage',
21 'django_countries', # for sponsor addresses
22 # 'user_map',
23 )
24
25
26 MIGRATION_MODULES = {'accounts': 'core.migration'}
27
28 GRAPPELLI_ADMIN_TITLE = 'Site administration panel'
29
30 STOP_WORDS = (
31 'a', 'an', 'and', 'if', 'is', 'the', 'in', 'i', 'you', 'other',
32 'this', 'that'
33 )
34
35 CRISPY_TEMPLATE_PACK = 'bootstrap3'
36
37 # Easy-thumbnails options
38 THUMBNAIL_SUBDIR = 'thumbnails'
39 THUMBNAIL_ALIASES = {
40 '': {
41 'entry': {'size': (50, 50), 'crop': True},
42 'medium-entry': {'size': (100, 100), 'crop': True},
43 'large-entry': {'size': (400, 300), 'crop': True},
44 'thumb300x200': {'size': (300, 200), 'crop': True},
45 },
46 }
47
48 # Pipeline related settings
49
50 INSTALLED_APPS += (
51 'pipeline',)
52
53 MIDDLEWARE_CLASSES += (
54 # For rosetta localisation
55 'django.middleware.locale.LocaleMiddleware',
56 )
57
58 DEFAULT_FILE_STORAGE = (
59 'django_hashedfilenamestorage.storage.HashedFilenameFileSystemStorage')
60
61 # use underscore template function
62 PIPELINE_TEMPLATE_FUNC = '_.template'
63
64 # enable cached storage - requires uglify.js (node.js)
65 STATICFILES_STORAGE = 'pipeline.storage.PipelineCachedStorage'
66
67 # Contributed / third party js libs for pipeline compression
68 # For hand rolled js for this app, use project.py
69 PIPELINE_JS = {}
70
71 # Contributed / third party css for pipeline compression
72 # For hand rolled css for this app, use project.py
73 PIPELINE_CSS = {}
74
75 # These get enabled in prod.py
76 PIPELINE_ENABLED = False
77 PIPELINE_CSS_COMPRESSOR = None
78 PIPELINE_JS_COMPRESSOR = None
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django_project/core/settings/contrib.py b/django_project/core/settings/contrib.py
--- a/django_project/core/settings/contrib.py
+++ b/django_project/core/settings/contrib.py
@@ -20,8 +20,12 @@
'django_hashedfilenamestorage',
'django_countries', # for sponsor addresses
# 'user_map',
+ 'disqus',
)
+# Set disqus and shortname
+# noinspection PyUnresolvedReferences
+from .secret import DISQUS_WEBSITE_SHORTNAME # noqa
MIGRATION_MODULES = {'accounts': 'core.migration'}
diff --git a/django_project/core/settings/project.py b/django_project/core/settings/project.py
--- a/django_project/core/settings/project.py
+++ b/django_project/core/settings/project.py
@@ -17,7 +17,6 @@
'changes',
'github_issue',
'vota',
- 'disqus',
)
# Due to profile page does not available, this will redirect to home page after login
| {"golden_diff": "diff --git a/django_project/core/settings/contrib.py b/django_project/core/settings/contrib.py\n--- a/django_project/core/settings/contrib.py\n+++ b/django_project/core/settings/contrib.py\n@@ -20,8 +20,12 @@\n 'django_hashedfilenamestorage',\n 'django_countries', # for sponsor addresses\n # 'user_map',\n+ 'disqus',\n )\n \n+# Set disqus and shortname\n+# noinspection PyUnresolvedReferences\n+from .secret import DISQUS_WEBSITE_SHORTNAME # noqa\n \n MIGRATION_MODULES = {'accounts': 'core.migration'}\n \ndiff --git a/django_project/core/settings/project.py b/django_project/core/settings/project.py\n--- a/django_project/core/settings/project.py\n+++ b/django_project/core/settings/project.py\n@@ -17,7 +17,6 @@\n 'changes',\n 'github_issue',\n 'vota',\n- 'disqus',\n )\n \n # Due to profile page does not available, this will redirect to home page after login\n", "issue": "Disqus functionality is currently broken\nThere should be disqus inline chat widgets on each version page and each entry page. Currently these are not working - can we work to fix it.\n\n", "before_files": [{"content": "# coding=utf-8\n\n\"\"\"Project level settings.\n\nAdjust these values as needed but don't commit passwords etc. to any public\nrepository!\n\"\"\"\n\nimport os # noqa\nfrom django.utils.translation import ugettext_lazy as _\nfrom .utils import absolute_path\nfrom .contrib import * # noqa\n\n# Project apps\nINSTALLED_APPS += (\n 'base',\n 'changes',\n 'github_issue',\n 'vota',\n 'disqus',\n)\n\n# Due to profile page does not available, this will redirect to home page after login\nLOGIN_REDIRECT_URL = '/'\n\n# How many versions to list in each project box\nPROJECT_VERSION_LIST_SIZE = 10\n\n# Set debug to false for production\nDEBUG = TEMPLATE_DEBUG = False\n\nSOUTH_TESTS_MIGRATE = False\n\n\n# Set languages which want to be translated\nLANGUAGES = (\n ('en', _('English')),\n ('af', _('Afrikaans')),\n ('id', _('Indonesian')),\n ('ko', _('Korean')),\n)\n\n# Set storage path for the translation files\nLOCALE_PATHS = (absolute_path('locale'),)\n\n\nMIDDLEWARE_CLASSES = (\n # For nav bar generation\n 'core.custom_middleware.NavContextMiddleware',\n) + MIDDLEWARE_CLASSES\n\n# Project specific javascript files to be pipelined\n# For third party libs like jquery should go in contrib.py\nPIPELINE_JS['project'] = {\n 'source_filenames': (\n 'js/csrf-ajax.js',\n 'js/changelog.js',\n 'js/github-issue.js'\n ),\n 'output_filename': 'js/project.js',\n}\n\n# Project specific css files to be pipelined\n# For third party libs like bootstrap should go in contrib.py\nPIPELINE_CSS['project'] = {\n 'source_filenames': (\n 'css/changelog.css',\n ),\n 'output_filename': 'css/project.css',\n 'extra_context': {\n 'media': 'screen, projection',\n },\n}\n", "path": "django_project/core/settings/project.py"}, {"content": "# coding=utf-8\n\"\"\"\ncore.settings.contrib\n\"\"\"\nfrom .base import * # noqa\n\n# Extra installed apps - grapelli needs to be added before others\nINSTALLED_APPS = (\n 'grappelli',\n) + INSTALLED_APPS\n\nINSTALLED_APPS += (\n 'raven.contrib.django.raven_compat', # enable Raven plugin\n 'crispy_forms',\n 'widget_tweaks', # lets us add some bootstrap css to form elements\n 'easy_thumbnails',\n 'reversion',\n 'rosetta',\n 'embed_video',\n 'django_hashedfilenamestorage',\n 'django_countries', # for sponsor addresses\n # 'user_map',\n)\n\n\nMIGRATION_MODULES = {'accounts': 'core.migration'}\n\nGRAPPELLI_ADMIN_TITLE = 'Site administration panel'\n\nSTOP_WORDS = (\n 'a', 'an', 'and', 'if', 'is', 'the', 
'in', 'i', 'you', 'other',\n 'this', 'that'\n)\n\nCRISPY_TEMPLATE_PACK = 'bootstrap3'\n\n# Easy-thumbnails options\nTHUMBNAIL_SUBDIR = 'thumbnails'\nTHUMBNAIL_ALIASES = {\n '': {\n 'entry': {'size': (50, 50), 'crop': True},\n 'medium-entry': {'size': (100, 100), 'crop': True},\n 'large-entry': {'size': (400, 300), 'crop': True},\n 'thumb300x200': {'size': (300, 200), 'crop': True},\n },\n}\n\n# Pipeline related settings\n\nINSTALLED_APPS += (\n 'pipeline',)\n\nMIDDLEWARE_CLASSES += (\n # For rosetta localisation\n 'django.middleware.locale.LocaleMiddleware',\n)\n\nDEFAULT_FILE_STORAGE = (\n 'django_hashedfilenamestorage.storage.HashedFilenameFileSystemStorage')\n\n# use underscore template function\nPIPELINE_TEMPLATE_FUNC = '_.template'\n\n# enable cached storage - requires uglify.js (node.js)\nSTATICFILES_STORAGE = 'pipeline.storage.PipelineCachedStorage'\n\n# Contributed / third party js libs for pipeline compression\n# For hand rolled js for this app, use project.py\nPIPELINE_JS = {}\n\n# Contributed / third party css for pipeline compression\n# For hand rolled css for this app, use project.py\nPIPELINE_CSS = {}\n\n# These get enabled in prod.py\nPIPELINE_ENABLED = False\nPIPELINE_CSS_COMPRESSOR = None\nPIPELINE_JS_COMPRESSOR = None\n", "path": "django_project/core/settings/contrib.py"}], "after_files": [{"content": "# coding=utf-8\n\n\"\"\"Project level settings.\n\nAdjust these values as needed but don't commit passwords etc. to any public\nrepository!\n\"\"\"\n\nimport os # noqa\nfrom django.utils.translation import ugettext_lazy as _\nfrom .utils import absolute_path\nfrom .contrib import * # noqa\n\n# Project apps\nINSTALLED_APPS += (\n 'base',\n 'changes',\n 'github_issue',\n 'vota',\n)\n\n# Due to profile page does not available, this will redirect to home page after login\nLOGIN_REDIRECT_URL = '/'\n\n# How many versions to list in each project box\nPROJECT_VERSION_LIST_SIZE = 10\n\n# Set debug to false for production\nDEBUG = TEMPLATE_DEBUG = False\n\nSOUTH_TESTS_MIGRATE = False\n\n\n# Set languages which want to be translated\nLANGUAGES = (\n ('en', _('English')),\n ('af', _('Afrikaans')),\n ('id', _('Indonesian')),\n ('ko', _('Korean')),\n)\n\n# Set storage path for the translation files\nLOCALE_PATHS = (absolute_path('locale'),)\n\n\nMIDDLEWARE_CLASSES = (\n # For nav bar generation\n 'core.custom_middleware.NavContextMiddleware',\n) + MIDDLEWARE_CLASSES\n\n# Project specific javascript files to be pipelined\n# For third party libs like jquery should go in contrib.py\nPIPELINE_JS['project'] = {\n 'source_filenames': (\n 'js/csrf-ajax.js',\n 'js/changelog.js',\n 'js/github-issue.js'\n ),\n 'output_filename': 'js/project.js',\n}\n\n# Project specific css files to be pipelined\n# For third party libs like bootstrap should go in contrib.py\nPIPELINE_CSS['project'] = {\n 'source_filenames': (\n 'css/changelog.css',\n ),\n 'output_filename': 'css/project.css',\n 'extra_context': {\n 'media': 'screen, projection',\n },\n}\n", "path": "django_project/core/settings/project.py"}, {"content": "# coding=utf-8\n\"\"\"\ncore.settings.contrib\n\"\"\"\nfrom .base import * # noqa\n\n# Extra installed apps - grapelli needs to be added before others\nINSTALLED_APPS = (\n 'grappelli',\n) + INSTALLED_APPS\n\nINSTALLED_APPS += (\n 'raven.contrib.django.raven_compat', # enable Raven plugin\n 'crispy_forms',\n 'widget_tweaks', # lets us add some bootstrap css to form elements\n 'easy_thumbnails',\n 'reversion',\n 'rosetta',\n 'embed_video',\n 'django_hashedfilenamestorage',\n 'django_countries', # 
for sponsor addresses\n # 'user_map',\n 'disqus',\n)\n\n# Set disqus and shortname\n# noinspection PyUnresolvedReferences\nfrom .secret import DISQUS_WEBSITE_SHORTNAME # noqa\n\nMIGRATION_MODULES = {'accounts': 'core.migration'}\n\nGRAPPELLI_ADMIN_TITLE = 'Site administration panel'\n\nSTOP_WORDS = (\n 'a', 'an', 'and', 'if', 'is', 'the', 'in', 'i', 'you', 'other',\n 'this', 'that'\n)\n\nCRISPY_TEMPLATE_PACK = 'bootstrap3'\n\n# Easy-thumbnails options\nTHUMBNAIL_SUBDIR = 'thumbnails'\nTHUMBNAIL_ALIASES = {\n '': {\n 'entry': {'size': (50, 50), 'crop': True},\n 'medium-entry': {'size': (100, 100), 'crop': True},\n 'large-entry': {'size': (400, 300), 'crop': True},\n 'thumb300x200': {'size': (300, 200), 'crop': True},\n },\n}\n\n# Pipeline related settings\n\nINSTALLED_APPS += (\n 'pipeline',)\n\nMIDDLEWARE_CLASSES += (\n # For rosetta localisation\n 'django.middleware.locale.LocaleMiddleware',\n)\n\nDEFAULT_FILE_STORAGE = (\n 'django_hashedfilenamestorage.storage.HashedFilenameFileSystemStorage')\n\n# use underscore template function\nPIPELINE_TEMPLATE_FUNC = '_.template'\n\n# enable cached storage - requires uglify.js (node.js)\nSTATICFILES_STORAGE = 'pipeline.storage.PipelineCachedStorage'\n\n# Contributed / third party js libs for pipeline compression\n# For hand rolled js for this app, use project.py\nPIPELINE_JS = {}\n\n# Contributed / third party css for pipeline compression\n# For hand rolled css for this app, use project.py\nPIPELINE_CSS = {}\n\n# These get enabled in prod.py\nPIPELINE_ENABLED = False\nPIPELINE_CSS_COMPRESSOR = None\nPIPELINE_JS_COMPRESSOR = None\n", "path": "django_project/core/settings/contrib.py"}]} | 1,584 | 228 |
gh_patches_debug_20418 | rasdani/github-patches | git_diff | nonebot__nonebot2-238 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: the built-in single_session plugin has a few bugs
**Describe the problem:**
The built-in `single_session` plugin can only handle `event`s that provide a `get_session_id` method; if a `matcher` listens for a `metaevent`, the `run_preprocessor` inside the plugin raises an error.
**How to reproduce?**
[This line](https://github.com/nonebot/nonebot2/blob/93ffc93a80cf9e3103eb4a164e7b32ab3cdd0882/nonebot/plugins/single_session.py#L13) restricts the plugin to events that have `get_session_id`, but events without that method get no extra handling, so an error is raised.
Besides that, the [conditional check](https://github.com/nonebot/nonebot2/blob/93ffc93a80cf9e3103eb4a164e7b32ab3cdd0882/nonebot/plugins/single_session.py#L16) below it is also wrong: an event encountered for the first time should not be ignored.
**Expected result**
The plugin works normally
````
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nonebot/plugins/single_session.py`
Content:
```
1 from typing import Dict, Optional
2
3 from nonebot.typing import T_State
4 from nonebot.matcher import Matcher
5 from nonebot.adapters import Bot, Event
6 from nonebot.message import run_preprocessor, run_postprocessor, IgnoredException
7
8 _running_matcher: Dict[str, int] = {}
9
10
11 @run_preprocessor
12 async def _(matcher: Matcher, bot: Bot, event: Event, state: T_State):
13 session_id = event.get_session_id()
14 event_id = id(event)
15
16 if _running_matcher.get(session_id, None) != event_id:
17 raise IgnoredException("Annother matcher running")
18
19 _running_matcher[session_id] = event_id
20
21
22 @run_postprocessor
23 async def _(matcher: Matcher, exception: Optional[Exception], bot: Bot, event: Event, state: T_State):
24 session_id = event.get_session_id()
25 if session_id in _running_matcher:
26 del _running_matcher[session_id]
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nonebot/plugins/single_session.py b/nonebot/plugins/single_session.py
--- a/nonebot/plugins/single_session.py
+++ b/nonebot/plugins/single_session.py
@@ -10,17 +10,23 @@
@run_preprocessor
async def _(matcher: Matcher, bot: Bot, event: Event, state: T_State):
- session_id = event.get_session_id()
- event_id = id(event)
-
- if _running_matcher.get(session_id, None) != event_id:
+ try:
+ session_id = event.get_session_id()
+ except Exception:
+ return
+ current_event_id = id(event)
+ event_id = _running_matcher.get(session_id, None)
+ if event_id and event_id != current_event_id:
raise IgnoredException("Annother matcher running")
- _running_matcher[session_id] = event_id
+ _running_matcher[session_id] = current_event_id
@run_postprocessor
async def _(matcher: Matcher, exception: Optional[Exception], bot: Bot, event: Event, state: T_State):
- session_id = event.get_session_id()
+ try:
+ session_id = event.get_session_id()
+ except Exception:
+ return
if session_id in _running_matcher:
del _running_matcher[session_id]
| {"golden_diff": "diff --git a/nonebot/plugins/single_session.py b/nonebot/plugins/single_session.py\n--- a/nonebot/plugins/single_session.py\n+++ b/nonebot/plugins/single_session.py\n@@ -10,17 +10,23 @@\n \n @run_preprocessor\n async def _(matcher: Matcher, bot: Bot, event: Event, state: T_State):\n- session_id = event.get_session_id()\n- event_id = id(event)\n-\n- if _running_matcher.get(session_id, None) != event_id:\n+ try:\n+ session_id = event.get_session_id()\n+ except Exception:\n+ return\n+ current_event_id = id(event)\n+ event_id = _running_matcher.get(session_id, None)\n+ if event_id and event_id != current_event_id:\n raise IgnoredException(\"Annother matcher running\")\n \n- _running_matcher[session_id] = event_id\n+ _running_matcher[session_id] = current_event_id\n \n \n @run_postprocessor\n async def _(matcher: Matcher, exception: Optional[Exception], bot: Bot, event: Event, state: T_State):\n- session_id = event.get_session_id()\n+ try:\n+ session_id = event.get_session_id()\n+ except Exception:\n+ return\n if session_id in _running_matcher:\n del _running_matcher[session_id]\n", "issue": "Bug: \u5185\u7f6e\u7684single_session\u63d2\u4ef6\u6709\u4e00\u4e9bbug\n**\u63cf\u8ff0\u95ee\u9898\uff1a**\r\n\r\n\u5185\u7f6e\u7684`single_session`\u63d2\u4ef6\u53ea\u80fd\u5904\u7406\u6709`get_session_id`\u65b9\u6cd5\u7684`event`\uff0c\u5982\u679c\u4e00\u4e2a`matcher`\u76d1\u542c\u4e86`metaevent`\uff0c\u90a3\u4e48\u5176\u4e2d\u7684`run_preprocessor`\u4f1a\u62a5\u9519\r\n\r\n**\u5982\u4f55\u590d\u73b0\uff1f**\r\n\r\n[\u8fd9\u4e00\u884c](https://github.com/nonebot/nonebot2/blob/93ffc93a80cf9e3103eb4a164e7b32ab3cdd0882/nonebot/plugins/single_session.py#L13)\u9650\u5236\u4e86\u53ea\u80fd\u76d1\u542c\u6709`get_session_id`\u7684\u4e8b\u4ef6\uff0c\u4f46\u662f\u5bf9\u6ca1\u6709\u8fd9\u4e2a\u65b9\u6cd5\u7684\u4e8b\u4ef6\u6ca1\u6709\u505a\u989d\u5916\u7684\u5904\u7406\uff0c\u5bfc\u81f4\u62a5\u9519\u3002\r\n\u9664\u6b64\u4e4b\u5916\uff0c\u4e0b\u9762\u7684[\u5224\u65ad\u8bed\u53e5](https://github.com/nonebot/nonebot2/blob/93ffc93a80cf9e3103eb4a164e7b32ab3cdd0882/nonebot/plugins/single_session.py#L16)\u4e5f\u6709\u95ee\u9898\uff0c\u5982\u679c\u8fd9\u4e2a\u4e8b\u4ef6\u7b2c\u4e00\u6b21\u9047\u5230\u7684\u8bdd\u4e0d\u5e94\u8be5\u88ab\u5ffd\u7565\r\n\r\n**\u671f\u671b\u7684\u7ed3\u679c**\r\n\u63d2\u4ef6\u6b63\u5e38\u4f7f\u7528\r\n\r\n````\r\n\n", "before_files": [{"content": "from typing import Dict, Optional\n\nfrom nonebot.typing import T_State\nfrom nonebot.matcher import Matcher\nfrom nonebot.adapters import Bot, Event\nfrom nonebot.message import run_preprocessor, run_postprocessor, IgnoredException\n\n_running_matcher: Dict[str, int] = {}\n\n\n@run_preprocessor\nasync def _(matcher: Matcher, bot: Bot, event: Event, state: T_State):\n session_id = event.get_session_id()\n event_id = id(event)\n\n if _running_matcher.get(session_id, None) != event_id:\n raise IgnoredException(\"Annother matcher running\")\n\n _running_matcher[session_id] = event_id\n\n\n@run_postprocessor\nasync def _(matcher: Matcher, exception: Optional[Exception], bot: Bot, event: Event, state: T_State):\n session_id = event.get_session_id()\n if session_id in _running_matcher:\n del _running_matcher[session_id]\n", "path": "nonebot/plugins/single_session.py"}], "after_files": [{"content": "from typing import Dict, Optional\n\nfrom nonebot.typing import T_State\nfrom nonebot.matcher import Matcher\nfrom nonebot.adapters import Bot, Event\nfrom nonebot.message import run_preprocessor, run_postprocessor, 
IgnoredException\n\n_running_matcher: Dict[str, int] = {}\n\n\n@run_preprocessor\nasync def _(matcher: Matcher, bot: Bot, event: Event, state: T_State):\n try:\n session_id = event.get_session_id()\n except Exception:\n return\n current_event_id = id(event)\n event_id = _running_matcher.get(session_id, None)\n if event_id and event_id != current_event_id:\n raise IgnoredException(\"Annother matcher running\")\n\n _running_matcher[session_id] = current_event_id\n\n\n@run_postprocessor\nasync def _(matcher: Matcher, exception: Optional[Exception], bot: Bot, event: Event, state: T_State):\n try:\n session_id = event.get_session_id()\n except Exception:\n return\n if session_id in _running_matcher:\n del _running_matcher[session_id]\n", "path": "nonebot/plugins/single_session.py"}]} | 773 | 302 |
gh_patches_debug_42013 | rasdani/github-patches | git_diff | fonttools__fonttools-2762 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TTFont.getGlyphSet() returning _TTVarGlyphSet where type hints/typechecker expect _TTGlyphSet
The new `_TTVarGlyphSet` (#2738) returned from `TTFont.getGlyphSet()` is not a subclass of the old `_TTGlyphSet`, and the `_TTVarGlyphGlyf` it contains is not a subclass of the old `_TTGlyph` either.
We have some internal code that uses type hints and is typechecked using pytype, which breaks with the latest fonttools 4.36.0 following the above change.
```
bad option 'fontTools.ttLib.ttGlyphSet._TTVarGlyphGlyf' in return type [bad-return-type]
Expected: fontTools.ttLib.ttGlyphSet._TTGlyph
```
I think we should revise the class hierarchy of these glyphset/glyph objects and make sure that we continue to return a `_TTGlyphSet` that contains generic `_TTGlyph` objects and work around their respective differences inside subclasses.
I think it's doable.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Lib/fontTools/ttLib/ttGlyphSet.py`
Content:
```
1 """GlyphSets returned by a TTFont."""
2
3 from fontTools.misc.fixedTools import otRound
4 from copy import copy
5
6 class _TTGlyphSet(object):
7
8 """Generic dict-like GlyphSet class that pulls metrics from hmtx and
9 glyph shape from TrueType or CFF.
10 """
11
12 def __init__(self, ttFont, glyphs, glyphType):
13 """Construct a new glyphset.
14
15 Args:
16 font (TTFont): The font object (used to get metrics).
17 glyphs (dict): A dictionary mapping glyph names to ``_TTGlyph`` objects.
18 glyphType (class): Either ``_TTGlyphCFF`` or ``_TTGlyphGlyf``.
19 """
20 self._glyphs = glyphs
21 self._hmtx = ttFont['hmtx']
22 self._vmtx = ttFont['vmtx'] if 'vmtx' in ttFont else None
23 self._glyphType = glyphType
24
25 def keys(self):
26 return list(self._glyphs.keys())
27
28 def has_key(self, glyphName):
29 return glyphName in self._glyphs
30
31 __contains__ = has_key
32
33 def __getitem__(self, glyphName):
34 horizontalMetrics = self._hmtx[glyphName]
35 verticalMetrics = self._vmtx[glyphName] if self._vmtx else None
36 return self._glyphType(
37 self, self._glyphs[glyphName], horizontalMetrics, verticalMetrics)
38
39 def __len__(self):
40 return len(self._glyphs)
41
42 def get(self, glyphName, default=None):
43 try:
44 return self[glyphName]
45 except KeyError:
46 return default
47
48 class _TTGlyph(object):
49
50 """Wrapper for a TrueType glyph that supports the Pen protocol, meaning
51 that it has .draw() and .drawPoints() methods that take a pen object as
52 their only argument. Additionally there are 'width' and 'lsb' attributes,
53 read from the 'hmtx' table.
54
55 If the font contains a 'vmtx' table, there will also be 'height' and 'tsb'
56 attributes.
57 """
58
59 def __init__(self, glyphset, glyph, horizontalMetrics, verticalMetrics=None):
60 """Construct a new _TTGlyph.
61
62 Args:
63 glyphset (_TTGlyphSet): A glyphset object used to resolve components.
64 glyph (ttLib.tables._g_l_y_f.Glyph): The glyph object.
65 horizontalMetrics (int, int): The glyph's width and left sidebearing.
66 """
67 self._glyphset = glyphset
68 self._glyph = glyph
69 self.width, self.lsb = horizontalMetrics
70 if verticalMetrics:
71 self.height, self.tsb = verticalMetrics
72 else:
73 self.height, self.tsb = None, None
74
75 def draw(self, pen):
76 """Draw the glyph onto ``pen``. See fontTools.pens.basePen for details
77 how that works.
78 """
79 self._glyph.draw(pen)
80
81 def drawPoints(self, pen):
82 # drawPoints is only implemented for _TTGlyphGlyf at this time.
83 raise NotImplementedError()
84
85 class _TTGlyphCFF(_TTGlyph):
86 pass
87
88 class _TTGlyphGlyf(_TTGlyph):
89
90 def draw(self, pen):
91 """Draw the glyph onto Pen. See fontTools.pens.basePen for details
92 how that works.
93 """
94 glyfTable = self._glyphset._glyphs
95 glyph = self._glyph
96 offset = self.lsb - glyph.xMin if hasattr(glyph, "xMin") else 0
97 glyph.draw(pen, glyfTable, offset)
98
99 def drawPoints(self, pen):
100 """Draw the glyph onto PointPen. See fontTools.pens.pointPen
101 for details how that works.
102 """
103 glyfTable = self._glyphset._glyphs
104 glyph = self._glyph
105 offset = self.lsb - glyph.xMin if hasattr(glyph, "xMin") else 0
106 glyph.drawPoints(pen, glyfTable, offset)
107
108
109
110 class _TTVarGlyphSet(object):
111
112 def __init__(self, font, location, normalized=False):
113 from fontTools.varLib.models import normalizeLocation, piecewiseLinearMap
114 self._ttFont = font
115 if not normalized:
116 axes = {a.axisTag: (a.minValue, a.defaultValue, a.maxValue) for a in font['fvar'].axes}
117 location = normalizeLocation(location, axes)
118 if 'avar' in font:
119 avar = font['avar']
120 avarSegments = avar.segments
121 new_location = {}
122 for axis_tag,value in location.items():
123 avarMapping = avarSegments.get(axis_tag, None)
124 if avarMapping is not None:
125 value = piecewiseLinearMap(value, avarMapping)
126 new_location[axis_tag] = value
127 location = new_location
128 del new_location
129
130 self.location = location
131
132 def keys(self):
133 return list(self._ttFont['glyf'].keys())
134
135 def has_key(self, glyphName):
136 return glyphName in self._ttFont['glyf']
137 __contains__ = has_key
138
139 def __getitem__(self, glyphName):
140 return _TTVarGlyphGlyf(self._ttFont, glyphName, self.location)
141
142 def get(self, glyphName, default=None):
143 try:
144 return self[glyphName]
145 except KeyError:
146 return default
147
148 def _setCoordinates(glyph, coord, glyfTable):
149 # Handle phantom points for (left, right, top, bottom) positions.
150 assert len(coord) >= 4
151 if not hasattr(glyph, 'xMin'):
152 glyph.recalcBounds(glyfTable)
153 leftSideX = coord[-4][0]
154 rightSideX = coord[-3][0]
155 topSideY = coord[-2][1]
156 bottomSideY = coord[-1][1]
157
158 for _ in range(4):
159 del coord[-1]
160
161 if glyph.isComposite():
162 assert len(coord) == len(glyph.components)
163 for p,comp in zip(coord, glyph.components):
164 if hasattr(comp, 'x'):
165 comp.x,comp.y = p
166 elif glyph.numberOfContours == 0:
167 assert len(coord) == 0
168 else:
169 assert len(coord) == len(glyph.coordinates)
170 glyph.coordinates = coord
171
172 glyph.recalcBounds(glyfTable)
173
174 horizontalAdvanceWidth = otRound(rightSideX - leftSideX)
175 verticalAdvanceWidth = otRound(topSideY - bottomSideY)
176 leftSideBearing = otRound(glyph.xMin - leftSideX)
177 return horizontalAdvanceWidth, leftSideBearing, verticalAdvanceWidth
178
179
180 class _TTVarGlyphGlyf(object):
181
182 def __init__(self, ttFont, glyphName, location):
183 self._ttFont = ttFont
184 self._glyphName = glyphName
185 self._location = location
186 self.width = None # draw fills it in
187
188 def draw(self, pen):
189 from fontTools.varLib.iup import iup_delta
190 from fontTools.ttLib.tables._g_l_y_f import GlyphCoordinates
191 from fontTools.varLib.models import supportScalar
192
193 glyf = self._ttFont['glyf']
194 hMetrics = self._ttFont['hmtx'].metrics
195 vMetrics = getattr(self._ttFont.get('vmtx'), 'metrics', None)
196
197 variations = self._ttFont['gvar'].variations[self._glyphName]
198 coordinates, _ = glyf._getCoordinatesAndControls(self._glyphName, hMetrics, vMetrics)
199 origCoords, endPts = None, None
200 for var in variations:
201 scalar = supportScalar(self._location, var.axes)
202 if not scalar:
203 continue
204 delta = var.coordinates
205 if None in delta:
206 if origCoords is None:
207 origCoords,control = glyf._getCoordinatesAndControls(self._glyphName, hMetrics, vMetrics)
208 endPts = control[1] if control[0] >= 1 else list(range(len(control[1])))
209 delta = iup_delta(delta, origCoords, endPts)
210 coordinates += GlyphCoordinates(delta) * scalar
211
212 glyph = copy(glyf[self._glyphName]) # Shallow copy
213 horizontalAdvanceWidth, leftSideBearing, verticalAdvanceWidth = _setCoordinates(glyph, coordinates, glyf)
214 self.width = horizontalAdvanceWidth
215 self.height = verticalAdvanceWidth
216 offset = leftSideBearing - glyph.xMin if hasattr(glyph, "xMin") else 0
217 glyph.draw(pen, glyf, offset)
218
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Lib/fontTools/ttLib/ttGlyphSet.py b/Lib/fontTools/ttLib/ttGlyphSet.py
--- a/Lib/fontTools/ttLib/ttGlyphSet.py
+++ b/Lib/fontTools/ttLib/ttGlyphSet.py
@@ -107,19 +107,22 @@
-class _TTVarGlyphSet(object):
+class _TTVarGlyphSet(_TTGlyphSet):
def __init__(self, font, location, normalized=False):
- from fontTools.varLib.models import normalizeLocation, piecewiseLinearMap
self._ttFont = font
+ self._glyphs = font['glyf']
+
if not normalized:
+ from fontTools.varLib.models import normalizeLocation, piecewiseLinearMap
+
axes = {a.axisTag: (a.minValue, a.defaultValue, a.maxValue) for a in font['fvar'].axes}
location = normalizeLocation(location, axes)
if 'avar' in font:
avar = font['avar']
avarSegments = avar.segments
new_location = {}
- for axis_tag,value in location.items():
+ for axis_tag, value in location.items():
avarMapping = avarSegments.get(axis_tag, None)
if avarMapping is not None:
value = piecewiseLinearMap(value, avarMapping)
@@ -129,21 +132,11 @@
self.location = location
- def keys(self):
- return list(self._ttFont['glyf'].keys())
-
- def has_key(self, glyphName):
- return glyphName in self._ttFont['glyf']
- __contains__ = has_key
-
def __getitem__(self, glyphName):
+ if glyphName not in self._glyphs:
+ raise KeyError(glyphName)
return _TTVarGlyphGlyf(self._ttFont, glyphName, self.location)
- def get(self, glyphName, default=None):
- try:
- return self[glyphName]
- except KeyError:
- return default
def _setCoordinates(glyph, coord, glyfTable):
# Handle phantom points for (left, right, top, bottom) positions.
@@ -174,16 +167,25 @@
horizontalAdvanceWidth = otRound(rightSideX - leftSideX)
verticalAdvanceWidth = otRound(topSideY - bottomSideY)
leftSideBearing = otRound(glyph.xMin - leftSideX)
- return horizontalAdvanceWidth, leftSideBearing, verticalAdvanceWidth
+ topSideBearing = otRound(topSideY - glyph.yMax)
+ return (
+ horizontalAdvanceWidth,
+ leftSideBearing,
+ verticalAdvanceWidth,
+ topSideBearing,
+ )
-class _TTVarGlyphGlyf(object):
-
+class _TTVarGlyph(_TTGlyph):
def __init__(self, ttFont, glyphName, location):
self._ttFont = ttFont
self._glyphName = glyphName
self._location = location
- self.width = None # draw fills it in
+ # draw() fills these in
+ self.width = self.height = self.lsb = self.tsb = None
+
+
+class _TTVarGlyphGlyf(_TTVarGlyph):
def draw(self, pen):
from fontTools.varLib.iup import iup_delta
@@ -210,8 +212,10 @@
coordinates += GlyphCoordinates(delta) * scalar
glyph = copy(glyf[self._glyphName]) # Shallow copy
- horizontalAdvanceWidth, leftSideBearing, verticalAdvanceWidth = _setCoordinates(glyph, coordinates, glyf)
- self.width = horizontalAdvanceWidth
- self.height = verticalAdvanceWidth
- offset = leftSideBearing - glyph.xMin if hasattr(glyph, "xMin") else 0
+ width, lsb, height, tsb = _setCoordinates(glyph, coordinates, glyf)
+ self.width = width
+ self.lsb = lsb
+ self.height = height
+ self.tsb = tsb
+ offset = lsb - glyph.xMin if hasattr(glyph, "xMin") else 0
glyph.draw(pen, glyf, offset)
| {"golden_diff": "diff --git a/Lib/fontTools/ttLib/ttGlyphSet.py b/Lib/fontTools/ttLib/ttGlyphSet.py\n--- a/Lib/fontTools/ttLib/ttGlyphSet.py\n+++ b/Lib/fontTools/ttLib/ttGlyphSet.py\n@@ -107,19 +107,22 @@\n \n \n \n-class _TTVarGlyphSet(object):\n+class _TTVarGlyphSet(_TTGlyphSet):\n \n \tdef __init__(self, font, location, normalized=False):\n-\t\tfrom fontTools.varLib.models import normalizeLocation, piecewiseLinearMap\n \t\tself._ttFont = font\n+\t\tself._glyphs = font['glyf']\n+\n \t\tif not normalized:\n+\t\t\tfrom fontTools.varLib.models import normalizeLocation, piecewiseLinearMap\n+\n \t\t\taxes = {a.axisTag: (a.minValue, a.defaultValue, a.maxValue) for a in font['fvar'].axes}\n \t\t\tlocation = normalizeLocation(location, axes)\n \t\t\tif 'avar' in font:\n \t\t\t\tavar = font['avar']\n \t\t\t\tavarSegments = avar.segments\n \t\t\t\tnew_location = {}\n-\t\t\t\tfor axis_tag,value in location.items():\n+\t\t\t\tfor axis_tag, value in location.items():\n \t\t\t\t\tavarMapping = avarSegments.get(axis_tag, None)\n \t\t\t\t\tif avarMapping is not None:\n \t\t\t\t\t\tvalue = piecewiseLinearMap(value, avarMapping)\n@@ -129,21 +132,11 @@\n \n \t\tself.location = location\n \n-\tdef keys(self):\n-\t\treturn list(self._ttFont['glyf'].keys())\n-\n-\tdef has_key(self, glyphName):\n-\t\treturn glyphName in self._ttFont['glyf']\n-\t__contains__ = has_key\n-\n \tdef __getitem__(self, glyphName):\n+\t\tif glyphName not in self._glyphs:\n+\t\t\traise KeyError(glyphName)\n \t\treturn _TTVarGlyphGlyf(self._ttFont, glyphName, self.location)\n \n-\tdef get(self, glyphName, default=None):\n-\t\ttry:\n-\t\t\treturn self[glyphName]\n-\t\texcept KeyError:\n-\t\t\treturn default\n \n def _setCoordinates(glyph, coord, glyfTable):\n \t# Handle phantom points for (left, right, top, bottom) positions.\n@@ -174,16 +167,25 @@\n \thorizontalAdvanceWidth = otRound(rightSideX - leftSideX)\n \tverticalAdvanceWidth = otRound(topSideY - bottomSideY)\n \tleftSideBearing = otRound(glyph.xMin - leftSideX)\n-\treturn horizontalAdvanceWidth, leftSideBearing, verticalAdvanceWidth\n+\ttopSideBearing = otRound(topSideY - glyph.yMax)\n+\treturn (\n+\t\thorizontalAdvanceWidth,\n+\t\tleftSideBearing,\n+\t\tverticalAdvanceWidth,\n+\t\ttopSideBearing,\n+\t)\n \n \n-class _TTVarGlyphGlyf(object):\n-\n+class _TTVarGlyph(_TTGlyph):\n \tdef __init__(self, ttFont, glyphName, location):\n \t\tself._ttFont = ttFont\n \t\tself._glyphName = glyphName\n \t\tself._location = location\n-\t\tself.width = None # draw fills it in\n+\t\t# draw() fills these in\n+\t\tself.width = self.height = self.lsb = self.tsb = None\n+\n+\n+class _TTVarGlyphGlyf(_TTVarGlyph):\n \n \tdef draw(self, pen):\n \t\tfrom fontTools.varLib.iup import iup_delta\n@@ -210,8 +212,10 @@\n \t\t\tcoordinates += GlyphCoordinates(delta) * scalar\n \n \t\tglyph = copy(glyf[self._glyphName]) # Shallow copy\n-\t\thorizontalAdvanceWidth, leftSideBearing, verticalAdvanceWidth = _setCoordinates(glyph, coordinates, glyf)\n-\t\tself.width = horizontalAdvanceWidth\n-\t\tself.height = verticalAdvanceWidth\n-\t\toffset = leftSideBearing - glyph.xMin if hasattr(glyph, \"xMin\") else 0\n+\t\twidth, lsb, height, tsb = _setCoordinates(glyph, coordinates, glyf)\n+\t\tself.width = width\n+\t\tself.lsb = lsb\n+\t\tself.height = height\n+\t\tself.tsb = tsb\n+\t\toffset = lsb - glyph.xMin if hasattr(glyph, \"xMin\") else 0\n \t\tglyph.draw(pen, glyf, offset)\n", "issue": "TTFont.getGlyphSet() returning _TTVarGlyphSet where type hints/typechecker expect _TTGlyphSet\nthe new `_TTVarGlyphSet` (#2738) 
that is returned from `TTFont.getGlyphSet()` is not a subclass of the old `_TTGlyphSet`, and neither is the `_TTVarGlyphGlyf` that it contains a subclass of the old `_TTGlyph`.\r\nWe have some internal code that uses type hints and is typechecked using pytype, which breaks with the latest fonttools 4.36.0 following the above change.\r\n\r\n```\r\nbad option 'fontTools.ttLib.ttGlyphSet._TTVarGlyphGlyf' in return type [bad-return-type]\r\n Expected: fontTools.ttLib.ttGlyphSet._TTGlyph\r\n```\r\n\r\nI think we should revise the class hierarchy of these glyphset/glyph objects and make sure that we continue to return a `_TTGlyphSet` that contains generic `_TTGlyph` objects and work around their respective differences inside subclasses.\r\nI think it's doable.\n", "before_files": [{"content": "\"\"\"GlyphSets returned by a TTFont.\"\"\"\n\nfrom fontTools.misc.fixedTools import otRound\nfrom copy import copy\n\nclass _TTGlyphSet(object):\n\n\t\"\"\"Generic dict-like GlyphSet class that pulls metrics from hmtx and\n\tglyph shape from TrueType or CFF.\n\t\"\"\"\n\n\tdef __init__(self, ttFont, glyphs, glyphType):\n\t\t\"\"\"Construct a new glyphset.\n\n\t\tArgs:\n\t\t\tfont (TTFont): The font object (used to get metrics).\n\t\t\tglyphs (dict): A dictionary mapping glyph names to ``_TTGlyph`` objects.\n\t\t\tglyphType (class): Either ``_TTGlyphCFF`` or ``_TTGlyphGlyf``.\n\t\t\"\"\"\n\t\tself._glyphs = glyphs\n\t\tself._hmtx = ttFont['hmtx']\n\t\tself._vmtx = ttFont['vmtx'] if 'vmtx' in ttFont else None\n\t\tself._glyphType = glyphType\n\n\tdef keys(self):\n\t\treturn list(self._glyphs.keys())\n\n\tdef has_key(self, glyphName):\n\t\treturn glyphName in self._glyphs\n\n\t__contains__ = has_key\n\n\tdef __getitem__(self, glyphName):\n\t\thorizontalMetrics = self._hmtx[glyphName]\n\t\tverticalMetrics = self._vmtx[glyphName] if self._vmtx else None\n\t\treturn self._glyphType(\n\t\t\tself, self._glyphs[glyphName], horizontalMetrics, verticalMetrics)\n\n\tdef __len__(self):\n\t\treturn len(self._glyphs)\n\n\tdef get(self, glyphName, default=None):\n\t\ttry:\n\t\t\treturn self[glyphName]\n\t\texcept KeyError:\n\t\t\treturn default\n\nclass _TTGlyph(object):\n\n\t\"\"\"Wrapper for a TrueType glyph that supports the Pen protocol, meaning\n\tthat it has .draw() and .drawPoints() methods that take a pen object as\n\ttheir only argument. Additionally there are 'width' and 'lsb' attributes,\n\tread from the 'hmtx' table.\n\n\tIf the font contains a 'vmtx' table, there will also be 'height' and 'tsb'\n\tattributes.\n\t\"\"\"\n\n\tdef __init__(self, glyphset, glyph, horizontalMetrics, verticalMetrics=None):\n\t\t\"\"\"Construct a new _TTGlyph.\n\n\t\tArgs:\n\t\t\tglyphset (_TTGlyphSet): A glyphset object used to resolve components.\n\t\t\tglyph (ttLib.tables._g_l_y_f.Glyph): The glyph object.\n\t\t\thorizontalMetrics (int, int): The glyph's width and left sidebearing.\n\t\t\"\"\"\n\t\tself._glyphset = glyphset\n\t\tself._glyph = glyph\n\t\tself.width, self.lsb = horizontalMetrics\n\t\tif verticalMetrics:\n\t\t\tself.height, self.tsb = verticalMetrics\n\t\telse:\n\t\t\tself.height, self.tsb = None, None\n\n\tdef draw(self, pen):\n\t\t\"\"\"Draw the glyph onto ``pen``. 
See fontTools.pens.basePen for details\n\t\thow that works.\n\t\t\"\"\"\n\t\tself._glyph.draw(pen)\n\n\tdef drawPoints(self, pen):\n\t\t# drawPoints is only implemented for _TTGlyphGlyf at this time.\n\t\traise NotImplementedError()\n\nclass _TTGlyphCFF(_TTGlyph):\n\tpass\n\nclass _TTGlyphGlyf(_TTGlyph):\n\n\tdef draw(self, pen):\n\t\t\"\"\"Draw the glyph onto Pen. See fontTools.pens.basePen for details\n\t\thow that works.\n\t\t\"\"\"\n\t\tglyfTable = self._glyphset._glyphs\n\t\tglyph = self._glyph\n\t\toffset = self.lsb - glyph.xMin if hasattr(glyph, \"xMin\") else 0\n\t\tglyph.draw(pen, glyfTable, offset)\n\n\tdef drawPoints(self, pen):\n\t\t\"\"\"Draw the glyph onto PointPen. See fontTools.pens.pointPen\n\t\tfor details how that works.\n\t\t\"\"\"\n\t\tglyfTable = self._glyphset._glyphs\n\t\tglyph = self._glyph\n\t\toffset = self.lsb - glyph.xMin if hasattr(glyph, \"xMin\") else 0\n\t\tglyph.drawPoints(pen, glyfTable, offset)\n\n\n\nclass _TTVarGlyphSet(object):\n\n\tdef __init__(self, font, location, normalized=False):\n\t\tfrom fontTools.varLib.models import normalizeLocation, piecewiseLinearMap\n\t\tself._ttFont = font\n\t\tif not normalized:\n\t\t\taxes = {a.axisTag: (a.minValue, a.defaultValue, a.maxValue) for a in font['fvar'].axes}\n\t\t\tlocation = normalizeLocation(location, axes)\n\t\t\tif 'avar' in font:\n\t\t\t\tavar = font['avar']\n\t\t\t\tavarSegments = avar.segments\n\t\t\t\tnew_location = {}\n\t\t\t\tfor axis_tag,value in location.items():\n\t\t\t\t\tavarMapping = avarSegments.get(axis_tag, None)\n\t\t\t\t\tif avarMapping is not None:\n\t\t\t\t\t\tvalue = piecewiseLinearMap(value, avarMapping)\n\t\t\t\t\tnew_location[axis_tag] = value\n\t\t\t\tlocation = new_location\n\t\t\t\tdel new_location\n\n\t\tself.location = location\n\n\tdef keys(self):\n\t\treturn list(self._ttFont['glyf'].keys())\n\n\tdef has_key(self, glyphName):\n\t\treturn glyphName in self._ttFont['glyf']\n\t__contains__ = has_key\n\n\tdef __getitem__(self, glyphName):\n\t\treturn _TTVarGlyphGlyf(self._ttFont, glyphName, self.location)\n\n\tdef get(self, glyphName, default=None):\n\t\ttry:\n\t\t\treturn self[glyphName]\n\t\texcept KeyError:\n\t\t\treturn default\n\ndef _setCoordinates(glyph, coord, glyfTable):\n\t# Handle phantom points for (left, right, top, bottom) positions.\n\tassert len(coord) >= 4\n\tif not hasattr(glyph, 'xMin'):\n\t\tglyph.recalcBounds(glyfTable)\n\tleftSideX = coord[-4][0]\n\trightSideX = coord[-3][0]\n\ttopSideY = coord[-2][1]\n\tbottomSideY = coord[-1][1]\n\n\tfor _ in range(4):\n\t\tdel coord[-1]\n\n\tif glyph.isComposite():\n\t\tassert len(coord) == len(glyph.components)\n\t\tfor p,comp in zip(coord, glyph.components):\n\t\t\tif hasattr(comp, 'x'):\n\t\t\t\tcomp.x,comp.y = p\n\telif glyph.numberOfContours == 0:\n\t\tassert len(coord) == 0\n\telse:\n\t\tassert len(coord) == len(glyph.coordinates)\n\t\tglyph.coordinates = coord\n\n\tglyph.recalcBounds(glyfTable)\n\n\thorizontalAdvanceWidth = otRound(rightSideX - leftSideX)\n\tverticalAdvanceWidth = otRound(topSideY - bottomSideY)\n\tleftSideBearing = otRound(glyph.xMin - leftSideX)\n\treturn horizontalAdvanceWidth, leftSideBearing, verticalAdvanceWidth\n\n\nclass _TTVarGlyphGlyf(object):\n\n\tdef __init__(self, ttFont, glyphName, location):\n\t\tself._ttFont = ttFont\n\t\tself._glyphName = glyphName\n\t\tself._location = location\n\t\tself.width = None # draw fills it in\n\n\tdef draw(self, pen):\n\t\tfrom fontTools.varLib.iup import iup_delta\n\t\tfrom fontTools.ttLib.tables._g_l_y_f import GlyphCoordinates\n\t\tfrom 
fontTools.varLib.models import supportScalar\n\n\t\tglyf = self._ttFont['glyf']\n\t\thMetrics = self._ttFont['hmtx'].metrics\n\t\tvMetrics = getattr(self._ttFont.get('vmtx'), 'metrics', None)\n\n\t\tvariations = self._ttFont['gvar'].variations[self._glyphName]\n\t\tcoordinates, _ = glyf._getCoordinatesAndControls(self._glyphName, hMetrics, vMetrics)\n\t\torigCoords, endPts = None, None\n\t\tfor var in variations:\n\t\t\tscalar = supportScalar(self._location, var.axes)\n\t\t\tif not scalar:\n\t\t\t\tcontinue\n\t\t\tdelta = var.coordinates\n\t\t\tif None in delta:\n\t\t\t\tif origCoords is None:\n\t\t\t\t\torigCoords,control = glyf._getCoordinatesAndControls(self._glyphName, hMetrics, vMetrics)\n\t\t\t\t\tendPts = control[1] if control[0] >= 1 else list(range(len(control[1])))\n\t\t\t\tdelta = iup_delta(delta, origCoords, endPts)\n\t\t\tcoordinates += GlyphCoordinates(delta) * scalar\n\n\t\tglyph = copy(glyf[self._glyphName]) # Shallow copy\n\t\thorizontalAdvanceWidth, leftSideBearing, verticalAdvanceWidth = _setCoordinates(glyph, coordinates, glyf)\n\t\tself.width = horizontalAdvanceWidth\n\t\tself.height = verticalAdvanceWidth\n\t\toffset = leftSideBearing - glyph.xMin if hasattr(glyph, \"xMin\") else 0\n\t\tglyph.draw(pen, glyf, offset)\n", "path": "Lib/fontTools/ttLib/ttGlyphSet.py"}], "after_files": [{"content": "\"\"\"GlyphSets returned by a TTFont.\"\"\"\n\nfrom fontTools.misc.fixedTools import otRound\nfrom copy import copy\n\nclass _TTGlyphSet(object):\n\n\t\"\"\"Generic dict-like GlyphSet class that pulls metrics from hmtx and\n\tglyph shape from TrueType or CFF.\n\t\"\"\"\n\n\tdef __init__(self, ttFont, glyphs, glyphType):\n\t\t\"\"\"Construct a new glyphset.\n\n\t\tArgs:\n\t\t\tfont (TTFont): The font object (used to get metrics).\n\t\t\tglyphs (dict): A dictionary mapping glyph names to ``_TTGlyph`` objects.\n\t\t\tglyphType (class): Either ``_TTGlyphCFF`` or ``_TTGlyphGlyf``.\n\t\t\"\"\"\n\t\tself._glyphs = glyphs\n\t\tself._hmtx = ttFont['hmtx']\n\t\tself._vmtx = ttFont['vmtx'] if 'vmtx' in ttFont else None\n\t\tself._glyphType = glyphType\n\n\tdef keys(self):\n\t\treturn list(self._glyphs.keys())\n\n\tdef has_key(self, glyphName):\n\t\treturn glyphName in self._glyphs\n\n\t__contains__ = has_key\n\n\tdef __getitem__(self, glyphName):\n\t\thorizontalMetrics = self._hmtx[glyphName]\n\t\tverticalMetrics = self._vmtx[glyphName] if self._vmtx else None\n\t\treturn self._glyphType(\n\t\t\tself, self._glyphs[glyphName], horizontalMetrics, verticalMetrics)\n\n\tdef __len__(self):\n\t\treturn len(self._glyphs)\n\n\tdef get(self, glyphName, default=None):\n\t\ttry:\n\t\t\treturn self[glyphName]\n\t\texcept KeyError:\n\t\t\treturn default\n\nclass _TTGlyph(object):\n\n\t\"\"\"Wrapper for a TrueType glyph that supports the Pen protocol, meaning\n\tthat it has .draw() and .drawPoints() methods that take a pen object as\n\ttheir only argument. 
Additionally there are 'width' and 'lsb' attributes,\n\tread from the 'hmtx' table.\n\n\tIf the font contains a 'vmtx' table, there will also be 'height' and 'tsb'\n\tattributes.\n\t\"\"\"\n\n\tdef __init__(self, glyphset, glyph, horizontalMetrics, verticalMetrics=None):\n\t\t\"\"\"Construct a new _TTGlyph.\n\n\t\tArgs:\n\t\t\tglyphset (_TTGlyphSet): A glyphset object used to resolve components.\n\t\t\tglyph (ttLib.tables._g_l_y_f.Glyph): The glyph object.\n\t\t\thorizontalMetrics (int, int): The glyph's width and left sidebearing.\n\t\t\"\"\"\n\t\tself._glyphset = glyphset\n\t\tself._glyph = glyph\n\t\tself.width, self.lsb = horizontalMetrics\n\t\tif verticalMetrics:\n\t\t\tself.height, self.tsb = verticalMetrics\n\t\telse:\n\t\t\tself.height, self.tsb = None, None\n\n\tdef draw(self, pen):\n\t\t\"\"\"Draw the glyph onto ``pen``. See fontTools.pens.basePen for details\n\t\thow that works.\n\t\t\"\"\"\n\t\tself._glyph.draw(pen)\n\n\tdef drawPoints(self, pen):\n\t\t# drawPoints is only implemented for _TTGlyphGlyf at this time.\n\t\traise NotImplementedError()\n\nclass _TTGlyphCFF(_TTGlyph):\n\tpass\n\nclass _TTGlyphGlyf(_TTGlyph):\n\n\tdef draw(self, pen):\n\t\t\"\"\"Draw the glyph onto Pen. See fontTools.pens.basePen for details\n\t\thow that works.\n\t\t\"\"\"\n\t\tglyfTable = self._glyphset._glyphs\n\t\tglyph = self._glyph\n\t\toffset = self.lsb - glyph.xMin if hasattr(glyph, \"xMin\") else 0\n\t\tglyph.draw(pen, glyfTable, offset)\n\n\tdef drawPoints(self, pen):\n\t\t\"\"\"Draw the glyph onto PointPen. See fontTools.pens.pointPen\n\t\tfor details how that works.\n\t\t\"\"\"\n\t\tglyfTable = self._glyphset._glyphs\n\t\tglyph = self._glyph\n\t\toffset = self.lsb - glyph.xMin if hasattr(glyph, \"xMin\") else 0\n\t\tglyph.drawPoints(pen, glyfTable, offset)\n\n\n\nclass _TTVarGlyphSet(_TTGlyphSet):\n\n\tdef __init__(self, font, location, normalized=False):\n\t\tself._ttFont = font\n\t\tself._glyphs = font['glyf']\n\n\t\tif not normalized:\n\t\t\tfrom fontTools.varLib.models import normalizeLocation, piecewiseLinearMap\n\n\t\t\taxes = {a.axisTag: (a.minValue, a.defaultValue, a.maxValue) for a in font['fvar'].axes}\n\t\t\tlocation = normalizeLocation(location, axes)\n\t\t\tif 'avar' in font:\n\t\t\t\tavar = font['avar']\n\t\t\t\tavarSegments = avar.segments\n\t\t\t\tnew_location = {}\n\t\t\t\tfor axis_tag, value in location.items():\n\t\t\t\t\tavarMapping = avarSegments.get(axis_tag, None)\n\t\t\t\t\tif avarMapping is not None:\n\t\t\t\t\t\tvalue = piecewiseLinearMap(value, avarMapping)\n\t\t\t\t\tnew_location[axis_tag] = value\n\t\t\t\tlocation = new_location\n\t\t\t\tdel new_location\n\n\t\tself.location = location\n\n\tdef __getitem__(self, glyphName):\n\t\tif glyphName not in self._glyphs:\n\t\t\traise KeyError(glyphName)\n\t\treturn _TTVarGlyphGlyf(self._ttFont, glyphName, self.location)\n\n\ndef _setCoordinates(glyph, coord, glyfTable):\n\t# Handle phantom points for (left, right, top, bottom) positions.\n\tassert len(coord) >= 4\n\tif not hasattr(glyph, 'xMin'):\n\t\tglyph.recalcBounds(glyfTable)\n\tleftSideX = coord[-4][0]\n\trightSideX = coord[-3][0]\n\ttopSideY = coord[-2][1]\n\tbottomSideY = coord[-1][1]\n\n\tfor _ in range(4):\n\t\tdel coord[-1]\n\n\tif glyph.isComposite():\n\t\tassert len(coord) == len(glyph.components)\n\t\tfor p,comp in zip(coord, glyph.components):\n\t\t\tif hasattr(comp, 'x'):\n\t\t\t\tcomp.x,comp.y = p\n\telif glyph.numberOfContours == 0:\n\t\tassert len(coord) == 0\n\telse:\n\t\tassert len(coord) == len(glyph.coordinates)\n\t\tglyph.coordinates = 
coord\n\n\tglyph.recalcBounds(glyfTable)\n\n\thorizontalAdvanceWidth = otRound(rightSideX - leftSideX)\n\tverticalAdvanceWidth = otRound(topSideY - bottomSideY)\n\tleftSideBearing = otRound(glyph.xMin - leftSideX)\n\ttopSideBearing = otRound(topSideY - glyph.yMax)\n\treturn (\n\t\thorizontalAdvanceWidth,\n\t\tleftSideBearing,\n\t\tverticalAdvanceWidth,\n\t\ttopSideBearing,\n\t)\n\n\nclass _TTVarGlyph(_TTGlyph):\n\tdef __init__(self, ttFont, glyphName, location):\n\t\tself._ttFont = ttFont\n\t\tself._glyphName = glyphName\n\t\tself._location = location\n\t\t# draw() fills these in\n\t\tself.width = self.height = self.lsb = self.tsb = None\n\n\nclass _TTVarGlyphGlyf(_TTVarGlyph):\n\n\tdef draw(self, pen):\n\t\tfrom fontTools.varLib.iup import iup_delta\n\t\tfrom fontTools.ttLib.tables._g_l_y_f import GlyphCoordinates\n\t\tfrom fontTools.varLib.models import supportScalar\n\n\t\tglyf = self._ttFont['glyf']\n\t\thMetrics = self._ttFont['hmtx'].metrics\n\t\tvMetrics = getattr(self._ttFont.get('vmtx'), 'metrics', None)\n\n\t\tvariations = self._ttFont['gvar'].variations[self._glyphName]\n\t\tcoordinates, _ = glyf._getCoordinatesAndControls(self._glyphName, hMetrics, vMetrics)\n\t\torigCoords, endPts = None, None\n\t\tfor var in variations:\n\t\t\tscalar = supportScalar(self._location, var.axes)\n\t\t\tif not scalar:\n\t\t\t\tcontinue\n\t\t\tdelta = var.coordinates\n\t\t\tif None in delta:\n\t\t\t\tif origCoords is None:\n\t\t\t\t\torigCoords,control = glyf._getCoordinatesAndControls(self._glyphName, hMetrics, vMetrics)\n\t\t\t\t\tendPts = control[1] if control[0] >= 1 else list(range(len(control[1])))\n\t\t\t\tdelta = iup_delta(delta, origCoords, endPts)\n\t\t\tcoordinates += GlyphCoordinates(delta) * scalar\n\n\t\tglyph = copy(glyf[self._glyphName]) # Shallow copy\n\t\twidth, lsb, height, tsb = _setCoordinates(glyph, coordinates, glyf)\n\t\tself.width = width\n\t\tself.lsb = lsb\n\t\tself.height = height\n\t\tself.tsb = tsb\n\t\toffset = lsb - glyph.xMin if hasattr(glyph, \"xMin\") else 0\n\t\tglyph.draw(pen, glyf, offset)\n", "path": "Lib/fontTools/ttLib/ttGlyphSet.py"}]} | 2,978 | 965 |
gh_patches_debug_31910 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-1947 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
conda env creation fails if Python 3 selected
I put this in `readthedocs.yml`:
``` yaml
conda:
file: docs/conda-env.yml
python:
version: 3
setup_py_install: true
```
It creates a conda env with Python 3.5, and then tries to install the standard docs machinery. But docutils is pinned to 0.11, and there isn't a build of this for Python 3.5 (there is a package of docutils 0.12). So I see this failure:
```
conda install --yes --name docs-build-w-conda sphinx==1.3.1 Pygments==2.0.2 docutils==0.11 mock==1.0.1 pillow==3.0.0 sphinx_rtd_theme==0.1.7 alabaster>=0.7,<0.8,!=0.7.5
Fetching package metadata: /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:315: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#snimissingwarning.
SNIMissingWarning
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:120: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
....
Solving package specifications: ..
Error: Unsatisfiable package specifications.
Generating hint:
[ ]| | 0%
[2/8 ]|############ | 25%
[3/8 ]|################## | 37%
[5/8 ]|############################### | 62%
[6/8 ]|##################################### | 75%
[ COMPLETE ]|##################################################| 100%
Hint: the following packages conflict with each other:
- docutils ==0.11
- python 3.5*
Use 'conda info docutils' etc. to see the dependencies for each package.
```
The obvious solution is not to pin docutils so it automatically picks the latest version available. If you prefer to keep it pinned, I think that installing dependencies at the same time as you create the environment should work; in this instance, it would fall back to Python 3.4 so it could satisfy the dependencies:
```
conda create --yes --name docs-build-w-conda python=3 sphinx==1.3.1 Pygments==2.0.2 docutils==0.11 ...
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `readthedocs/doc_builder/python_environments.py`
Content:
```
1 import logging
2 import os
3 import shutil
4
5 from django.conf import settings
6
7 from readthedocs.doc_builder.config import ConfigWrapper
8 from readthedocs.doc_builder.loader import get_builder_class
9 from readthedocs.projects.constants import LOG_TEMPLATE
10
11 log = logging.getLogger(__name__)
12
13
14 class PythonEnvironment(object):
15
16 def __init__(self, version, build_env, config=None):
17 self.version = version
18 self.project = version.project
19 self.build_env = build_env
20 if config:
21 self.config = config
22 else:
23 self.config = ConfigWrapper(version=version, yaml_config={})
24 # Compute here, since it's used a lot
25 self.checkout_path = self.project.checkout_path(self.version.slug)
26
27 def _log(self, msg):
28 log.info(LOG_TEMPLATE
29 .format(project=self.project.slug,
30 version=self.version.slug,
31 msg=msg))
32
33 def delete_existing_build_dir(self):
34
35 # Handle deleting old build dir
36 build_dir = os.path.join(
37 self.venv_path(),
38 'build')
39 if os.path.exists(build_dir):
40 self._log('Removing existing build directory')
41 shutil.rmtree(build_dir)
42
43 def install_package(self):
44 setup_path = os.path.join(self.checkout_path, 'setup.py')
45 if os.path.isfile(setup_path) and self.config.install_project:
46 if getattr(settings, 'USE_PIP_INSTALL', False):
47 self.build_env.run(
48 'python',
49 self.venv_bin(filename='pip'),
50 'install',
51 '--ignore-installed',
52 '--cache-dir',
53 self.project.pip_cache_path,
54 '.',
55 cwd=self.checkout_path,
56 bin_path=self.venv_bin()
57 )
58 else:
59 self.build_env.run(
60 'python',
61 'setup.py',
62 'install',
63 '--force',
64 cwd=self.checkout_path,
65 bin_path=self.venv_bin()
66 )
67
68 def venv_bin(self, filename=None):
69 """Return path to the virtualenv bin path, or a specific binary
70
71 :param filename: If specified, add this filename to the path return
72 :returns: Path to virtualenv bin or filename in virtualenv bin
73 """
74 parts = [self.venv_path(), 'bin']
75 if filename is not None:
76 parts.append(filename)
77 return os.path.join(*parts)
78
79
80 class Virtualenv(PythonEnvironment):
81
82 def venv_path(self):
83 return os.path.join(self.project.doc_path, 'envs', self.version.slug)
84
85 def setup_base(self):
86 site_packages = '--no-site-packages'
87 if self.config.use_system_site_packages:
88 site_packages = '--system-site-packages'
89 env_path = self.venv_path()
90 self.build_env.run(
91 self.config.python_interpreter,
92 '-mvirtualenv',
93 site_packages,
94 env_path,
95 bin_path=None, # Don't use virtualenv bin that doesn't exist yet
96 )
97
98 def install_core_requirements(self):
99 requirements = [
100 'sphinx==1.3.1',
101 'Pygments==2.0.2',
102 'setuptools==18.6.1',
103 'docutils==0.11',
104 'mkdocs==0.14.0',
105 'mock==1.0.1',
106 'pillow==2.6.1',
107 'readthedocs-sphinx-ext==0.5.4',
108 'sphinx-rtd-theme==0.1.9',
109 'alabaster>=0.7,<0.8,!=0.7.5',
110 'commonmark==0.5.4',
111 'recommonmark==0.1.1',
112 ]
113
114 cmd = [
115 'python',
116 self.venv_bin(filename='pip'),
117 'install',
118 '--use-wheel',
119 '-U',
120 '--cache-dir',
121 self.project.pip_cache_path,
122 ]
123 if self.config.use_system_site_packages:
124 # Other code expects sphinx-build to be installed inside the
125 # virtualenv. Using the -I option makes sure it gets installed
126 # even if it is already installed system-wide (and
127 # --system-site-packages is used)
128 cmd.append('-I')
129 cmd.extend(requirements)
130 self.build_env.run(
131 *cmd,
132 bin_path=self.venv_bin()
133 )
134
135 def install_user_requirements(self):
136 requirements_file_path = self.config.requirements_file
137 if not requirements_file_path:
138 builder_class = get_builder_class(self.project.documentation_type)
139 docs_dir = (builder_class(build_env=self.build_env, python_env=self)
140 .docs_dir())
141 for path in [docs_dir, '']:
142 for req_file in ['pip_requirements.txt', 'requirements.txt']:
143 test_path = os.path.join(self.checkout_path, path, req_file)
144 if os.path.exists(test_path):
145 requirements_file_path = test_path
146 break
147
148 if requirements_file_path:
149 self.build_env.run(
150 'python',
151 self.venv_bin(filename='pip'),
152 'install',
153 '--exists-action=w',
154 '--cache-dir',
155 self.project.pip_cache_path,
156 '-r{0}'.format(requirements_file_path),
157 cwd=self.checkout_path,
158 bin_path=self.venv_bin()
159 )
160
161
162 class Conda(PythonEnvironment):
163
164 def venv_path(self):
165 return os.path.join(self.project.doc_path, 'conda', self.version.slug)
166
167 def setup_base(self):
168 conda_env_path = os.path.join(self.project.doc_path, 'conda')
169 version_path = os.path.join(conda_env_path, self.version.slug)
170
171 if os.path.exists(version_path):
172 # Re-create conda directory each time to keep fresh state
173 self._log('Removing existing conda directory')
174 shutil.rmtree(version_path)
175 self.build_env.run(
176 'conda',
177 'create',
178 '--yes',
179 '--name',
180 self.version.slug,
181 'python={python_version}'.format(python_version=self.config.python_version),
182 bin_path=None, # Don't use conda bin that doesn't exist yet
183 )
184
185 def install_core_requirements(self):
186 conda_env_path = os.path.join(self.project.doc_path, 'conda')
187
188 # Use conda for requirements it packages
189 requirements = [
190 'sphinx==1.3.1',
191 'Pygments==2.0.2',
192 'docutils==0.11',
193 'mock==1.0.1',
194 'pillow==3.0.0',
195 'sphinx_rtd_theme==0.1.7',
196 'alabaster>=0.7,<0.8,!=0.7.5',
197 ]
198
199 cmd = [
200 'conda',
201 'install',
202 '--yes',
203 '--name',
204 self.version.slug,
205 ]
206 cmd.extend(requirements)
207 self.build_env.run(
208 *cmd
209 )
210
211 # Install pip-only things.
212 pip_requirements = [
213 'mkdocs==0.14.0',
214 'readthedocs-sphinx-ext==0.5.4',
215 'commonmark==0.5.4',
216 'recommonmark==0.1.1',
217 ]
218
219 pip_cmd = [
220 'python',
221 self.venv_bin(filename='pip'),
222 'install',
223 '-U',
224 '--cache-dir',
225 self.project.pip_cache_path,
226 ]
227 pip_cmd.extend(pip_requirements)
228 self.build_env.run(
229 *pip_cmd,
230 bin_path=self.venv_bin()
231 )
232
233 def install_user_requirements(self):
234 conda_env_path = os.path.join(self.project.doc_path, 'conda')
235 self.build_env.run(
236 'conda',
237 'env',
238 'update',
239 '--name',
240 self.version.slug,
241 '--file',
242 self.config.conda_file,
243 )
244
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/readthedocs/doc_builder/python_environments.py b/readthedocs/doc_builder/python_environments.py
--- a/readthedocs/doc_builder/python_environments.py
+++ b/readthedocs/doc_builder/python_environments.py
@@ -100,7 +100,7 @@
'sphinx==1.3.1',
'Pygments==2.0.2',
'setuptools==18.6.1',
- 'docutils==0.11',
+ 'docutils==0.12',
'mkdocs==0.14.0',
'mock==1.0.1',
'pillow==2.6.1',
@@ -174,22 +174,22 @@
shutil.rmtree(version_path)
self.build_env.run(
'conda',
+ 'env',
'create',
- '--yes',
'--name',
self.version.slug,
- 'python={python_version}'.format(python_version=self.config.python_version),
+ '--file',
+ self.config.conda_file,
bin_path=None, # Don't use conda bin that doesn't exist yet
)
def install_core_requirements(self):
- conda_env_path = os.path.join(self.project.doc_path, 'conda')
# Use conda for requirements it packages
requirements = [
'sphinx==1.3.1',
'Pygments==2.0.2',
- 'docutils==0.11',
+ 'docutils==0.12',
'mock==1.0.1',
'pillow==3.0.0',
'sphinx_rtd_theme==0.1.7',
@@ -231,7 +231,6 @@
)
def install_user_requirements(self):
- conda_env_path = os.path.join(self.project.doc_path, 'conda')
self.build_env.run(
'conda',
'env',
| {"golden_diff": "diff --git a/readthedocs/doc_builder/python_environments.py b/readthedocs/doc_builder/python_environments.py\n--- a/readthedocs/doc_builder/python_environments.py\n+++ b/readthedocs/doc_builder/python_environments.py\n@@ -100,7 +100,7 @@\n 'sphinx==1.3.1',\n 'Pygments==2.0.2',\n 'setuptools==18.6.1',\n- 'docutils==0.11',\n+ 'docutils==0.12',\n 'mkdocs==0.14.0',\n 'mock==1.0.1',\n 'pillow==2.6.1',\n@@ -174,22 +174,22 @@\n shutil.rmtree(version_path)\n self.build_env.run(\n 'conda',\n+ 'env',\n 'create',\n- '--yes',\n '--name',\n self.version.slug,\n- 'python={python_version}'.format(python_version=self.config.python_version),\n+ '--file',\n+ self.config.conda_file,\n bin_path=None, # Don't use conda bin that doesn't exist yet\n )\n \n def install_core_requirements(self):\n- conda_env_path = os.path.join(self.project.doc_path, 'conda')\n \n # Use conda for requirements it packages\n requirements = [\n 'sphinx==1.3.1',\n 'Pygments==2.0.2',\n- 'docutils==0.11',\n+ 'docutils==0.12',\n 'mock==1.0.1',\n 'pillow==3.0.0',\n 'sphinx_rtd_theme==0.1.7',\n@@ -231,7 +231,6 @@\n )\n \n def install_user_requirements(self):\n- conda_env_path = os.path.join(self.project.doc_path, 'conda')\n self.build_env.run(\n 'conda',\n 'env',\n", "issue": "conda env creation fails if Python 3 selected\nI put this in `readthedocs.yml`:\n\n``` yaml\nconda:\n file: docs/conda-env.yml\npython:\n version: 3\n setup_py_install: true\n```\n\nIt creates a conda env with Python 3.5, and then tries to install the standard docs machinery. But docutils is pinned to 0.11, and there isn't a build of this for Python 3.5 (there is a package of docutils 0.12). So I see this failure:\n\n```\nconda install --yes --name docs-build-w-conda sphinx==1.3.1 Pygments==2.0.2 docutils==0.11 mock==1.0.1 pillow==3.0.0 sphinx_rtd_theme==0.1.7 alabaster>=0.7,<0.8,!=0.7.5\nFetching package metadata: /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:315: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#snimissingwarning.\n SNIMissingWarning\n/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:120: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.\n InsecurePlatformWarning\n....\nSolving package specifications: ..\nError: Unsatisfiable package specifications.\nGenerating hint: \n[ ]| | 0%\n[2/8 ]|############ | 25%\n[3/8 ]|################## | 37%\n[5/8 ]|############################### | 62%\n[6/8 ]|##################################### | 75%\n[ COMPLETE ]|##################################################| 100%\n\n\n\nHint: the following packages conflict with each other:\n - docutils ==0.11\n - python 3.5*\n\nUse 'conda info docutils' etc. to see the dependencies for each package.\n```\n\nThe obvious solution is not to pin docutils so it automatically picks the latest version available. 
If you prefer to keep it pinned, I think that installing dependencies at the same time as you create the environment should work; in this instance, it would fall back to Python 3.4 so it could satisfy the dependencies:\n\n```\nconda create --yes --name docs-build-w-conda python=3 sphinx==1.3.1 Pygments==2.0.2 docutils==0.11 ...\n```\n\n", "before_files": [{"content": "import logging\nimport os\nimport shutil\n\nfrom django.conf import settings\n\nfrom readthedocs.doc_builder.config import ConfigWrapper\nfrom readthedocs.doc_builder.loader import get_builder_class\nfrom readthedocs.projects.constants import LOG_TEMPLATE\n\nlog = logging.getLogger(__name__)\n\n\nclass PythonEnvironment(object):\n\n def __init__(self, version, build_env, config=None):\n self.version = version\n self.project = version.project\n self.build_env = build_env\n if config:\n self.config = config\n else:\n self.config = ConfigWrapper(version=version, yaml_config={})\n # Compute here, since it's used a lot\n self.checkout_path = self.project.checkout_path(self.version.slug)\n\n def _log(self, msg):\n log.info(LOG_TEMPLATE\n .format(project=self.project.slug,\n version=self.version.slug,\n msg=msg))\n\n def delete_existing_build_dir(self):\n\n # Handle deleting old build dir\n build_dir = os.path.join(\n self.venv_path(),\n 'build')\n if os.path.exists(build_dir):\n self._log('Removing existing build directory')\n shutil.rmtree(build_dir)\n\n def install_package(self):\n setup_path = os.path.join(self.checkout_path, 'setup.py')\n if os.path.isfile(setup_path) and self.config.install_project:\n if getattr(settings, 'USE_PIP_INSTALL', False):\n self.build_env.run(\n 'python',\n self.venv_bin(filename='pip'),\n 'install',\n '--ignore-installed',\n '--cache-dir',\n self.project.pip_cache_path,\n '.',\n cwd=self.checkout_path,\n bin_path=self.venv_bin()\n )\n else:\n self.build_env.run(\n 'python',\n 'setup.py',\n 'install',\n '--force',\n cwd=self.checkout_path,\n bin_path=self.venv_bin()\n )\n\n def venv_bin(self, filename=None):\n \"\"\"Return path to the virtualenv bin path, or a specific binary\n\n :param filename: If specified, add this filename to the path return\n :returns: Path to virtualenv bin or filename in virtualenv bin\n \"\"\"\n parts = [self.venv_path(), 'bin']\n if filename is not None:\n parts.append(filename)\n return os.path.join(*parts)\n\n\nclass Virtualenv(PythonEnvironment):\n\n def venv_path(self):\n return os.path.join(self.project.doc_path, 'envs', self.version.slug)\n\n def setup_base(self):\n site_packages = '--no-site-packages'\n if self.config.use_system_site_packages:\n site_packages = '--system-site-packages'\n env_path = self.venv_path()\n self.build_env.run(\n self.config.python_interpreter,\n '-mvirtualenv',\n site_packages,\n env_path,\n bin_path=None, # Don't use virtualenv bin that doesn't exist yet\n )\n\n def install_core_requirements(self):\n requirements = [\n 'sphinx==1.3.1',\n 'Pygments==2.0.2',\n 'setuptools==18.6.1',\n 'docutils==0.11',\n 'mkdocs==0.14.0',\n 'mock==1.0.1',\n 'pillow==2.6.1',\n 'readthedocs-sphinx-ext==0.5.4',\n 'sphinx-rtd-theme==0.1.9',\n 'alabaster>=0.7,<0.8,!=0.7.5',\n 'commonmark==0.5.4',\n 'recommonmark==0.1.1',\n ]\n\n cmd = [\n 'python',\n self.venv_bin(filename='pip'),\n 'install',\n '--use-wheel',\n '-U',\n '--cache-dir',\n self.project.pip_cache_path,\n ]\n if self.config.use_system_site_packages:\n # Other code expects sphinx-build to be installed inside the\n # virtualenv. 
Using the -I option makes sure it gets installed\n # even if it is already installed system-wide (and\n # --system-site-packages is used)\n cmd.append('-I')\n cmd.extend(requirements)\n self.build_env.run(\n *cmd,\n bin_path=self.venv_bin()\n )\n\n def install_user_requirements(self):\n requirements_file_path = self.config.requirements_file\n if not requirements_file_path:\n builder_class = get_builder_class(self.project.documentation_type)\n docs_dir = (builder_class(build_env=self.build_env, python_env=self)\n .docs_dir())\n for path in [docs_dir, '']:\n for req_file in ['pip_requirements.txt', 'requirements.txt']:\n test_path = os.path.join(self.checkout_path, path, req_file)\n if os.path.exists(test_path):\n requirements_file_path = test_path\n break\n\n if requirements_file_path:\n self.build_env.run(\n 'python',\n self.venv_bin(filename='pip'),\n 'install',\n '--exists-action=w',\n '--cache-dir',\n self.project.pip_cache_path,\n '-r{0}'.format(requirements_file_path),\n cwd=self.checkout_path,\n bin_path=self.venv_bin()\n )\n\n\nclass Conda(PythonEnvironment):\n\n def venv_path(self):\n return os.path.join(self.project.doc_path, 'conda', self.version.slug)\n\n def setup_base(self):\n conda_env_path = os.path.join(self.project.doc_path, 'conda')\n version_path = os.path.join(conda_env_path, self.version.slug)\n\n if os.path.exists(version_path):\n # Re-create conda directory each time to keep fresh state\n self._log('Removing existing conda directory')\n shutil.rmtree(version_path)\n self.build_env.run(\n 'conda',\n 'create',\n '--yes',\n '--name',\n self.version.slug,\n 'python={python_version}'.format(python_version=self.config.python_version),\n bin_path=None, # Don't use conda bin that doesn't exist yet\n )\n\n def install_core_requirements(self):\n conda_env_path = os.path.join(self.project.doc_path, 'conda')\n\n # Use conda for requirements it packages\n requirements = [\n 'sphinx==1.3.1',\n 'Pygments==2.0.2',\n 'docutils==0.11',\n 'mock==1.0.1',\n 'pillow==3.0.0',\n 'sphinx_rtd_theme==0.1.7',\n 'alabaster>=0.7,<0.8,!=0.7.5',\n ]\n\n cmd = [\n 'conda',\n 'install',\n '--yes',\n '--name',\n self.version.slug,\n ]\n cmd.extend(requirements)\n self.build_env.run(\n *cmd\n )\n\n # Install pip-only things.\n pip_requirements = [\n 'mkdocs==0.14.0',\n 'readthedocs-sphinx-ext==0.5.4',\n 'commonmark==0.5.4',\n 'recommonmark==0.1.1',\n ]\n\n pip_cmd = [\n 'python',\n self.venv_bin(filename='pip'),\n 'install',\n '-U',\n '--cache-dir',\n self.project.pip_cache_path,\n ]\n pip_cmd.extend(pip_requirements)\n self.build_env.run(\n *pip_cmd,\n bin_path=self.venv_bin()\n )\n\n def install_user_requirements(self):\n conda_env_path = os.path.join(self.project.doc_path, 'conda')\n self.build_env.run(\n 'conda',\n 'env',\n 'update',\n '--name',\n self.version.slug,\n '--file',\n self.config.conda_file,\n )\n", "path": "readthedocs/doc_builder/python_environments.py"}], "after_files": [{"content": "import logging\nimport os\nimport shutil\n\nfrom django.conf import settings\n\nfrom readthedocs.doc_builder.config import ConfigWrapper\nfrom readthedocs.doc_builder.loader import get_builder_class\nfrom readthedocs.projects.constants import LOG_TEMPLATE\n\nlog = logging.getLogger(__name__)\n\n\nclass PythonEnvironment(object):\n\n def __init__(self, version, build_env, config=None):\n self.version = version\n self.project = version.project\n self.build_env = build_env\n if config:\n self.config = config\n else:\n self.config = ConfigWrapper(version=version, yaml_config={})\n # Compute here, since it's 
used a lot\n self.checkout_path = self.project.checkout_path(self.version.slug)\n\n def _log(self, msg):\n log.info(LOG_TEMPLATE\n .format(project=self.project.slug,\n version=self.version.slug,\n msg=msg))\n\n def delete_existing_build_dir(self):\n\n # Handle deleting old build dir\n build_dir = os.path.join(\n self.venv_path(),\n 'build')\n if os.path.exists(build_dir):\n self._log('Removing existing build directory')\n shutil.rmtree(build_dir)\n\n def install_package(self):\n setup_path = os.path.join(self.checkout_path, 'setup.py')\n if os.path.isfile(setup_path) and self.config.install_project:\n if getattr(settings, 'USE_PIP_INSTALL', False):\n self.build_env.run(\n 'python',\n self.venv_bin(filename='pip'),\n 'install',\n '--ignore-installed',\n '--cache-dir',\n self.project.pip_cache_path,\n '.',\n cwd=self.checkout_path,\n bin_path=self.venv_bin()\n )\n else:\n self.build_env.run(\n 'python',\n 'setup.py',\n 'install',\n '--force',\n cwd=self.checkout_path,\n bin_path=self.venv_bin()\n )\n\n def venv_bin(self, filename=None):\n \"\"\"Return path to the virtualenv bin path, or a specific binary\n\n :param filename: If specified, add this filename to the path return\n :returns: Path to virtualenv bin or filename in virtualenv bin\n \"\"\"\n parts = [self.venv_path(), 'bin']\n if filename is not None:\n parts.append(filename)\n return os.path.join(*parts)\n\n\nclass Virtualenv(PythonEnvironment):\n\n def venv_path(self):\n return os.path.join(self.project.doc_path, 'envs', self.version.slug)\n\n def setup_base(self):\n site_packages = '--no-site-packages'\n if self.config.use_system_site_packages:\n site_packages = '--system-site-packages'\n env_path = self.venv_path()\n self.build_env.run(\n self.config.python_interpreter,\n '-mvirtualenv',\n site_packages,\n env_path,\n bin_path=None, # Don't use virtualenv bin that doesn't exist yet\n )\n\n def install_core_requirements(self):\n requirements = [\n 'sphinx==1.3.1',\n 'Pygments==2.0.2',\n 'setuptools==18.6.1',\n 'docutils==0.12',\n 'mkdocs==0.14.0',\n 'mock==1.0.1',\n 'pillow==2.6.1',\n 'readthedocs-sphinx-ext==0.5.4',\n 'sphinx-rtd-theme==0.1.9',\n 'alabaster>=0.7,<0.8,!=0.7.5',\n 'commonmark==0.5.4',\n 'recommonmark==0.1.1',\n ]\n\n cmd = [\n 'python',\n self.venv_bin(filename='pip'),\n 'install',\n '--use-wheel',\n '-U',\n '--cache-dir',\n self.project.pip_cache_path,\n ]\n if self.config.use_system_site_packages:\n # Other code expects sphinx-build to be installed inside the\n # virtualenv. 
Using the -I option makes sure it gets installed\n # even if it is already installed system-wide (and\n # --system-site-packages is used)\n cmd.append('-I')\n cmd.extend(requirements)\n self.build_env.run(\n *cmd,\n bin_path=self.venv_bin()\n )\n\n def install_user_requirements(self):\n requirements_file_path = self.config.requirements_file\n if not requirements_file_path:\n builder_class = get_builder_class(self.project.documentation_type)\n docs_dir = (builder_class(build_env=self.build_env, python_env=self)\n .docs_dir())\n for path in [docs_dir, '']:\n for req_file in ['pip_requirements.txt', 'requirements.txt']:\n test_path = os.path.join(self.checkout_path, path, req_file)\n if os.path.exists(test_path):\n requirements_file_path = test_path\n break\n\n if requirements_file_path:\n self.build_env.run(\n 'python',\n self.venv_bin(filename='pip'),\n 'install',\n '--exists-action=w',\n '--cache-dir',\n self.project.pip_cache_path,\n '-r{0}'.format(requirements_file_path),\n cwd=self.checkout_path,\n bin_path=self.venv_bin()\n )\n\n\nclass Conda(PythonEnvironment):\n\n def venv_path(self):\n return os.path.join(self.project.doc_path, 'conda', self.version.slug)\n\n def setup_base(self):\n conda_env_path = os.path.join(self.project.doc_path, 'conda')\n version_path = os.path.join(conda_env_path, self.version.slug)\n\n if os.path.exists(version_path):\n # Re-create conda directory each time to keep fresh state\n self._log('Removing existing conda directory')\n shutil.rmtree(version_path)\n self.build_env.run(\n 'conda',\n 'env',\n 'create',\n '--name',\n self.version.slug,\n '--file',\n self.config.conda_file,\n bin_path=None, # Don't use conda bin that doesn't exist yet\n )\n\n def install_core_requirements(self):\n\n # Use conda for requirements it packages\n requirements = [\n 'sphinx==1.3.1',\n 'Pygments==2.0.2',\n 'docutils==0.12',\n 'mock==1.0.1',\n 'pillow==3.0.0',\n 'sphinx_rtd_theme==0.1.7',\n 'alabaster>=0.7,<0.8,!=0.7.5',\n ]\n\n cmd = [\n 'conda',\n 'install',\n '--yes',\n '--name',\n self.version.slug,\n ]\n cmd.extend(requirements)\n self.build_env.run(\n *cmd\n )\n\n # Install pip-only things.\n pip_requirements = [\n 'mkdocs==0.14.0',\n 'readthedocs-sphinx-ext==0.5.4',\n 'commonmark==0.5.4',\n 'recommonmark==0.1.1',\n ]\n\n pip_cmd = [\n 'python',\n self.venv_bin(filename='pip'),\n 'install',\n '-U',\n '--cache-dir',\n self.project.pip_cache_path,\n ]\n pip_cmd.extend(pip_requirements)\n self.build_env.run(\n *pip_cmd,\n bin_path=self.venv_bin()\n )\n\n def install_user_requirements(self):\n self.build_env.run(\n 'conda',\n 'env',\n 'update',\n '--name',\n self.version.slug,\n '--file',\n self.config.conda_file,\n )\n", "path": "readthedocs/doc_builder/python_environments.py"}]} | 3,240 | 433 |
gh_patches_debug_39037 | rasdani/github-patches | git_diff | pypa__setuptools-1750 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`build_meta.build_sdist` should work if the destination directory already contains a tar.gz
The issue is similar to #1671, see #1745 for how the issue can be resolved.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setuptools/build_meta.py`
Content:
```
1 """A PEP 517 interface to setuptools
2
3 Previously, when a user or a command line tool (let's call it a "frontend")
4 needed to make a request of setuptools to take a certain action, for
5 example, generating a list of installation requirements, the frontend would
6 would call "setup.py egg_info" or "setup.py bdist_wheel" on the command line.
7
8 PEP 517 defines a different method of interfacing with setuptools. Rather
9 than calling "setup.py" directly, the frontend should:
10
11 1. Set the current directory to the directory with a setup.py file
12 2. Import this module into a safe python interpreter (one in which
13 setuptools can potentially set global variables or crash hard).
14 3. Call one of the functions defined in PEP 517.
15
16 What each function does is defined in PEP 517. However, here is a "casual"
17 definition of the functions (this definition should not be relied on for
18 bug reports or API stability):
19
20 - `build_wheel`: build a wheel in the folder and return the basename
21 - `get_requires_for_build_wheel`: get the `setup_requires` to build
22 - `prepare_metadata_for_build_wheel`: get the `install_requires`
23 - `build_sdist`: build an sdist in the folder and return the basename
24 - `get_requires_for_build_sdist`: get the `setup_requires` to build
25
26 Again, this is not a formal definition! Just a "taste" of the module.
27 """
28
29 import io
30 import os
31 import sys
32 import tokenize
33 import shutil
34 import contextlib
35
36 import setuptools
37 import distutils
38 from setuptools.py31compat import TemporaryDirectory
39
40 from pkg_resources import parse_requirements
41
42 __all__ = ['get_requires_for_build_sdist',
43 'get_requires_for_build_wheel',
44 'prepare_metadata_for_build_wheel',
45 'build_wheel',
46 'build_sdist',
47 '__legacy__',
48 'SetupRequirementsError']
49
50 class SetupRequirementsError(BaseException):
51 def __init__(self, specifiers):
52 self.specifiers = specifiers
53
54
55 class Distribution(setuptools.dist.Distribution):
56 def fetch_build_eggs(self, specifiers):
57 specifier_list = list(map(str, parse_requirements(specifiers)))
58
59 raise SetupRequirementsError(specifier_list)
60
61 @classmethod
62 @contextlib.contextmanager
63 def patch(cls):
64 """
65 Replace
66 distutils.dist.Distribution with this class
67 for the duration of this context.
68 """
69 orig = distutils.core.Distribution
70 distutils.core.Distribution = cls
71 try:
72 yield
73 finally:
74 distutils.core.Distribution = orig
75
76
77 def _to_str(s):
78 """
79 Convert a filename to a string (on Python 2, explicitly
80 a byte string, not Unicode) as distutils checks for the
81 exact type str.
82 """
83 if sys.version_info[0] == 2 and not isinstance(s, str):
84 # Assume it's Unicode, as that's what the PEP says
85 # should be provided.
86 return s.encode(sys.getfilesystemencoding())
87 return s
88
89
90 def _get_immediate_subdirectories(a_dir):
91 return [name for name in os.listdir(a_dir)
92 if os.path.isdir(os.path.join(a_dir, name))]
93
94
95 def _file_with_extension(directory, extension):
96 matching = (
97 f for f in os.listdir(directory)
98 if f.endswith(extension)
99 )
100 file, = matching
101 return file
102
103
104 def _open_setup_script(setup_script):
105 if not os.path.exists(setup_script):
106 # Supply a default setup.py
107 return io.StringIO(u"from setuptools import setup; setup()")
108
109 return getattr(tokenize, 'open', open)(setup_script)
110
111
112 class _BuildMetaBackend(object):
113
114 def _fix_config(self, config_settings):
115 config_settings = config_settings or {}
116 config_settings.setdefault('--global-option', [])
117 return config_settings
118
119 def _get_build_requires(self, config_settings, requirements):
120 config_settings = self._fix_config(config_settings)
121
122 sys.argv = sys.argv[:1] + ['egg_info'] + \
123 config_settings["--global-option"]
124 try:
125 with Distribution.patch():
126 self.run_setup()
127 except SetupRequirementsError as e:
128 requirements += e.specifiers
129
130 return requirements
131
132 def run_setup(self, setup_script='setup.py'):
133 # Note that we can reuse our build directory between calls
134 # Correctness comes first, then optimization later
135 __file__ = setup_script
136 __name__ = '__main__'
137
138 with _open_setup_script(__file__) as f:
139 code = f.read().replace(r'\r\n', r'\n')
140
141 exec(compile(code, __file__, 'exec'), locals())
142
143 def get_requires_for_build_wheel(self, config_settings=None):
144 config_settings = self._fix_config(config_settings)
145 return self._get_build_requires(config_settings, requirements=['wheel'])
146
147 def get_requires_for_build_sdist(self, config_settings=None):
148 config_settings = self._fix_config(config_settings)
149 return self._get_build_requires(config_settings, requirements=[])
150
151 def prepare_metadata_for_build_wheel(self, metadata_directory,
152 config_settings=None):
153 sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',
154 _to_str(metadata_directory)]
155 self.run_setup()
156
157 dist_info_directory = metadata_directory
158 while True:
159 dist_infos = [f for f in os.listdir(dist_info_directory)
160 if f.endswith('.dist-info')]
161
162 if (len(dist_infos) == 0 and
163 len(_get_immediate_subdirectories(dist_info_directory)) == 1):
164
165 dist_info_directory = os.path.join(
166 dist_info_directory, os.listdir(dist_info_directory)[0])
167 continue
168
169 assert len(dist_infos) == 1
170 break
171
172 # PEP 517 requires that the .dist-info directory be placed in the
173 # metadata_directory. To comply, we MUST copy the directory to the root
174 if dist_info_directory != metadata_directory:
175 shutil.move(
176 os.path.join(dist_info_directory, dist_infos[0]),
177 metadata_directory)
178 shutil.rmtree(dist_info_directory, ignore_errors=True)
179
180 return dist_infos[0]
181
182 def build_wheel(self, wheel_directory, config_settings=None,
183 metadata_directory=None):
184 config_settings = self._fix_config(config_settings)
185 wheel_directory = os.path.abspath(wheel_directory)
186
187 # Build the wheel in a temporary directory, then copy to the target
188 with TemporaryDirectory(dir=wheel_directory) as tmp_dist_dir:
189 sys.argv = (sys.argv[:1] +
190 ['bdist_wheel', '--dist-dir', tmp_dist_dir] +
191 config_settings["--global-option"])
192 self.run_setup()
193
194 wheel_basename = _file_with_extension(tmp_dist_dir, '.whl')
195 wheel_path = os.path.join(wheel_directory, wheel_basename)
196 if os.path.exists(wheel_path):
197 # os.rename will fail overwriting on non-unix env
198 os.remove(wheel_path)
199 os.rename(os.path.join(tmp_dist_dir, wheel_basename), wheel_path)
200
201 return wheel_basename
202
203 def build_sdist(self, sdist_directory, config_settings=None):
204 config_settings = self._fix_config(config_settings)
205 sdist_directory = os.path.abspath(sdist_directory)
206 sys.argv = sys.argv[:1] + ['sdist', '--formats', 'gztar'] + \
207 config_settings["--global-option"] + \
208 ["--dist-dir", sdist_directory]
209 self.run_setup()
210
211 return _file_with_extension(sdist_directory, '.tar.gz')
212
213
214 class _BuildMetaLegacyBackend(_BuildMetaBackend):
215 """Compatibility backend for setuptools
216
217 This is a version of setuptools.build_meta that endeavors to maintain backwards
218 compatibility with pre-PEP 517 modes of invocation. It exists as a temporary
219 bridge between the old packaging mechanism and the new packaging mechanism,
220 and will eventually be removed.
221 """
222 def run_setup(self, setup_script='setup.py'):
223 # In order to maintain compatibility with scripts assuming that
224 # the setup.py script is in a directory on the PYTHONPATH, inject
225 # '' into sys.path. (pypa/setuptools#1642)
226 sys_path = list(sys.path) # Save the original path
227
228 script_dir = os.path.dirname(os.path.abspath(setup_script))
229 if script_dir not in sys.path:
230 sys.path.insert(0, script_dir)
231
232 try:
233 super(_BuildMetaLegacyBackend,
234 self).run_setup(setup_script=setup_script)
235 finally:
236 # While PEP 517 frontends should be calling each hook in a fresh
237 # subprocess according to the standard (and thus it should not be
238 # strictly necessary to restore the old sys.path), we'll restore
239 # the original path so that the path manipulation does not persist
240 # within the hook after run_setup is called.
241 sys.path[:] = sys_path
242
243 # The primary backend
244 _BACKEND = _BuildMetaBackend()
245
246 get_requires_for_build_wheel = _BACKEND.get_requires_for_build_wheel
247 get_requires_for_build_sdist = _BACKEND.get_requires_for_build_sdist
248 prepare_metadata_for_build_wheel = _BACKEND.prepare_metadata_for_build_wheel
249 build_wheel = _BACKEND.build_wheel
250 build_sdist = _BACKEND.build_sdist
251
252
253 # The legacy backend
254 __legacy__ = _BuildMetaLegacyBackend()
255
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setuptools/build_meta.py b/setuptools/build_meta.py
--- a/setuptools/build_meta.py
+++ b/setuptools/build_meta.py
@@ -38,6 +38,7 @@
from setuptools.py31compat import TemporaryDirectory
from pkg_resources import parse_requirements
+from pkg_resources.py31compat import makedirs
__all__ = ['get_requires_for_build_sdist',
'get_requires_for_build_wheel',
@@ -179,36 +180,38 @@
return dist_infos[0]
- def build_wheel(self, wheel_directory, config_settings=None,
- metadata_directory=None):
+ def _build_with_temp_dir(self, setup_command, result_extension,
+ result_directory, config_settings):
config_settings = self._fix_config(config_settings)
- wheel_directory = os.path.abspath(wheel_directory)
+ result_directory = os.path.abspath(result_directory)
- # Build the wheel in a temporary directory, then copy to the target
- with TemporaryDirectory(dir=wheel_directory) as tmp_dist_dir:
- sys.argv = (sys.argv[:1] +
- ['bdist_wheel', '--dist-dir', tmp_dist_dir] +
+ # Build in a temporary directory, then copy to the target.
+ makedirs(result_directory, exist_ok=True)
+ with TemporaryDirectory(dir=result_directory) as tmp_dist_dir:
+ sys.argv = (sys.argv[:1] + setup_command +
+ ['--dist-dir', tmp_dist_dir] +
config_settings["--global-option"])
self.run_setup()
- wheel_basename = _file_with_extension(tmp_dist_dir, '.whl')
- wheel_path = os.path.join(wheel_directory, wheel_basename)
- if os.path.exists(wheel_path):
- # os.rename will fail overwriting on non-unix env
- os.remove(wheel_path)
- os.rename(os.path.join(tmp_dist_dir, wheel_basename), wheel_path)
+ result_basename = _file_with_extension(tmp_dist_dir, result_extension)
+ result_path = os.path.join(result_directory, result_basename)
+ if os.path.exists(result_path):
+ # os.rename will fail overwriting on non-Unix.
+ os.remove(result_path)
+ os.rename(os.path.join(tmp_dist_dir, result_basename), result_path)
- return wheel_basename
+ return result_basename
- def build_sdist(self, sdist_directory, config_settings=None):
- config_settings = self._fix_config(config_settings)
- sdist_directory = os.path.abspath(sdist_directory)
- sys.argv = sys.argv[:1] + ['sdist', '--formats', 'gztar'] + \
- config_settings["--global-option"] + \
- ["--dist-dir", sdist_directory]
- self.run_setup()
- return _file_with_extension(sdist_directory, '.tar.gz')
+ def build_wheel(self, wheel_directory, config_settings=None,
+ metadata_directory=None):
+ return self._build_with_temp_dir(['bdist_wheel'], '.whl',
+ wheel_directory, config_settings)
+
+ def build_sdist(self, sdist_directory, config_settings=None):
+ return self._build_with_temp_dir(['sdist', '--formats', 'gztar'],
+ '.tar.gz', sdist_directory,
+ config_settings)
class _BuildMetaLegacyBackend(_BuildMetaBackend):
| {"golden_diff": "diff --git a/setuptools/build_meta.py b/setuptools/build_meta.py\n--- a/setuptools/build_meta.py\n+++ b/setuptools/build_meta.py\n@@ -38,6 +38,7 @@\n from setuptools.py31compat import TemporaryDirectory\n \n from pkg_resources import parse_requirements\n+from pkg_resources.py31compat import makedirs\n \n __all__ = ['get_requires_for_build_sdist',\n 'get_requires_for_build_wheel',\n@@ -179,36 +180,38 @@\n \n return dist_infos[0]\n \n- def build_wheel(self, wheel_directory, config_settings=None,\n- metadata_directory=None):\n+ def _build_with_temp_dir(self, setup_command, result_extension,\n+ result_directory, config_settings):\n config_settings = self._fix_config(config_settings)\n- wheel_directory = os.path.abspath(wheel_directory)\n+ result_directory = os.path.abspath(result_directory)\n \n- # Build the wheel in a temporary directory, then copy to the target\n- with TemporaryDirectory(dir=wheel_directory) as tmp_dist_dir:\n- sys.argv = (sys.argv[:1] +\n- ['bdist_wheel', '--dist-dir', tmp_dist_dir] +\n+ # Build in a temporary directory, then copy to the target.\n+ makedirs(result_directory, exist_ok=True)\n+ with TemporaryDirectory(dir=result_directory) as tmp_dist_dir:\n+ sys.argv = (sys.argv[:1] + setup_command +\n+ ['--dist-dir', tmp_dist_dir] +\n config_settings[\"--global-option\"])\n self.run_setup()\n \n- wheel_basename = _file_with_extension(tmp_dist_dir, '.whl')\n- wheel_path = os.path.join(wheel_directory, wheel_basename)\n- if os.path.exists(wheel_path):\n- # os.rename will fail overwriting on non-unix env\n- os.remove(wheel_path)\n- os.rename(os.path.join(tmp_dist_dir, wheel_basename), wheel_path)\n+ result_basename = _file_with_extension(tmp_dist_dir, result_extension)\n+ result_path = os.path.join(result_directory, result_basename)\n+ if os.path.exists(result_path):\n+ # os.rename will fail overwriting on non-Unix.\n+ os.remove(result_path)\n+ os.rename(os.path.join(tmp_dist_dir, result_basename), result_path)\n \n- return wheel_basename\n+ return result_basename\n \n- def build_sdist(self, sdist_directory, config_settings=None):\n- config_settings = self._fix_config(config_settings)\n- sdist_directory = os.path.abspath(sdist_directory)\n- sys.argv = sys.argv[:1] + ['sdist', '--formats', 'gztar'] + \\\n- config_settings[\"--global-option\"] + \\\n- [\"--dist-dir\", sdist_directory]\n- self.run_setup()\n \n- return _file_with_extension(sdist_directory, '.tar.gz')\n+ def build_wheel(self, wheel_directory, config_settings=None,\n+ metadata_directory=None):\n+ return self._build_with_temp_dir(['bdist_wheel'], '.whl',\n+ wheel_directory, config_settings)\n+\n+ def build_sdist(self, sdist_directory, config_settings=None):\n+ return self._build_with_temp_dir(['sdist', '--formats', 'gztar'],\n+ '.tar.gz', sdist_directory,\n+ config_settings)\n \n \n class _BuildMetaLegacyBackend(_BuildMetaBackend):\n", "issue": "`build_meta.build_sdist` should work if the destination directory already contains a tar.gz\nThe issue is similar to #1671, see #1745 for how the issue can be resolved.\n", "before_files": [{"content": "\"\"\"A PEP 517 interface to setuptools\n\nPreviously, when a user or a command line tool (let's call it a \"frontend\")\nneeded to make a request of setuptools to take a certain action, for\nexample, generating a list of installation requirements, the frontend would\nwould call \"setup.py egg_info\" or \"setup.py bdist_wheel\" on the command line.\n\nPEP 517 defines a different method of interfacing with setuptools. 
Rather\nthan calling \"setup.py\" directly, the frontend should:\n\n 1. Set the current directory to the directory with a setup.py file\n 2. Import this module into a safe python interpreter (one in which\n setuptools can potentially set global variables or crash hard).\n 3. Call one of the functions defined in PEP 517.\n\nWhat each function does is defined in PEP 517. However, here is a \"casual\"\ndefinition of the functions (this definition should not be relied on for\nbug reports or API stability):\n\n - `build_wheel`: build a wheel in the folder and return the basename\n - `get_requires_for_build_wheel`: get the `setup_requires` to build\n - `prepare_metadata_for_build_wheel`: get the `install_requires`\n - `build_sdist`: build an sdist in the folder and return the basename\n - `get_requires_for_build_sdist`: get the `setup_requires` to build\n\nAgain, this is not a formal definition! Just a \"taste\" of the module.\n\"\"\"\n\nimport io\nimport os\nimport sys\nimport tokenize\nimport shutil\nimport contextlib\n\nimport setuptools\nimport distutils\nfrom setuptools.py31compat import TemporaryDirectory\n\nfrom pkg_resources import parse_requirements\n\n__all__ = ['get_requires_for_build_sdist',\n 'get_requires_for_build_wheel',\n 'prepare_metadata_for_build_wheel',\n 'build_wheel',\n 'build_sdist',\n '__legacy__',\n 'SetupRequirementsError']\n\nclass SetupRequirementsError(BaseException):\n def __init__(self, specifiers):\n self.specifiers = specifiers\n\n\nclass Distribution(setuptools.dist.Distribution):\n def fetch_build_eggs(self, specifiers):\n specifier_list = list(map(str, parse_requirements(specifiers)))\n\n raise SetupRequirementsError(specifier_list)\n\n @classmethod\n @contextlib.contextmanager\n def patch(cls):\n \"\"\"\n Replace\n distutils.dist.Distribution with this class\n for the duration of this context.\n \"\"\"\n orig = distutils.core.Distribution\n distutils.core.Distribution = cls\n try:\n yield\n finally:\n distutils.core.Distribution = orig\n\n\ndef _to_str(s):\n \"\"\"\n Convert a filename to a string (on Python 2, explicitly\n a byte string, not Unicode) as distutils checks for the\n exact type str.\n \"\"\"\n if sys.version_info[0] == 2 and not isinstance(s, str):\n # Assume it's Unicode, as that's what the PEP says\n # should be provided.\n return s.encode(sys.getfilesystemencoding())\n return s\n\n\ndef _get_immediate_subdirectories(a_dir):\n return [name for name in os.listdir(a_dir)\n if os.path.isdir(os.path.join(a_dir, name))]\n\n\ndef _file_with_extension(directory, extension):\n matching = (\n f for f in os.listdir(directory)\n if f.endswith(extension)\n )\n file, = matching\n return file\n\n\ndef _open_setup_script(setup_script):\n if not os.path.exists(setup_script):\n # Supply a default setup.py\n return io.StringIO(u\"from setuptools import setup; setup()\")\n\n return getattr(tokenize, 'open', open)(setup_script)\n\n\nclass _BuildMetaBackend(object):\n\n def _fix_config(self, config_settings):\n config_settings = config_settings or {}\n config_settings.setdefault('--global-option', [])\n return config_settings\n\n def _get_build_requires(self, config_settings, requirements):\n config_settings = self._fix_config(config_settings)\n\n sys.argv = sys.argv[:1] + ['egg_info'] + \\\n config_settings[\"--global-option\"]\n try:\n with Distribution.patch():\n self.run_setup()\n except SetupRequirementsError as e:\n requirements += e.specifiers\n\n return requirements\n\n def run_setup(self, setup_script='setup.py'):\n # Note that we can reuse our build 
directory between calls\n # Correctness comes first, then optimization later\n __file__ = setup_script\n __name__ = '__main__'\n\n with _open_setup_script(__file__) as f:\n code = f.read().replace(r'\\r\\n', r'\\n')\n\n exec(compile(code, __file__, 'exec'), locals())\n\n def get_requires_for_build_wheel(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(config_settings, requirements=['wheel'])\n\n def get_requires_for_build_sdist(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(config_settings, requirements=[])\n\n def prepare_metadata_for_build_wheel(self, metadata_directory,\n config_settings=None):\n sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',\n _to_str(metadata_directory)]\n self.run_setup()\n\n dist_info_directory = metadata_directory\n while True:\n dist_infos = [f for f in os.listdir(dist_info_directory)\n if f.endswith('.dist-info')]\n\n if (len(dist_infos) == 0 and\n len(_get_immediate_subdirectories(dist_info_directory)) == 1):\n\n dist_info_directory = os.path.join(\n dist_info_directory, os.listdir(dist_info_directory)[0])\n continue\n\n assert len(dist_infos) == 1\n break\n\n # PEP 517 requires that the .dist-info directory be placed in the\n # metadata_directory. To comply, we MUST copy the directory to the root\n if dist_info_directory != metadata_directory:\n shutil.move(\n os.path.join(dist_info_directory, dist_infos[0]),\n metadata_directory)\n shutil.rmtree(dist_info_directory, ignore_errors=True)\n\n return dist_infos[0]\n\n def build_wheel(self, wheel_directory, config_settings=None,\n metadata_directory=None):\n config_settings = self._fix_config(config_settings)\n wheel_directory = os.path.abspath(wheel_directory)\n\n # Build the wheel in a temporary directory, then copy to the target\n with TemporaryDirectory(dir=wheel_directory) as tmp_dist_dir:\n sys.argv = (sys.argv[:1] +\n ['bdist_wheel', '--dist-dir', tmp_dist_dir] +\n config_settings[\"--global-option\"])\n self.run_setup()\n\n wheel_basename = _file_with_extension(tmp_dist_dir, '.whl')\n wheel_path = os.path.join(wheel_directory, wheel_basename)\n if os.path.exists(wheel_path):\n # os.rename will fail overwriting on non-unix env\n os.remove(wheel_path)\n os.rename(os.path.join(tmp_dist_dir, wheel_basename), wheel_path)\n\n return wheel_basename\n\n def build_sdist(self, sdist_directory, config_settings=None):\n config_settings = self._fix_config(config_settings)\n sdist_directory = os.path.abspath(sdist_directory)\n sys.argv = sys.argv[:1] + ['sdist', '--formats', 'gztar'] + \\\n config_settings[\"--global-option\"] + \\\n [\"--dist-dir\", sdist_directory]\n self.run_setup()\n\n return _file_with_extension(sdist_directory, '.tar.gz')\n\n\nclass _BuildMetaLegacyBackend(_BuildMetaBackend):\n \"\"\"Compatibility backend for setuptools\n\n This is a version of setuptools.build_meta that endeavors to maintain backwards\n compatibility with pre-PEP 517 modes of invocation. It exists as a temporary\n bridge between the old packaging mechanism and the new packaging mechanism,\n and will eventually be removed.\n \"\"\"\n def run_setup(self, setup_script='setup.py'):\n # In order to maintain compatibility with scripts assuming that\n # the setup.py script is in a directory on the PYTHONPATH, inject\n # '' into sys.path. 
(pypa/setuptools#1642)\n sys_path = list(sys.path) # Save the original path\n\n script_dir = os.path.dirname(os.path.abspath(setup_script))\n if script_dir not in sys.path:\n sys.path.insert(0, script_dir)\n\n try:\n super(_BuildMetaLegacyBackend,\n self).run_setup(setup_script=setup_script)\n finally:\n # While PEP 517 frontends should be calling each hook in a fresh\n # subprocess according to the standard (and thus it should not be\n # strictly necessary to restore the old sys.path), we'll restore\n # the original path so that the path manipulation does not persist\n # within the hook after run_setup is called.\n sys.path[:] = sys_path\n\n# The primary backend\n_BACKEND = _BuildMetaBackend()\n\nget_requires_for_build_wheel = _BACKEND.get_requires_for_build_wheel\nget_requires_for_build_sdist = _BACKEND.get_requires_for_build_sdist\nprepare_metadata_for_build_wheel = _BACKEND.prepare_metadata_for_build_wheel\nbuild_wheel = _BACKEND.build_wheel\nbuild_sdist = _BACKEND.build_sdist\n\n\n# The legacy backend\n__legacy__ = _BuildMetaLegacyBackend()\n", "path": "setuptools/build_meta.py"}], "after_files": [{"content": "\"\"\"A PEP 517 interface to setuptools\n\nPreviously, when a user or a command line tool (let's call it a \"frontend\")\nneeded to make a request of setuptools to take a certain action, for\nexample, generating a list of installation requirements, the frontend would\nwould call \"setup.py egg_info\" or \"setup.py bdist_wheel\" on the command line.\n\nPEP 517 defines a different method of interfacing with setuptools. Rather\nthan calling \"setup.py\" directly, the frontend should:\n\n 1. Set the current directory to the directory with a setup.py file\n 2. Import this module into a safe python interpreter (one in which\n setuptools can potentially set global variables or crash hard).\n 3. Call one of the functions defined in PEP 517.\n\nWhat each function does is defined in PEP 517. However, here is a \"casual\"\ndefinition of the functions (this definition should not be relied on for\nbug reports or API stability):\n\n - `build_wheel`: build a wheel in the folder and return the basename\n - `get_requires_for_build_wheel`: get the `setup_requires` to build\n - `prepare_metadata_for_build_wheel`: get the `install_requires`\n - `build_sdist`: build an sdist in the folder and return the basename\n - `get_requires_for_build_sdist`: get the `setup_requires` to build\n\nAgain, this is not a formal definition! 
Just a \"taste\" of the module.\n\"\"\"\n\nimport io\nimport os\nimport sys\nimport tokenize\nimport shutil\nimport contextlib\n\nimport setuptools\nimport distutils\nfrom setuptools.py31compat import TemporaryDirectory\n\nfrom pkg_resources import parse_requirements\nfrom pkg_resources.py31compat import makedirs\n\n__all__ = ['get_requires_for_build_sdist',\n 'get_requires_for_build_wheel',\n 'prepare_metadata_for_build_wheel',\n 'build_wheel',\n 'build_sdist',\n '__legacy__',\n 'SetupRequirementsError']\n\nclass SetupRequirementsError(BaseException):\n def __init__(self, specifiers):\n self.specifiers = specifiers\n\n\nclass Distribution(setuptools.dist.Distribution):\n def fetch_build_eggs(self, specifiers):\n specifier_list = list(map(str, parse_requirements(specifiers)))\n\n raise SetupRequirementsError(specifier_list)\n\n @classmethod\n @contextlib.contextmanager\n def patch(cls):\n \"\"\"\n Replace\n distutils.dist.Distribution with this class\n for the duration of this context.\n \"\"\"\n orig = distutils.core.Distribution\n distutils.core.Distribution = cls\n try:\n yield\n finally:\n distutils.core.Distribution = orig\n\n\ndef _to_str(s):\n \"\"\"\n Convert a filename to a string (on Python 2, explicitly\n a byte string, not Unicode) as distutils checks for the\n exact type str.\n \"\"\"\n if sys.version_info[0] == 2 and not isinstance(s, str):\n # Assume it's Unicode, as that's what the PEP says\n # should be provided.\n return s.encode(sys.getfilesystemencoding())\n return s\n\n\ndef _get_immediate_subdirectories(a_dir):\n return [name for name in os.listdir(a_dir)\n if os.path.isdir(os.path.join(a_dir, name))]\n\n\ndef _file_with_extension(directory, extension):\n matching = (\n f for f in os.listdir(directory)\n if f.endswith(extension)\n )\n file, = matching\n return file\n\n\ndef _open_setup_script(setup_script):\n if not os.path.exists(setup_script):\n # Supply a default setup.py\n return io.StringIO(u\"from setuptools import setup; setup()\")\n\n return getattr(tokenize, 'open', open)(setup_script)\n\n\nclass _BuildMetaBackend(object):\n\n def _fix_config(self, config_settings):\n config_settings = config_settings or {}\n config_settings.setdefault('--global-option', [])\n return config_settings\n\n def _get_build_requires(self, config_settings, requirements):\n config_settings = self._fix_config(config_settings)\n\n sys.argv = sys.argv[:1] + ['egg_info'] + \\\n config_settings[\"--global-option\"]\n try:\n with Distribution.patch():\n self.run_setup()\n except SetupRequirementsError as e:\n requirements += e.specifiers\n\n return requirements\n\n def run_setup(self, setup_script='setup.py'):\n # Note that we can reuse our build directory between calls\n # Correctness comes first, then optimization later\n __file__ = setup_script\n __name__ = '__main__'\n\n with _open_setup_script(__file__) as f:\n code = f.read().replace(r'\\r\\n', r'\\n')\n\n exec(compile(code, __file__, 'exec'), locals())\n\n def get_requires_for_build_wheel(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(config_settings, requirements=['wheel'])\n\n def get_requires_for_build_sdist(self, config_settings=None):\n config_settings = self._fix_config(config_settings)\n return self._get_build_requires(config_settings, requirements=[])\n\n def prepare_metadata_for_build_wheel(self, metadata_directory,\n config_settings=None):\n sys.argv = sys.argv[:1] + ['dist_info', '--egg-base',\n _to_str(metadata_directory)]\n self.run_setup()\n\n 
dist_info_directory = metadata_directory\n while True:\n dist_infos = [f for f in os.listdir(dist_info_directory)\n if f.endswith('.dist-info')]\n\n if (len(dist_infos) == 0 and\n len(_get_immediate_subdirectories(dist_info_directory)) == 1):\n\n dist_info_directory = os.path.join(\n dist_info_directory, os.listdir(dist_info_directory)[0])\n continue\n\n assert len(dist_infos) == 1\n break\n\n # PEP 517 requires that the .dist-info directory be placed in the\n # metadata_directory. To comply, we MUST copy the directory to the root\n if dist_info_directory != metadata_directory:\n shutil.move(\n os.path.join(dist_info_directory, dist_infos[0]),\n metadata_directory)\n shutil.rmtree(dist_info_directory, ignore_errors=True)\n\n return dist_infos[0]\n\n def _build_with_temp_dir(self, setup_command, result_extension,\n result_directory, config_settings):\n config_settings = self._fix_config(config_settings)\n result_directory = os.path.abspath(result_directory)\n\n # Build in a temporary directory, then copy to the target.\n makedirs(result_directory, exist_ok=True)\n with TemporaryDirectory(dir=result_directory) as tmp_dist_dir:\n sys.argv = (sys.argv[:1] + setup_command +\n ['--dist-dir', tmp_dist_dir] +\n config_settings[\"--global-option\"])\n self.run_setup()\n\n result_basename = _file_with_extension(tmp_dist_dir, result_extension)\n result_path = os.path.join(result_directory, result_basename)\n if os.path.exists(result_path):\n # os.rename will fail overwriting on non-Unix.\n os.remove(result_path)\n os.rename(os.path.join(tmp_dist_dir, result_basename), result_path)\n\n return result_basename\n\n\n def build_wheel(self, wheel_directory, config_settings=None,\n metadata_directory=None):\n return self._build_with_temp_dir(['bdist_wheel'], '.whl',\n wheel_directory, config_settings)\n\n def build_sdist(self, sdist_directory, config_settings=None):\n return self._build_with_temp_dir(['sdist', '--formats', 'gztar'],\n '.tar.gz', sdist_directory,\n config_settings)\n\n\nclass _BuildMetaLegacyBackend(_BuildMetaBackend):\n \"\"\"Compatibility backend for setuptools\n\n This is a version of setuptools.build_meta that endeavors to maintain backwards\n compatibility with pre-PEP 517 modes of invocation. It exists as a temporary\n bridge between the old packaging mechanism and the new packaging mechanism,\n and will eventually be removed.\n \"\"\"\n def run_setup(self, setup_script='setup.py'):\n # In order to maintain compatibility with scripts assuming that\n # the setup.py script is in a directory on the PYTHONPATH, inject\n # '' into sys.path. 
(pypa/setuptools#1642)\n sys_path = list(sys.path) # Save the original path\n\n script_dir = os.path.dirname(os.path.abspath(setup_script))\n if script_dir not in sys.path:\n sys.path.insert(0, script_dir)\n\n try:\n super(_BuildMetaLegacyBackend,\n self).run_setup(setup_script=setup_script)\n finally:\n # While PEP 517 frontends should be calling each hook in a fresh\n # subprocess according to the standard (and thus it should not be\n # strictly necessary to restore the old sys.path), we'll restore\n # the original path so that the path manipulation does not persist\n # within the hook after run_setup is called.\n sys.path[:] = sys_path\n\n# The primary backend\n_BACKEND = _BuildMetaBackend()\n\nget_requires_for_build_wheel = _BACKEND.get_requires_for_build_wheel\nget_requires_for_build_sdist = _BACKEND.get_requires_for_build_sdist\nprepare_metadata_for_build_wheel = _BACKEND.prepare_metadata_for_build_wheel\nbuild_wheel = _BACKEND.build_wheel\nbuild_sdist = _BACKEND.build_sdist\n\n\n# The legacy backend\n__legacy__ = _BuildMetaLegacyBackend()\n", "path": "setuptools/build_meta.py"}]} | 3,008 | 731 |
gh_patches_debug_2583 | rasdani/github-patches | git_diff | searxng__searxng-2081 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DuckDuckGo returning "access denied" errors
<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
2023.01.06-b241015e
**How did you install SearXNG?**
searxng-docker
**What happened?**
DuckDuckGo started returning "access denied" error messages. Very similar to previous issue #1854
**How To Reproduce**
Enable DuckDuckGo and try to search anything.
**Expected behavior**
DDG results should return and no "Access Denied" error message should be displayed.
**Screenshots & Logs**
Error message in question:

- Exception: searx.exceptions.SearxEngineAccessDeniedException
- Parameter: HTTP error 403
- Filename: searx/search/processors/online.py:113
- Function: _send_http_request
- Code: response = req(params['url'], **request_args)
**Additional context**
It looks like it can be fixed by adding an HTTP `Referer` header to the request.
DuckDuckGo returning "access denied" errors
<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
2023.01.06-b241015e
**How did you install SearXNG?**
searxng-docker
**What happened?**
DuckDuckGo started returning "access denied" error messages. Very similar to previous issue #1854
**How To Reproduce**
Enable DuckDuckGo and try to search anything.
**Expected behavior**
DDG results should return and no "Access Denied" error message should be displayed.
**Screenshots & Logs**
Error message in question:

- Exception: searx.exceptions.SearxEngineAccessDeniedException
- Parameter: HTTP error 403
- Filename: searx/search/processors/online.py:113
- Function: _send_http_request
- Code: response = req(params['url'], **request_args)
**Additional context**
It looks like it can be fixed by adding an HTTP `Referer` header to the request.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/duckduckgo.py`
Content:
```
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 """DuckDuckGo Lite
4 """
5
6 from json import loads
7
8 from lxml.html import fromstring
9
10 from searx.utils import (
11 dict_subset,
12 eval_xpath,
13 eval_xpath_getindex,
14 extract_text,
15 match_language,
16 )
17 from searx.network import get
18
19 # about
20 about = {
21 "website": 'https://lite.duckduckgo.com/lite/',
22 "wikidata_id": 'Q12805',
23 "official_api_documentation": 'https://duckduckgo.com/api',
24 "use_official_api": False,
25 "require_api_key": False,
26 "results": 'HTML',
27 }
28
29 # engine dependent config
30 categories = ['general', 'web']
31 paging = True
32 supported_languages_url = 'https://duckduckgo.com/util/u588.js'
33 time_range_support = True
34 send_accept_language_header = True
35
36 language_aliases = {
37 'ar-SA': 'ar-XA',
38 'es-419': 'es-XL',
39 'ja': 'jp-JP',
40 'ko': 'kr-KR',
41 'sl-SI': 'sl-SL',
42 'zh-TW': 'tzh-TW',
43 'zh-HK': 'tzh-HK',
44 }
45
46 time_range_dict = {'day': 'd', 'week': 'w', 'month': 'm', 'year': 'y'}
47
48 # search-url
49 url = 'https://lite.duckduckgo.com/lite/'
50 url_ping = 'https://duckduckgo.com/t/sl_l'
51
52 # match query's language to a region code that duckduckgo will accept
53 def get_region_code(lang, lang_list=None):
54 if lang == 'all':
55 return None
56
57 lang_code = match_language(lang, lang_list or [], language_aliases, 'wt-WT')
58 lang_parts = lang_code.split('-')
59
60 # country code goes first
61 return lang_parts[1].lower() + '-' + lang_parts[0].lower()
62
63
64 def request(query, params):
65
66 params['url'] = url
67 params['method'] = 'POST'
68
69 params['data']['q'] = query
70
71 # The API is not documented, so we do some reverse engineering and emulate
72 # what https://lite.duckduckgo.com/lite/ does when you press "next Page"
73 # link again and again ..
74
75 params['headers']['Content-Type'] = 'application/x-www-form-urlencoded'
76
77 # initial page does not have an offset
78 if params['pageno'] == 2:
79 # second page does have an offset of 30
80 offset = (params['pageno'] - 1) * 30
81 params['data']['s'] = offset
82 params['data']['dc'] = offset + 1
83
84 elif params['pageno'] > 2:
85 # third and following pages do have an offset of 30 + n*50
86 offset = 30 + (params['pageno'] - 2) * 50
87 params['data']['s'] = offset
88 params['data']['dc'] = offset + 1
89
90 # initial page does not have additional data in the input form
91 if params['pageno'] > 1:
92 # request the second page (and more pages) needs 'o' and 'api' arguments
93 params['data']['o'] = 'json'
94 params['data']['api'] = 'd.js'
95
96 # initial page does not have additional data in the input form
97 if params['pageno'] > 2:
98 # request the third page (and more pages) some more arguments
99 params['data']['nextParams'] = ''
100 params['data']['v'] = ''
101 params['data']['vqd'] = ''
102
103 region_code = get_region_code(params['language'], supported_languages)
104 if region_code:
105 params['data']['kl'] = region_code
106 params['cookies']['kl'] = region_code
107
108 params['data']['df'] = ''
109 if params['time_range'] in time_range_dict:
110 params['data']['df'] = time_range_dict[params['time_range']]
111 params['cookies']['df'] = time_range_dict[params['time_range']]
112
113 logger.debug("param data: %s", params['data'])
114 logger.debug("param cookies: %s", params['cookies'])
115 return params
116
117
118 # get response from search-request
119 def response(resp):
120
121 headers_ping = dict_subset(resp.request.headers, ['User-Agent', 'Accept-Encoding', 'Accept', 'Cookie'])
122 get(url_ping, headers=headers_ping)
123
124 if resp.status_code == 303:
125 return []
126
127 results = []
128 doc = fromstring(resp.text)
129
130 result_table = eval_xpath(doc, '//html/body/form/div[@class="filters"]/table')
131 if not len(result_table) >= 3:
132 # no more results
133 return []
134 result_table = result_table[2]
135
136 tr_rows = eval_xpath(result_table, './/tr')
137
138 # In the last <tr> is the form of the 'previous/next page' links
139 tr_rows = tr_rows[:-1]
140
141 len_tr_rows = len(tr_rows)
142 offset = 0
143
144 while len_tr_rows >= offset + 4:
145
146 # assemble table rows we need to scrap
147 tr_title = tr_rows[offset]
148 tr_content = tr_rows[offset + 1]
149 offset += 4
150
151 # ignore sponsored Adds <tr class="result-sponsored">
152 if tr_content.get('class') == 'result-sponsored':
153 continue
154
155 a_tag = eval_xpath_getindex(tr_title, './/td//a[@class="result-link"]', 0, None)
156 if a_tag is None:
157 continue
158
159 td_content = eval_xpath_getindex(tr_content, './/td[@class="result-snippet"]', 0, None)
160 if td_content is None:
161 continue
162
163 results.append(
164 {
165 'title': a_tag.text_content(),
166 'content': extract_text(td_content),
167 'url': a_tag.get('href'),
168 }
169 )
170
171 return results
172
173
174 # get supported languages from their site
175 def _fetch_supported_languages(resp):
176
177 # response is a js file with regions as an embedded object
178 response_page = resp.text
179 response_page = response_page[response_page.find('regions:{') + 8 :]
180 response_page = response_page[: response_page.find('}') + 1]
181
182 regions_json = loads(response_page)
183 supported_languages = map((lambda x: x[3:] + '-' + x[:2].upper()), regions_json.keys())
184
185 return list(supported_languages)
186
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/engines/duckduckgo.py b/searx/engines/duckduckgo.py
--- a/searx/engines/duckduckgo.py
+++ b/searx/engines/duckduckgo.py
@@ -73,6 +73,7 @@
# link again and again ..
params['headers']['Content-Type'] = 'application/x-www-form-urlencoded'
+ params['headers']['Referer'] = 'https://lite.duckduckgo.com/'
# initial page does not have an offset
if params['pageno'] == 2:
| {"golden_diff": "diff --git a/searx/engines/duckduckgo.py b/searx/engines/duckduckgo.py\n--- a/searx/engines/duckduckgo.py\n+++ b/searx/engines/duckduckgo.py\n@@ -73,6 +73,7 @@\n # link again and again ..\n \n params['headers']['Content-Type'] = 'application/x-www-form-urlencoded'\n+ params['headers']['Referer'] = 'https://lite.duckduckgo.com/'\n \n # initial page does not have an offset\n if params['pageno'] == 2:\n", "issue": "DuckDuckGo returning \"access denied\" errors\n<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->\r\n\r\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\n2023.01.06-b241015e\r\n\r\n**How did you install SearXNG?**\r\nsearxng-docker\r\n\r\n**What happened?**\r\nDuckDuckGo started returning \"access denied\" error messages. Very similar to previous issue #1854 \r\n\r\n**How To Reproduce**\r\nEnable DuckDuckGo and try to search anything.\r\n\r\n**Expected behavior**\r\nDDG results should return and no \"Access Denied\" error message should be displayed.\r\n\r\n**Screenshots & Logs**\r\nError message in question:\r\n\r\n\r\n- Exception: searx.exceptions.SearxEngineAccessDeniedException\r\n- Parameter: HTTP error 403\r\n- Filename: searx/search/processors/online.py:113\r\n- Function: _send_http_request\r\n- Code: response = req(params['url'], **request_args)\r\n\r\n**Additional context**\r\n\r\nIt looks like it can be fixed by adding a HTTP `Referer` header to the request.\nDuckDuckGo returning \"access denied\" errors\n<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->\r\n\r\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\n2023.01.06-b241015e\r\n\r\n**How did you install SearXNG?**\r\nsearxng-docker\r\n\r\n**What happened?**\r\nDuckDuckGo started returning \"access denied\" error messages. 
Very similar to previous issue #1854 \r\n\r\n**How To Reproduce**\r\nEnable DuckDuckGo and try to search anything.\r\n\r\n**Expected behavior**\r\nDDG results should return and no \"Access Denied\" error message should be displayed.\r\n\r\n**Screenshots & Logs**\r\nError message in question:\r\n\r\n\r\n- Exception: searx.exceptions.SearxEngineAccessDeniedException\r\n- Parameter: HTTP error 403\r\n- Filename: searx/search/processors/online.py:113\r\n- Function: _send_http_request\r\n- Code: response = req(params['url'], **request_args)\r\n\r\n**Additional context**\r\n\r\nIt looks like it can be fixed by adding a HTTP `Referer` header to the request.\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"DuckDuckGo Lite\n\"\"\"\n\nfrom json import loads\n\nfrom lxml.html import fromstring\n\nfrom searx.utils import (\n dict_subset,\n eval_xpath,\n eval_xpath_getindex,\n extract_text,\n match_language,\n)\nfrom searx.network import get\n\n# about\nabout = {\n \"website\": 'https://lite.duckduckgo.com/lite/',\n \"wikidata_id\": 'Q12805',\n \"official_api_documentation\": 'https://duckduckgo.com/api',\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": 'HTML',\n}\n\n# engine dependent config\ncategories = ['general', 'web']\npaging = True\nsupported_languages_url = 'https://duckduckgo.com/util/u588.js'\ntime_range_support = True\nsend_accept_language_header = True\n\nlanguage_aliases = {\n 'ar-SA': 'ar-XA',\n 'es-419': 'es-XL',\n 'ja': 'jp-JP',\n 'ko': 'kr-KR',\n 'sl-SI': 'sl-SL',\n 'zh-TW': 'tzh-TW',\n 'zh-HK': 'tzh-HK',\n}\n\ntime_range_dict = {'day': 'd', 'week': 'w', 'month': 'm', 'year': 'y'}\n\n# search-url\nurl = 'https://lite.duckduckgo.com/lite/'\nurl_ping = 'https://duckduckgo.com/t/sl_l'\n\n# match query's language to a region code that duckduckgo will accept\ndef get_region_code(lang, lang_list=None):\n if lang == 'all':\n return None\n\n lang_code = match_language(lang, lang_list or [], language_aliases, 'wt-WT')\n lang_parts = lang_code.split('-')\n\n # country code goes first\n return lang_parts[1].lower() + '-' + lang_parts[0].lower()\n\n\ndef request(query, params):\n\n params['url'] = url\n params['method'] = 'POST'\n\n params['data']['q'] = query\n\n # The API is not documented, so we do some reverse engineering and emulate\n # what https://lite.duckduckgo.com/lite/ does when you press \"next Page\"\n # link again and again ..\n\n params['headers']['Content-Type'] = 'application/x-www-form-urlencoded'\n\n # initial page does not have an offset\n if params['pageno'] == 2:\n # second page does have an offset of 30\n offset = (params['pageno'] - 1) * 30\n params['data']['s'] = offset\n params['data']['dc'] = offset + 1\n\n elif params['pageno'] > 2:\n # third and following pages do have an offset of 30 + n*50\n offset = 30 + (params['pageno'] - 2) * 50\n params['data']['s'] = offset\n params['data']['dc'] = offset + 1\n\n # initial page does not have additional data in the input form\n if params['pageno'] > 1:\n # request the second page (and more pages) needs 'o' and 'api' arguments\n params['data']['o'] = 'json'\n params['data']['api'] = 'd.js'\n\n # initial page does not have additional data in the input form\n if params['pageno'] > 2:\n # request the third page (and more pages) some more arguments\n params['data']['nextParams'] = ''\n params['data']['v'] = ''\n params['data']['vqd'] = ''\n\n region_code = get_region_code(params['language'], supported_languages)\n if region_code:\n params['data']['kl'] = 
region_code\n params['cookies']['kl'] = region_code\n\n params['data']['df'] = ''\n if params['time_range'] in time_range_dict:\n params['data']['df'] = time_range_dict[params['time_range']]\n params['cookies']['df'] = time_range_dict[params['time_range']]\n\n logger.debug(\"param data: %s\", params['data'])\n logger.debug(\"param cookies: %s\", params['cookies'])\n return params\n\n\n# get response from search-request\ndef response(resp):\n\n headers_ping = dict_subset(resp.request.headers, ['User-Agent', 'Accept-Encoding', 'Accept', 'Cookie'])\n get(url_ping, headers=headers_ping)\n\n if resp.status_code == 303:\n return []\n\n results = []\n doc = fromstring(resp.text)\n\n result_table = eval_xpath(doc, '//html/body/form/div[@class=\"filters\"]/table')\n if not len(result_table) >= 3:\n # no more results\n return []\n result_table = result_table[2]\n\n tr_rows = eval_xpath(result_table, './/tr')\n\n # In the last <tr> is the form of the 'previous/next page' links\n tr_rows = tr_rows[:-1]\n\n len_tr_rows = len(tr_rows)\n offset = 0\n\n while len_tr_rows >= offset + 4:\n\n # assemble table rows we need to scrap\n tr_title = tr_rows[offset]\n tr_content = tr_rows[offset + 1]\n offset += 4\n\n # ignore sponsored Adds <tr class=\"result-sponsored\">\n if tr_content.get('class') == 'result-sponsored':\n continue\n\n a_tag = eval_xpath_getindex(tr_title, './/td//a[@class=\"result-link\"]', 0, None)\n if a_tag is None:\n continue\n\n td_content = eval_xpath_getindex(tr_content, './/td[@class=\"result-snippet\"]', 0, None)\n if td_content is None:\n continue\n\n results.append(\n {\n 'title': a_tag.text_content(),\n 'content': extract_text(td_content),\n 'url': a_tag.get('href'),\n }\n )\n\n return results\n\n\n# get supported languages from their site\ndef _fetch_supported_languages(resp):\n\n # response is a js file with regions as an embedded object\n response_page = resp.text\n response_page = response_page[response_page.find('regions:{') + 8 :]\n response_page = response_page[: response_page.find('}') + 1]\n\n regions_json = loads(response_page)\n supported_languages = map((lambda x: x[3:] + '-' + x[:2].upper()), regions_json.keys())\n\n return list(supported_languages)\n", "path": "searx/engines/duckduckgo.py"}], "after_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"DuckDuckGo Lite\n\"\"\"\n\nfrom json import loads\n\nfrom lxml.html import fromstring\n\nfrom searx.utils import (\n dict_subset,\n eval_xpath,\n eval_xpath_getindex,\n extract_text,\n match_language,\n)\nfrom searx.network import get\n\n# about\nabout = {\n \"website\": 'https://lite.duckduckgo.com/lite/',\n \"wikidata_id\": 'Q12805',\n \"official_api_documentation\": 'https://duckduckgo.com/api',\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": 'HTML',\n}\n\n# engine dependent config\ncategories = ['general', 'web']\npaging = True\nsupported_languages_url = 'https://duckduckgo.com/util/u588.js'\ntime_range_support = True\nsend_accept_language_header = True\n\nlanguage_aliases = {\n 'ar-SA': 'ar-XA',\n 'es-419': 'es-XL',\n 'ja': 'jp-JP',\n 'ko': 'kr-KR',\n 'sl-SI': 'sl-SL',\n 'zh-TW': 'tzh-TW',\n 'zh-HK': 'tzh-HK',\n}\n\ntime_range_dict = {'day': 'd', 'week': 'w', 'month': 'm', 'year': 'y'}\n\n# search-url\nurl = 'https://lite.duckduckgo.com/lite/'\nurl_ping = 'https://duckduckgo.com/t/sl_l'\n\n# match query's language to a region code that duckduckgo will accept\ndef get_region_code(lang, lang_list=None):\n if lang == 'all':\n return None\n\n lang_code = 
match_language(lang, lang_list or [], language_aliases, 'wt-WT')\n lang_parts = lang_code.split('-')\n\n # country code goes first\n return lang_parts[1].lower() + '-' + lang_parts[0].lower()\n\n\ndef request(query, params):\n\n params['url'] = url\n params['method'] = 'POST'\n\n params['data']['q'] = query\n\n # The API is not documented, so we do some reverse engineering and emulate\n # what https://lite.duckduckgo.com/lite/ does when you press \"next Page\"\n # link again and again ..\n\n params['headers']['Content-Type'] = 'application/x-www-form-urlencoded'\n params['headers']['Referer'] = 'https://lite.duckduckgo.com/'\n\n # initial page does not have an offset\n if params['pageno'] == 2:\n # second page does have an offset of 30\n offset = (params['pageno'] - 1) * 30\n params['data']['s'] = offset\n params['data']['dc'] = offset + 1\n\n elif params['pageno'] > 2:\n # third and following pages do have an offset of 30 + n*50\n offset = 30 + (params['pageno'] - 2) * 50\n params['data']['s'] = offset\n params['data']['dc'] = offset + 1\n\n # initial page does not have additional data in the input form\n if params['pageno'] > 1:\n # request the second page (and more pages) needs 'o' and 'api' arguments\n params['data']['o'] = 'json'\n params['data']['api'] = 'd.js'\n\n # initial page does not have additional data in the input form\n if params['pageno'] > 2:\n # request the third page (and more pages) some more arguments\n params['data']['nextParams'] = ''\n params['data']['v'] = ''\n params['data']['vqd'] = ''\n\n region_code = get_region_code(params['language'], supported_languages)\n if region_code:\n params['data']['kl'] = region_code\n params['cookies']['kl'] = region_code\n\n params['data']['df'] = ''\n if params['time_range'] in time_range_dict:\n params['data']['df'] = time_range_dict[params['time_range']]\n params['cookies']['df'] = time_range_dict[params['time_range']]\n\n logger.debug(\"param data: %s\", params['data'])\n logger.debug(\"param cookies: %s\", params['cookies'])\n return params\n\n\n# get response from search-request\ndef response(resp):\n\n headers_ping = dict_subset(resp.request.headers, ['User-Agent', 'Accept-Encoding', 'Accept', 'Cookie'])\n get(url_ping, headers=headers_ping)\n\n if resp.status_code == 303:\n return []\n\n results = []\n doc = fromstring(resp.text)\n\n result_table = eval_xpath(doc, '//html/body/form/div[@class=\"filters\"]/table')\n if not len(result_table) >= 3:\n # no more results\n return []\n result_table = result_table[2]\n\n tr_rows = eval_xpath(result_table, './/tr')\n\n # In the last <tr> is the form of the 'previous/next page' links\n tr_rows = tr_rows[:-1]\n\n len_tr_rows = len(tr_rows)\n offset = 0\n\n while len_tr_rows >= offset + 4:\n\n # assemble table rows we need to scrap\n tr_title = tr_rows[offset]\n tr_content = tr_rows[offset + 1]\n offset += 4\n\n # ignore sponsored Adds <tr class=\"result-sponsored\">\n if tr_content.get('class') == 'result-sponsored':\n continue\n\n a_tag = eval_xpath_getindex(tr_title, './/td//a[@class=\"result-link\"]', 0, None)\n if a_tag is None:\n continue\n\n td_content = eval_xpath_getindex(tr_content, './/td[@class=\"result-snippet\"]', 0, None)\n if td_content is None:\n continue\n\n results.append(\n {\n 'title': a_tag.text_content(),\n 'content': extract_text(td_content),\n 'url': a_tag.get('href'),\n }\n )\n\n return results\n\n\n# get supported languages from their site\ndef _fetch_supported_languages(resp):\n\n # response is a js file with regions as an embedded object\n response_page = 
resp.text\n response_page = response_page[response_page.find('regions:{') + 8 :]\n response_page = response_page[: response_page.find('}') + 1]\n\n regions_json = loads(response_page)\n supported_languages = map((lambda x: x[3:] + '-' + x[:2].upper()), regions_json.keys())\n\n return list(supported_languages)\n", "path": "searx/engines/duckduckgo.py"}]} | 2,844 | 134 |
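The golden diff for the record above is a one-line change: the engine's `request()` starts sending `Referer: https://lite.duckduckgo.com/` alongside the existing `Content-Type` header. A hedged sketch of how that could be checked in place — it assumes a searxng checkout with its dependencies importable, and the `params` skeleton plus the manually injected `logger`/`supported_languages` attributes (normally set by searx's engine loader) are assumptions for illustration:

```python
# Sketch only: verify the patched duckduckgo.request() sets the Referer header.
# Assumes a searxng checkout (with lxml, httpx, etc.) on PYTHONPATH.
import logging

from searx.engines import duckduckgo

# These attributes are normally injected by searx's engine loader at startup.
duckduckgo.logger = logging.getLogger("ddg-sketch")
duckduckgo.supported_languages = []

# Minimal params dict in the shape searx passes to engine request() functions.
params = {
    "url": "", "method": "", "headers": {}, "data": {}, "cookies": {},
    "pageno": 1, "language": "all", "time_range": None,
}
duckduckgo.request("hello world", params)

assert params["headers"]["Referer"] == "https://lite.duckduckgo.com/"
print("Referer header is set:", params["headers"]["Referer"])
```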
gh_patches_debug_3049 | rasdani/github-patches | git_diff | translate__pootle-5179 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Severe performance degradation of sync_stores
Earlier today we updated our production translation server (merged 'Raw font' PR branch with master and switched to master). So this release includes recent changes related to optimizations. Immediately after that our sync cycle time increased from the typical 18-20 minutes to 1.5 hours. I had to write a tool to analyze and compare our logs from past sync cycles and extract timing information. Here's the output:

Some comments on the screenshot. It compares the main steps of our sync cycle across four logs (vertical columns). The first log is from a morning run, some time before the release. The second one is soon after the release, and the last two are the most recent ones. As one might see, the `pull-ts` step is the culprit. It went from a mere 3 minutes up to more than an hour. During this step all we do is run `manage.py sync_stores --skip-missing --project=<project_id>` and wait for it to finish.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_store/syncer.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import logging
10 import os
11 from collections import namedtuple
12
13 from translate.storage.factory import getclass
14
15 from django.utils.functional import cached_property
16
17 from pootle.core.delegate import format_classes
18 from pootle.core.log import log
19 from pootle.core.url_helpers import split_pootle_path
20
21 from .models import Unit
22 from .util import get_change_str
23
24
25 class UnitSyncer(object):
26
27 def __init__(self, unit):
28 self.unit = unit
29
30 @property
31 def context(self):
32 return self.unit.getcontext()
33
34 @property
35 def developer_notes(self):
36 return self.unit.getnotes(origin="developer")
37
38 @property
39 def isfuzzy(self):
40 return self.unit.isfuzzy()
41
42 @property
43 def isobsolete(self):
44 return self.unit.isobsolete()
45
46 @property
47 def locations(self):
48 return self.unit.getlocations()
49
50 @property
51 def source(self):
52 return self.unit.source
53
54 @property
55 def target(self):
56 return self.unit.target
57
58 @property
59 def translator_notes(self):
60 return self.unit.getnotes(origin="translator")
61
62 @property
63 def unitid(self):
64 return self.unit.getid()
65
66 @property
67 def unit_class(self):
68 return self.unit.store.syncer.unit_class
69
70 def convert(self, unitclass=None):
71 newunit = self.create_unit(
72 unitclass or self.unit_class)
73 self.set_target(newunit)
74 self.set_fuzzy(newunit)
75 self.set_locations(newunit)
76 self.set_developer_notes(newunit)
77 self.set_translator_notes(newunit)
78 self.set_unitid(newunit)
79 self.set_context(newunit)
80 self.set_obsolete(newunit)
81 return newunit
82
83 def create_unit(self, unitclass):
84 return unitclass(self.source)
85
86 def set_context(self, newunit):
87 newunit.setcontext(self.context)
88
89 def set_developer_notes(self, newunit):
90 notes = self.developer_notes
91 if notes:
92 newunit.addnote(notes, origin="developer")
93
94 def set_fuzzy(self, newunit):
95 newunit.markfuzzy(self.isfuzzy)
96
97 def set_locations(self, newunit):
98 locations = self.locations
99 if locations:
100 newunit.addlocations(locations)
101
102 def set_obsolete(self, newunit):
103 if self.isobsolete:
104 newunit.makeobsolete()
105
106 def set_target(self, newunit):
107 newunit.target = self.target
108
109 def set_translator_notes(self, newunit):
110 notes = self.translator_notes
111 if notes:
112 newunit.addnote(notes, origin="translator")
113
114 def set_unitid(self, newunit):
115 newunit.setid(self.unitid)
116
117
118 class StoreSyncer(object):
119 unit_sync_class = UnitSyncer
120
121 def __init__(self, store):
122 self.store = store
123
124 @cached_property
125 def disk_store(self):
126 return self.store.file.store
127
128 @property
129 def translation_project(self):
130 return self.store.translation_project
131
132 @property
133 def language(self):
134 return self.translation_project.language
135
136 @property
137 def project(self):
138 return self.translation_project.project
139
140 @property
141 def source_language(self):
142 return self.project.source_language
143
144 @property
145 def store_file_path(self):
146 return os.path.join(
147 self.translation_project.abs_real_path,
148 *split_pootle_path(self.store.pootle_path)[2:])
149
150 @property
151 def relative_file_path(self):
152 path_parts = split_pootle_path(self.store.pootle_path)
153 path_prefix = [path_parts[1]]
154 if self.project.get_treestyle() != "gnu":
155 path_prefix.append(path_parts[0])
156 return os.path.join(*(path_prefix + list(path_parts[2:])))
157
158 @property
159 def unit_class(self):
160 return self.file_class.UnitClass
161
162 @cached_property
163 def file_class(self):
164 # get a plugin adapted file_class
165 fileclass = format_classes.gather().get(
166 str(self.store.filetype.extension))
167 if fileclass:
168 return fileclass
169 if self.store.is_template:
170 # namedtuple is equiv here of object() with name attr
171 return self._getclass(
172 namedtuple("instance", "name")(
173 name=".".join(
174 [os.path.splitext(self.store.name)[0],
175 str(self.store.filetype.extension)])))
176 return self._getclass(self.store)
177
178 def convert(self, fileclass=None):
179 """export to fileclass"""
180 fileclass = fileclass or self.file_class
181 logging.debug(
182 u"Converting %s to %s",
183 self.store.pootle_path,
184 fileclass)
185 output = fileclass()
186 output.settargetlanguage(self.language.code)
187 # FIXME: we should add some headers
188 for unit in self.store.units.iterator():
189 output.addunit(
190 self.unit_sync_class(unit).convert(output.UnitClass))
191 return output
192
193 def _getclass(self, obj):
194 try:
195 return getclass(obj)
196 except ValueError:
197 raise ValueError(
198 "Unable to find conversion class for Store '%s'"
199 % self.store.name)
200
201 def get_new_units(self, old_ids, new_ids):
202 return self.store.findid_bulk(
203 [self.dbid_index.get(uid)
204 for uid
205 in new_ids - old_ids])
206
207 def get_units_to_obsolete(self, old_ids, new_ids):
208 for uid in old_ids - new_ids:
209 unit = self.disk_store.findid(uid)
210 if unit and not unit.isobsolete():
211 yield unit
212
213 def obsolete_unit(self, unit, conservative):
214 deleted = not unit.istranslated()
215 obsoleted = (
216 not deleted
217 and not conservative)
218 if obsoleted:
219 unit.makeobsolete()
220 deleted = not unit.isobsolete()
221 if deleted:
222 del unit
223 return obsoleted, deleted
224
225 def update_structure(self, obsolete_units, new_units, conservative):
226 obsolete = 0
227 deleted = 0
228 added = 0
229 for unit in obsolete_units:
230 _obsolete, _deleted = self.obsolete_unit(unit, conservative)
231 if _obsolete:
232 obsolete += 1
233 if _deleted:
234 deleted += 1
235 for unit in new_units:
236 newunit = unit.convert(self.disk_store.UnitClass)
237 self.disk_store.addunit(newunit)
238 added += 1
239 return obsolete, deleted, added
240
241 def create_store_file(self, last_revision, user):
242 logging.debug(u"Creating file %s", self.store.pootle_path)
243 store = self.convert()
244 if not os.path.exists(os.path.dirname(self.store_file_path)):
245 os.makedirs(os.path.dirname(self.store_file_path))
246 self.store.file = self.relative_file_path
247 store.savefile(self.store_file_path)
248 log(u"Created file for %s [revision: %d]" %
249 (self.store.pootle_path, last_revision))
250 self.update_store_header(user=user)
251 self.store.file.savestore()
252 self.store.file_mtime = self.store.get_file_mtime()
253 self.store.last_sync_revision = last_revision
254 self.store.save()
255
256 def update_newer(self, last_revision):
257 return (
258 not self.store.file.exists()
259 or (last_revision >= self.store.last_sync_revision))
260
261 @cached_property
262 def dbid_index(self):
263 """build a quick mapping index between unit ids and database ids"""
264 return dict(
265 self.store.unit_set.live().values_list('unitid', 'id'))
266
267 def sync(self, update_structure=False, conservative=True,
268 user=None, only_newer=True):
269 last_revision = self.store.get_max_unit_revision()
270
271 # TODO only_newer -> not force
272 if only_newer and not self.update_newer(last_revision):
273 logging.info(
274 u"[sync] No updates for %s after [revision: %d]",
275 self.store.pootle_path, self.store.last_sync_revision)
276 return
277
278 if not self.store.file.exists():
279 self.create_store_file(last_revision, user)
280 return
281
282 if conservative and self.store.is_template:
283 return
284
285 file_changed, changes = self.sync_store(
286 last_revision,
287 update_structure,
288 conservative)
289 self.save_store(
290 last_revision,
291 user,
292 changes,
293 (file_changed or not conservative))
294
295 def sync_store(self, last_revision, update_structure, conservative):
296 logging.info(u"Syncing %s", self.store.pootle_path)
297 old_ids = set(self.disk_store.getids())
298 new_ids = set(self.dbid_index.keys())
299 file_changed = False
300 changes = {}
301 if update_structure:
302 obsolete_units = self.get_units_to_obsolete(old_ids, new_ids)
303 new_units = self.get_new_units(old_ids, new_ids)
304 if obsolete_units or new_units:
305 file_changed = True
306 (changes['obsolete'],
307 changes['deleted'],
308 changes['added']) = self.update_structure(
309 obsolete_units,
310 new_units,
311 conservative=conservative)
312 changes["updated"] = self.sync_units(
313 self.get_common_units(
314 set(self.dbid_index.get(uid)
315 for uid
316 in old_ids & new_ids),
317 last_revision,
318 conservative))
319 return bool(file_changed or any(changes.values())), changes
320
321 def save_store(self, last_revision, user, changes, updated):
322 # TODO conservative -> not overwrite
323 if updated:
324 self.update_store_header(user=user)
325 self.store.file.savestore()
326 self.store.file_mtime = self.store.get_file_mtime()
327 log(u"[sync] File saved; %s units in %s [revision: %d]" %
328 (get_change_str(changes),
329 self.store.pootle_path,
330 last_revision))
331 else:
332 logging.info(
333 u"[sync] nothing changed in %s [revision: %d]",
334 self.store.pootle_path,
335 last_revision)
336 self.store.last_sync_revision = last_revision
337 self.store.save()
338
339 def get_revision_filters(self, last_revision):
340 # Get units modified after last sync and before this sync started
341 filter_by = {
342 'revision__lte': last_revision,
343 'store': self.store}
344 # Sync all units if first sync
345 if self.store.last_sync_revision is not None:
346 filter_by.update({'revision__gt': self.store.last_sync_revision})
347 return filter_by
348
349 def get_modified_units(self, last_revision):
350 return set(
351 Unit.objects.filter(**self.get_revision_filters(last_revision))
352 .values_list('id', flat=True).distinct()
353 if last_revision > self.store.last_sync_revision
354 else [])
355
356 def get_common_units(self, common_dbids, last_revision, conservative):
357 if conservative:
358 # Sync only modified units
359 common_dbids &= self.get_modified_units(last_revision)
360 return self.store.findid_bulk(list(common_dbids))
361
362 def sync_units(self, units):
363 updated = 0
364 for unit in units:
365 match = self.disk_store.findid(unit.getid())
366 if match is not None:
367 changed = unit.sync(match)
368 if changed:
369 updated += 1
370 return updated
371
372 def update_store_header(self, **kwargs_):
373 self.disk_store.settargetlanguage(self.language.code)
374 self.disk_store.setsourcelanguage(self.source_language.code)
375
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/pootle_store/syncer.py b/pootle/apps/pootle_store/syncer.py
--- a/pootle/apps/pootle_store/syncer.py
+++ b/pootle/apps/pootle_store/syncer.py
@@ -256,7 +256,8 @@
def update_newer(self, last_revision):
return (
not self.store.file.exists()
- or (last_revision >= self.store.last_sync_revision))
+ or last_revision > self.store.last_sync_revision
+ )
@cached_property
def dbid_index(self):
| {"golden_diff": "diff --git a/pootle/apps/pootle_store/syncer.py b/pootle/apps/pootle_store/syncer.py\n--- a/pootle/apps/pootle_store/syncer.py\n+++ b/pootle/apps/pootle_store/syncer.py\n@@ -256,7 +256,8 @@\n def update_newer(self, last_revision):\n return (\n not self.store.file.exists()\n- or (last_revision >= self.store.last_sync_revision))\n+ or last_revision > self.store.last_sync_revision\n+ )\n \n @cached_property\n def dbid_index(self):\n", "issue": "Severe performance degradation of sync_stores\nEarlier today we updated production translation server (merged 'Raw font' PR branch with master and switched to master). So this release includes recent changes related to optimizations. Immediately after that our sync cycle time increased from typical 18-20 minutes to 1.5 hours. I had to write a tool to analyze and compare our logs from past sync cycles and extract timing information. Here's the output:\n\n\n\nSome comments on the screenshot. It compares our main steps of sync cycle between four logs (vertical columns). The first log is from a morning run, some time before the release. The second one is soon after the release, and the last two are most recent ones. As one might see, the `pull-ts` step is the culprit. It went from mere 3 minutes up to more than an hour. During this step all we do is run `manage.py sync_stores --skip-missing --project=<project_id>` and wait for it to finish.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport logging\nimport os\nfrom collections import namedtuple\n\nfrom translate.storage.factory import getclass\n\nfrom django.utils.functional import cached_property\n\nfrom pootle.core.delegate import format_classes\nfrom pootle.core.log import log\nfrom pootle.core.url_helpers import split_pootle_path\n\nfrom .models import Unit\nfrom .util import get_change_str\n\n\nclass UnitSyncer(object):\n\n def __init__(self, unit):\n self.unit = unit\n\n @property\n def context(self):\n return self.unit.getcontext()\n\n @property\n def developer_notes(self):\n return self.unit.getnotes(origin=\"developer\")\n\n @property\n def isfuzzy(self):\n return self.unit.isfuzzy()\n\n @property\n def isobsolete(self):\n return self.unit.isobsolete()\n\n @property\n def locations(self):\n return self.unit.getlocations()\n\n @property\n def source(self):\n return self.unit.source\n\n @property\n def target(self):\n return self.unit.target\n\n @property\n def translator_notes(self):\n return self.unit.getnotes(origin=\"translator\")\n\n @property\n def unitid(self):\n return self.unit.getid()\n\n @property\n def unit_class(self):\n return self.unit.store.syncer.unit_class\n\n def convert(self, unitclass=None):\n newunit = self.create_unit(\n unitclass or self.unit_class)\n self.set_target(newunit)\n self.set_fuzzy(newunit)\n self.set_locations(newunit)\n self.set_developer_notes(newunit)\n self.set_translator_notes(newunit)\n self.set_unitid(newunit)\n self.set_context(newunit)\n self.set_obsolete(newunit)\n return newunit\n\n def create_unit(self, unitclass):\n return unitclass(self.source)\n\n def set_context(self, newunit):\n newunit.setcontext(self.context)\n\n def set_developer_notes(self, newunit):\n notes = self.developer_notes\n if notes:\n newunit.addnote(notes, origin=\"developer\")\n\n def 
set_fuzzy(self, newunit):\n newunit.markfuzzy(self.isfuzzy)\n\n def set_locations(self, newunit):\n locations = self.locations\n if locations:\n newunit.addlocations(locations)\n\n def set_obsolete(self, newunit):\n if self.isobsolete:\n newunit.makeobsolete()\n\n def set_target(self, newunit):\n newunit.target = self.target\n\n def set_translator_notes(self, newunit):\n notes = self.translator_notes\n if notes:\n newunit.addnote(notes, origin=\"translator\")\n\n def set_unitid(self, newunit):\n newunit.setid(self.unitid)\n\n\nclass StoreSyncer(object):\n unit_sync_class = UnitSyncer\n\n def __init__(self, store):\n self.store = store\n\n @cached_property\n def disk_store(self):\n return self.store.file.store\n\n @property\n def translation_project(self):\n return self.store.translation_project\n\n @property\n def language(self):\n return self.translation_project.language\n\n @property\n def project(self):\n return self.translation_project.project\n\n @property\n def source_language(self):\n return self.project.source_language\n\n @property\n def store_file_path(self):\n return os.path.join(\n self.translation_project.abs_real_path,\n *split_pootle_path(self.store.pootle_path)[2:])\n\n @property\n def relative_file_path(self):\n path_parts = split_pootle_path(self.store.pootle_path)\n path_prefix = [path_parts[1]]\n if self.project.get_treestyle() != \"gnu\":\n path_prefix.append(path_parts[0])\n return os.path.join(*(path_prefix + list(path_parts[2:])))\n\n @property\n def unit_class(self):\n return self.file_class.UnitClass\n\n @cached_property\n def file_class(self):\n # get a plugin adapted file_class\n fileclass = format_classes.gather().get(\n str(self.store.filetype.extension))\n if fileclass:\n return fileclass\n if self.store.is_template:\n # namedtuple is equiv here of object() with name attr\n return self._getclass(\n namedtuple(\"instance\", \"name\")(\n name=\".\".join(\n [os.path.splitext(self.store.name)[0],\n str(self.store.filetype.extension)])))\n return self._getclass(self.store)\n\n def convert(self, fileclass=None):\n \"\"\"export to fileclass\"\"\"\n fileclass = fileclass or self.file_class\n logging.debug(\n u\"Converting %s to %s\",\n self.store.pootle_path,\n fileclass)\n output = fileclass()\n output.settargetlanguage(self.language.code)\n # FIXME: we should add some headers\n for unit in self.store.units.iterator():\n output.addunit(\n self.unit_sync_class(unit).convert(output.UnitClass))\n return output\n\n def _getclass(self, obj):\n try:\n return getclass(obj)\n except ValueError:\n raise ValueError(\n \"Unable to find conversion class for Store '%s'\"\n % self.store.name)\n\n def get_new_units(self, old_ids, new_ids):\n return self.store.findid_bulk(\n [self.dbid_index.get(uid)\n for uid\n in new_ids - old_ids])\n\n def get_units_to_obsolete(self, old_ids, new_ids):\n for uid in old_ids - new_ids:\n unit = self.disk_store.findid(uid)\n if unit and not unit.isobsolete():\n yield unit\n\n def obsolete_unit(self, unit, conservative):\n deleted = not unit.istranslated()\n obsoleted = (\n not deleted\n and not conservative)\n if obsoleted:\n unit.makeobsolete()\n deleted = not unit.isobsolete()\n if deleted:\n del unit\n return obsoleted, deleted\n\n def update_structure(self, obsolete_units, new_units, conservative):\n obsolete = 0\n deleted = 0\n added = 0\n for unit in obsolete_units:\n _obsolete, _deleted = self.obsolete_unit(unit, conservative)\n if _obsolete:\n obsolete += 1\n if _deleted:\n deleted += 1\n for unit in new_units:\n newunit = 
unit.convert(self.disk_store.UnitClass)\n self.disk_store.addunit(newunit)\n added += 1\n return obsolete, deleted, added\n\n def create_store_file(self, last_revision, user):\n logging.debug(u\"Creating file %s\", self.store.pootle_path)\n store = self.convert()\n if not os.path.exists(os.path.dirname(self.store_file_path)):\n os.makedirs(os.path.dirname(self.store_file_path))\n self.store.file = self.relative_file_path\n store.savefile(self.store_file_path)\n log(u\"Created file for %s [revision: %d]\" %\n (self.store.pootle_path, last_revision))\n self.update_store_header(user=user)\n self.store.file.savestore()\n self.store.file_mtime = self.store.get_file_mtime()\n self.store.last_sync_revision = last_revision\n self.store.save()\n\n def update_newer(self, last_revision):\n return (\n not self.store.file.exists()\n or (last_revision >= self.store.last_sync_revision))\n\n @cached_property\n def dbid_index(self):\n \"\"\"build a quick mapping index between unit ids and database ids\"\"\"\n return dict(\n self.store.unit_set.live().values_list('unitid', 'id'))\n\n def sync(self, update_structure=False, conservative=True,\n user=None, only_newer=True):\n last_revision = self.store.get_max_unit_revision()\n\n # TODO only_newer -> not force\n if only_newer and not self.update_newer(last_revision):\n logging.info(\n u\"[sync] No updates for %s after [revision: %d]\",\n self.store.pootle_path, self.store.last_sync_revision)\n return\n\n if not self.store.file.exists():\n self.create_store_file(last_revision, user)\n return\n\n if conservative and self.store.is_template:\n return\n\n file_changed, changes = self.sync_store(\n last_revision,\n update_structure,\n conservative)\n self.save_store(\n last_revision,\n user,\n changes,\n (file_changed or not conservative))\n\n def sync_store(self, last_revision, update_structure, conservative):\n logging.info(u\"Syncing %s\", self.store.pootle_path)\n old_ids = set(self.disk_store.getids())\n new_ids = set(self.dbid_index.keys())\n file_changed = False\n changes = {}\n if update_structure:\n obsolete_units = self.get_units_to_obsolete(old_ids, new_ids)\n new_units = self.get_new_units(old_ids, new_ids)\n if obsolete_units or new_units:\n file_changed = True\n (changes['obsolete'],\n changes['deleted'],\n changes['added']) = self.update_structure(\n obsolete_units,\n new_units,\n conservative=conservative)\n changes[\"updated\"] = self.sync_units(\n self.get_common_units(\n set(self.dbid_index.get(uid)\n for uid\n in old_ids & new_ids),\n last_revision,\n conservative))\n return bool(file_changed or any(changes.values())), changes\n\n def save_store(self, last_revision, user, changes, updated):\n # TODO conservative -> not overwrite\n if updated:\n self.update_store_header(user=user)\n self.store.file.savestore()\n self.store.file_mtime = self.store.get_file_mtime()\n log(u\"[sync] File saved; %s units in %s [revision: %d]\" %\n (get_change_str(changes),\n self.store.pootle_path,\n last_revision))\n else:\n logging.info(\n u\"[sync] nothing changed in %s [revision: %d]\",\n self.store.pootle_path,\n last_revision)\n self.store.last_sync_revision = last_revision\n self.store.save()\n\n def get_revision_filters(self, last_revision):\n # Get units modified after last sync and before this sync started\n filter_by = {\n 'revision__lte': last_revision,\n 'store': self.store}\n # Sync all units if first sync\n if self.store.last_sync_revision is not None:\n filter_by.update({'revision__gt': self.store.last_sync_revision})\n return filter_by\n\n def 
get_modified_units(self, last_revision):\n return set(\n Unit.objects.filter(**self.get_revision_filters(last_revision))\n .values_list('id', flat=True).distinct()\n if last_revision > self.store.last_sync_revision\n else [])\n\n def get_common_units(self, common_dbids, last_revision, conservative):\n if conservative:\n # Sync only modified units\n common_dbids &= self.get_modified_units(last_revision)\n return self.store.findid_bulk(list(common_dbids))\n\n def sync_units(self, units):\n updated = 0\n for unit in units:\n match = self.disk_store.findid(unit.getid())\n if match is not None:\n changed = unit.sync(match)\n if changed:\n updated += 1\n return updated\n\n def update_store_header(self, **kwargs_):\n self.disk_store.settargetlanguage(self.language.code)\n self.disk_store.setsourcelanguage(self.source_language.code)\n", "path": "pootle/apps/pootle_store/syncer.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport logging\nimport os\nfrom collections import namedtuple\n\nfrom translate.storage.factory import getclass\n\nfrom django.utils.functional import cached_property\n\nfrom pootle.core.delegate import format_classes\nfrom pootle.core.log import log\nfrom pootle.core.url_helpers import split_pootle_path\n\nfrom .models import Unit\nfrom .util import get_change_str\n\n\nclass UnitSyncer(object):\n\n def __init__(self, unit):\n self.unit = unit\n\n @property\n def context(self):\n return self.unit.getcontext()\n\n @property\n def developer_notes(self):\n return self.unit.getnotes(origin=\"developer\")\n\n @property\n def isfuzzy(self):\n return self.unit.isfuzzy()\n\n @property\n def isobsolete(self):\n return self.unit.isobsolete()\n\n @property\n def locations(self):\n return self.unit.getlocations()\n\n @property\n def source(self):\n return self.unit.source\n\n @property\n def target(self):\n return self.unit.target\n\n @property\n def translator_notes(self):\n return self.unit.getnotes(origin=\"translator\")\n\n @property\n def unitid(self):\n return self.unit.getid()\n\n @property\n def unit_class(self):\n return self.unit.store.syncer.unit_class\n\n def convert(self, unitclass=None):\n newunit = self.create_unit(\n unitclass or self.unit_class)\n self.set_target(newunit)\n self.set_fuzzy(newunit)\n self.set_locations(newunit)\n self.set_developer_notes(newunit)\n self.set_translator_notes(newunit)\n self.set_unitid(newunit)\n self.set_context(newunit)\n self.set_obsolete(newunit)\n return newunit\n\n def create_unit(self, unitclass):\n return unitclass(self.source)\n\n def set_context(self, newunit):\n newunit.setcontext(self.context)\n\n def set_developer_notes(self, newunit):\n notes = self.developer_notes\n if notes:\n newunit.addnote(notes, origin=\"developer\")\n\n def set_fuzzy(self, newunit):\n newunit.markfuzzy(self.isfuzzy)\n\n def set_locations(self, newunit):\n locations = self.locations\n if locations:\n newunit.addlocations(locations)\n\n def set_obsolete(self, newunit):\n if self.isobsolete:\n newunit.makeobsolete()\n\n def set_target(self, newunit):\n newunit.target = self.target\n\n def set_translator_notes(self, newunit):\n notes = self.translator_notes\n if notes:\n newunit.addnote(notes, origin=\"translator\")\n\n def set_unitid(self, newunit):\n newunit.setid(self.unitid)\n\n\nclass 
StoreSyncer(object):\n unit_sync_class = UnitSyncer\n\n def __init__(self, store):\n self.store = store\n\n @cached_property\n def disk_store(self):\n return self.store.file.store\n\n @property\n def translation_project(self):\n return self.store.translation_project\n\n @property\n def language(self):\n return self.translation_project.language\n\n @property\n def project(self):\n return self.translation_project.project\n\n @property\n def source_language(self):\n return self.project.source_language\n\n @property\n def store_file_path(self):\n return os.path.join(\n self.translation_project.abs_real_path,\n *split_pootle_path(self.store.pootle_path)[2:])\n\n @property\n def relative_file_path(self):\n path_parts = split_pootle_path(self.store.pootle_path)\n path_prefix = [path_parts[1]]\n if self.project.get_treestyle() != \"gnu\":\n path_prefix.append(path_parts[0])\n return os.path.join(*(path_prefix + list(path_parts[2:])))\n\n @property\n def unit_class(self):\n return self.file_class.UnitClass\n\n @cached_property\n def file_class(self):\n # get a plugin adapted file_class\n fileclass = format_classes.gather().get(\n str(self.store.filetype.extension))\n if fileclass:\n return fileclass\n if self.store.is_template:\n # namedtuple is equiv here of object() with name attr\n return self._getclass(\n namedtuple(\"instance\", \"name\")(\n name=\".\".join(\n [os.path.splitext(self.store.name)[0],\n str(self.store.filetype.extension)])))\n return self._getclass(self.store)\n\n def convert(self, fileclass=None):\n \"\"\"export to fileclass\"\"\"\n fileclass = fileclass or self.file_class\n logging.debug(\n u\"Converting %s to %s\",\n self.store.pootle_path,\n fileclass)\n output = fileclass()\n output.settargetlanguage(self.language.code)\n # FIXME: we should add some headers\n for unit in self.store.units.iterator():\n output.addunit(\n self.unit_sync_class(unit).convert(output.UnitClass))\n return output\n\n def _getclass(self, obj):\n try:\n return getclass(obj)\n except ValueError:\n raise ValueError(\n \"Unable to find conversion class for Store '%s'\"\n % self.store.name)\n\n def get_new_units(self, old_ids, new_ids):\n return self.store.findid_bulk(\n [self.dbid_index.get(uid)\n for uid\n in new_ids - old_ids])\n\n def get_units_to_obsolete(self, old_ids, new_ids):\n for uid in old_ids - new_ids:\n unit = self.disk_store.findid(uid)\n if unit and not unit.isobsolete():\n yield unit\n\n def obsolete_unit(self, unit, conservative):\n deleted = not unit.istranslated()\n obsoleted = (\n not deleted\n and not conservative)\n if obsoleted:\n unit.makeobsolete()\n deleted = not unit.isobsolete()\n if deleted:\n del unit\n return obsoleted, deleted\n\n def update_structure(self, obsolete_units, new_units, conservative):\n obsolete = 0\n deleted = 0\n added = 0\n for unit in obsolete_units:\n _obsolete, _deleted = self.obsolete_unit(unit, conservative)\n if _obsolete:\n obsolete += 1\n if _deleted:\n deleted += 1\n for unit in new_units:\n newunit = unit.convert(self.disk_store.UnitClass)\n self.disk_store.addunit(newunit)\n added += 1\n return obsolete, deleted, added\n\n def create_store_file(self, last_revision, user):\n logging.debug(u\"Creating file %s\", self.store.pootle_path)\n store = self.convert()\n if not os.path.exists(os.path.dirname(self.store_file_path)):\n os.makedirs(os.path.dirname(self.store_file_path))\n self.store.file = self.relative_file_path\n store.savefile(self.store_file_path)\n log(u\"Created file for %s [revision: %d]\" %\n (self.store.pootle_path, last_revision))\n 
self.update_store_header(user=user)\n self.store.file.savestore()\n self.store.file_mtime = self.store.get_file_mtime()\n self.store.last_sync_revision = last_revision\n self.store.save()\n\n def update_newer(self, last_revision):\n return (\n not self.store.file.exists()\n or last_revision > self.store.last_sync_revision\n )\n\n @cached_property\n def dbid_index(self):\n \"\"\"build a quick mapping index between unit ids and database ids\"\"\"\n return dict(\n self.store.unit_set.live().values_list('unitid', 'id'))\n\n def sync(self, update_structure=False, conservative=True,\n user=None, only_newer=True):\n last_revision = self.store.get_max_unit_revision()\n\n # TODO only_newer -> not force\n if only_newer and not self.update_newer(last_revision):\n logging.info(\n u\"[sync] No updates for %s after [revision: %d]\",\n self.store.pootle_path, self.store.last_sync_revision)\n return\n\n if not self.store.file.exists():\n self.create_store_file(last_revision, user)\n return\n\n if conservative and self.store.is_template:\n return\n\n file_changed, changes = self.sync_store(\n last_revision,\n update_structure,\n conservative)\n self.save_store(\n last_revision,\n user,\n changes,\n (file_changed or not conservative))\n\n def sync_store(self, last_revision, update_structure, conservative):\n logging.info(u\"Syncing %s\", self.store.pootle_path)\n old_ids = set(self.disk_store.getids())\n new_ids = set(self.dbid_index.keys())\n file_changed = False\n changes = {}\n if update_structure:\n obsolete_units = self.get_units_to_obsolete(old_ids, new_ids)\n new_units = self.get_new_units(old_ids, new_ids)\n if obsolete_units or new_units:\n file_changed = True\n (changes['obsolete'],\n changes['deleted'],\n changes['added']) = self.update_structure(\n obsolete_units,\n new_units,\n conservative=conservative)\n changes[\"updated\"] = self.sync_units(\n self.get_common_units(\n set(self.dbid_index.get(uid)\n for uid\n in old_ids & new_ids),\n last_revision,\n conservative))\n return bool(file_changed or any(changes.values())), changes\n\n def save_store(self, last_revision, user, changes, updated):\n # TODO conservative -> not overwrite\n if updated:\n self.update_store_header(user=user)\n self.store.file.savestore()\n self.store.file_mtime = self.store.get_file_mtime()\n log(u\"[sync] File saved; %s units in %s [revision: %d]\" %\n (get_change_str(changes),\n self.store.pootle_path,\n last_revision))\n else:\n logging.info(\n u\"[sync] nothing changed in %s [revision: %d]\",\n self.store.pootle_path,\n last_revision)\n self.store.last_sync_revision = last_revision\n self.store.save()\n\n def get_revision_filters(self, last_revision):\n # Get units modified after last sync and before this sync started\n filter_by = {\n 'revision__lte': last_revision,\n 'store': self.store}\n # Sync all units if first sync\n if self.store.last_sync_revision is not None:\n filter_by.update({'revision__gt': self.store.last_sync_revision})\n return filter_by\n\n def get_modified_units(self, last_revision):\n return set(\n Unit.objects.filter(**self.get_revision_filters(last_revision))\n .values_list('id', flat=True).distinct()\n if last_revision > self.store.last_sync_revision\n else [])\n\n def get_common_units(self, common_dbids, last_revision, conservative):\n if conservative:\n # Sync only modified units\n common_dbids &= self.get_modified_units(last_revision)\n return self.store.findid_bulk(list(common_dbids))\n\n def sync_units(self, units):\n updated = 0\n for unit in units:\n match = 
self.disk_store.findid(unit.getid())\n if match is not None:\n changed = unit.sync(match)\n if changed:\n updated += 1\n return updated\n\n def update_store_header(self, **kwargs_):\n self.disk_store.settargetlanguage(self.language.code)\n self.disk_store.setsourcelanguage(self.source_language.code)\n", "path": "pootle/apps/pootle_store/syncer.py"}]} | 4,085 | 135 |
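The fix in the record above is again a single comparison: `StoreSyncer.update_newer()` switches from `>=` to `>`. With `>=`, a store that was already synced up to the newest unit revision still reports itself as needing a sync, so every store is re-examined on every `sync_stores` run (the on-disk file is loaded and diffed even when nothing changed), which is consistent with the hour-long `pull-ts` step described in the issue. A standalone sketch of the predicate difference (the `FakeStore` class and its attribute names are illustrative, not Pootle code):

```python
# Sketch only: the behavioural difference behind the one-character fix.
class FakeStore:
    """Stand-in for a Pootle Store; attribute names are illustrative."""
    def __init__(self, last_sync_revision, file_exists=True):
        self.last_sync_revision = last_sync_revision
        self.file_exists = file_exists

def update_newer_before(store, last_revision):
    # old behaviour: ">=" treats an already up-to-date store as stale
    return (not store.file_exists) or last_revision >= store.last_sync_revision

def update_newer_after(store, last_revision):
    # fixed behaviour: ">" skips stores that are already synced
    return (not store.file_exists) or last_revision > store.last_sync_revision

store = FakeStore(last_sync_revision=42)      # already synced up to revision 42
print(update_newer_before(store, 42))  # True  -> needless re-sync every cycle
print(update_newer_after(store, 42))   # False -> store is skipped
```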
gh_patches_debug_7525 | rasdani/github-patches | git_diff | conda-forge__staged-recipes-261 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Latest conda-smithy is preventing poliastro feedstock creation
```
Repository registered at github, now call 'conda smithy register-ci'
Making feedstock for poliastro
/Users/travis/build/conda-forge/staged-recipes/recipes/poliastro has some lint:
Selectors are suggested to take a " # [<selector>]" form.
Traceback (most recent call last):
File ".CI/create_feedstocks.py", line 93, in <module>
subprocess.check_call(['conda', 'smithy', 'recipe-lint', recipe_dir])
File "/Users/travis/miniconda/lib/python3.5/subprocess.py", line 584, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['conda', 'smithy', 'recipe-lint', '/Users/travis/build/conda-forge/staged-recipes/recipes/poliastro']' returned non-zero exit status 1
```
I am working on that.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `.CI/create_feedstocks.py`
Content:
```
1 #!/usr/bin/env python
2 """
3 Convert all recipes into feedstocks.
4
5 This script is to be run in a TravisCI context, with all secret environment variables defined (BINSTAR_TOKEN, GH_TOKEN)
6 Such as:
7
8 export GH_TOKEN=$(cat ~/.conda-smithy/github.token)
9
10 """
11 from __future__ import print_function
12
13 from conda_smithy.github import gh_token
14 from contextlib import contextmanager
15 from github import Github, GithubException
16 import os.path
17 import shutil
18 import subprocess
19 import tempfile
20
21
22 # Enable DEBUG to run the diagnostics, without actually creating new feedstocks.
23 DEBUG = False
24
25
26 def list_recipes():
27 recipe_directory_name = 'recipes'
28 if os.path.isdir(recipe_directory_name):
29 recipes = os.listdir(recipe_directory_name)
30 else:
31 recipes = []
32
33 for recipe_dir in recipes:
34 # We don't list the "example" feedstock. It is an example, and is there
35 # to be helpful.
36 if recipe_dir.startswith('example'):
37 continue
38 path = os.path.abspath(os.path.join(recipe_directory_name, recipe_dir))
39 yield path, recipe_dir
40
41
42 @contextmanager
43 def tmp_dir(*args, **kwargs):
44 temp_dir = tempfile.mkdtemp(*args, **kwargs)
45 try:
46 yield temp_dir
47 finally:
48 shutil.rmtree(temp_dir)
49
50
51 def repo_exists(organization, name):
52 token = gh_token()
53 gh = Github(token)
54 # Use the organization provided.
55 org = gh.get_organization(organization)
56 try:
57 org.get_repo(name)
58 return True
59 except GithubException as e:
60 if e.status == 404:
61 return False
62 raise
63
64
65 if __name__ == '__main__':
66 is_merged_pr = (os.environ.get('TRAVIS_BRANCH') == 'master' and os.environ.get('TRAVIS_PULL_REQUEST') == 'false')
67
68 smithy_conf = os.path.expanduser('~/.conda-smithy')
69 if not os.path.exists(smithy_conf):
70 os.mkdir(smithy_conf)
71
72 def write_token(name, token):
73 with open(os.path.join(smithy_conf, name + '.token'), 'w') as fh:
74 fh.write(token)
75 if 'APPVEYOR_TOKEN' in os.environ:
76 write_token('appveyor', os.environ['APPVEYOR_TOKEN'])
77 if 'CIRCLE_TOKEN' in os.environ:
78 write_token('circle', os.environ['CIRCLE_TOKEN'])
79 if 'GH_TOKEN' in os.environ:
80 write_token('github', os.environ['GH_TOKEN'])
81
82 owner_info = ['--organization', 'conda-forge']
83
84 print('Calculating the recipes which need to be turned into feedstocks.')
85 removed_recipes = []
86 with tmp_dir('__feedstocks') as feedstocks_dir:
87 feedstock_dirs = []
88 for recipe_dir, name in list_recipes():
89 feedstock_dir = os.path.join(feedstocks_dir, name + '-feedstock')
90 os.mkdir(feedstock_dir)
91 print('Making feedstock for {}'.format(name))
92
93 subprocess.check_call(['conda', 'smithy', 'recipe-lint', recipe_dir])
94
95 subprocess.check_call(['conda', 'smithy', 'init', recipe_dir,
96 '--feedstock-directory', feedstock_dir])
97 if not is_merged_pr:
98 # We just want to check that conda-smithy is doing its thing without having any metadata issues.
99 continue
100
101 feedstock_dirs.append([feedstock_dir, name, recipe_dir])
102
103 subprocess.check_call(['git', 'remote', 'add', 'upstream_with_token',
104 'https://conda-forge-admin:{}@github.com/conda-forge/{}'.format(os.environ['GH_TOKEN'],
105 os.path.basename(feedstock_dir))],
106 cwd=feedstock_dir)
107
108 # Sometimes we already have the feedstock created. We need to deal with that case.
109 if repo_exists('conda-forge', os.path.basename(feedstock_dir)):
110 subprocess.check_call(['git', 'fetch', 'upstream_with_token'], cwd=feedstock_dir)
111 subprocess.check_call(['git', 'branch', '-m', 'master', 'old'], cwd=feedstock_dir)
112 try:
113 subprocess.check_call(['git', 'checkout', '-b', 'master', 'upstream_with_token/master'], cwd=feedstock_dir)
114 except subprocess.CalledProcessError:
115 # Sometimes, we have a repo, but there are no commits on it! Just catch that case.
116 subprocess.check_call(['git', 'checkout', '-b' 'master'], cwd=feedstock_dir)
117 else:
118 subprocess.check_call(['conda', 'smithy', 'register-github', feedstock_dir] + owner_info)
119
120 # Break the previous loop to allow the TravisCI registering to take place only once per function call.
121 # Without this, intermittent failiures to synch the TravisCI repos ensue.
122 for feedstock_dir, name, recipe_dir in feedstock_dirs:
123 subprocess.check_call(['conda', 'smithy', 'register-ci', '--feedstock_directory', feedstock_dir] + owner_info)
124
125 subprocess.check_call(['conda', 'smithy', 'rerender'], cwd=feedstock_dir)
126 subprocess.check_call(['git', 'commit', '-am', "Re-render the feedstock after CI registration."], cwd=feedstock_dir)
127 # Capture the output, as it may contain the GH_TOKEN.
128 out = subprocess.check_output(['git', 'push', 'upstream_with_token', 'master'], cwd=feedstock_dir,
129 stderr=subprocess.STDOUT)
130
131 # Remove this recipe from the repo.
132 removed_recipes.append(name)
133 if is_merged_pr:
134 subprocess.check_call(['git', 'rm', '-r', recipe_dir])
135
136 # Commit any removed packages.
137 subprocess.check_call(['git', 'status'])
138 if removed_recipes:
139 subprocess.check_call(['git', 'checkout', os.environ.get('TRAVIS_BRANCH')])
140 msg = ('Removed recipe{s} ({}) after converting into feedstock{s}. '
141 '[ci skip]'.format(', '.join(removed_recipes),
142 s=('s' if len(removed_recipes) > 1 else '')))
143 if is_merged_pr:
144 # Capture the output, as it may contain the GH_TOKEN.
145 out = subprocess.check_output(['git', 'remote', 'add', 'upstream_with_token',
146 'https://conda-forge-admin:{}@github.com/conda-forge/staged-recipes'.format(os.environ['GH_TOKEN'])],
147 stderr=subprocess.STDOUT)
148 subprocess.check_call(['git', 'commit', '-m', msg])
149 # Capture the output, as it may contain the GH_TOKEN.
150 out = subprocess.check_output(['git', 'push', 'upstream_with_token', os.environ.get('TRAVIS_BRANCH')],
151 stderr=subprocess.STDOUT)
152 else:
153 print('Would git commit, with the following message: \n {}'.format(msg))
154
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/.CI/create_feedstocks.py b/.CI/create_feedstocks.py
--- a/.CI/create_feedstocks.py
+++ b/.CI/create_feedstocks.py
@@ -90,8 +90,6 @@
os.mkdir(feedstock_dir)
print('Making feedstock for {}'.format(name))
- subprocess.check_call(['conda', 'smithy', 'recipe-lint', recipe_dir])
-
subprocess.check_call(['conda', 'smithy', 'init', recipe_dir,
'--feedstock-directory', feedstock_dir])
if not is_merged_pr:
| {"golden_diff": "diff --git a/.CI/create_feedstocks.py b/.CI/create_feedstocks.py\n--- a/.CI/create_feedstocks.py\n+++ b/.CI/create_feedstocks.py\n@@ -90,8 +90,6 @@\n os.mkdir(feedstock_dir)\n print('Making feedstock for {}'.format(name))\n \n- subprocess.check_call(['conda', 'smithy', 'recipe-lint', recipe_dir])\n-\n subprocess.check_call(['conda', 'smithy', 'init', recipe_dir,\n '--feedstock-directory', feedstock_dir])\n if not is_merged_pr:\n", "issue": "Latest conda-smithy is prevent poliastro feedstock creation\n```\nRepository registered at github, now call 'conda smithy register-ci'\nMaking feedstock for poliastro\n/Users/travis/build/conda-forge/staged-recipes/recipes/poliastro has some lint:\n Selectors are suggested to take a \" # [<selector>]\" form.\nTraceback (most recent call last):\n File \".CI/create_feedstocks.py\", line 93, in <module>\n subprocess.check_call(['conda', 'smithy', 'recipe-lint', recipe_dir])\n File \"/Users/travis/miniconda/lib/python3.5/subprocess.py\", line 584, in check_call\n raise CalledProcessError(retcode, cmd)\nsubprocess.CalledProcessError: Command '['conda', 'smithy', 'recipe-lint', '/Users/travis/build/conda-forge/staged-recipes/recipes/poliastro']' returned non-zero exit status 1\n```\n\nI am working on that.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nConvert all recipes into feedstocks.\n\nThis script is to be run in a TravisCI context, with all secret environment variables defined (BINSTAR_TOKEN, GH_TOKEN)\nSuch as:\n\n export GH_TOKEN=$(cat ~/.conda-smithy/github.token)\n\n\"\"\"\nfrom __future__ import print_function\n\nfrom conda_smithy.github import gh_token\nfrom contextlib import contextmanager\nfrom github import Github, GithubException\nimport os.path\nimport shutil\nimport subprocess\nimport tempfile\n\n\n# Enable DEBUG to run the diagnostics, without actually creating new feedstocks.\nDEBUG = False\n\n\ndef list_recipes():\n recipe_directory_name = 'recipes'\n if os.path.isdir(recipe_directory_name):\n recipes = os.listdir(recipe_directory_name)\n else:\n recipes = []\n\n for recipe_dir in recipes:\n # We don't list the \"example\" feedstock. 
It is an example, and is there\n # to be helpful.\n if recipe_dir.startswith('example'):\n continue\n path = os.path.abspath(os.path.join(recipe_directory_name, recipe_dir))\n yield path, recipe_dir\n\n\n@contextmanager\ndef tmp_dir(*args, **kwargs):\n temp_dir = tempfile.mkdtemp(*args, **kwargs)\n try:\n yield temp_dir\n finally:\n shutil.rmtree(temp_dir)\n\n\ndef repo_exists(organization, name):\n token = gh_token()\n gh = Github(token)\n # Use the organization provided.\n org = gh.get_organization(organization)\n try:\n org.get_repo(name)\n return True\n except GithubException as e:\n if e.status == 404:\n return False\n raise\n\n\nif __name__ == '__main__':\n is_merged_pr = (os.environ.get('TRAVIS_BRANCH') == 'master' and os.environ.get('TRAVIS_PULL_REQUEST') == 'false')\n\n smithy_conf = os.path.expanduser('~/.conda-smithy')\n if not os.path.exists(smithy_conf):\n os.mkdir(smithy_conf)\n\n def write_token(name, token):\n with open(os.path.join(smithy_conf, name + '.token'), 'w') as fh:\n fh.write(token)\n if 'APPVEYOR_TOKEN' in os.environ:\n write_token('appveyor', os.environ['APPVEYOR_TOKEN'])\n if 'CIRCLE_TOKEN' in os.environ:\n write_token('circle', os.environ['CIRCLE_TOKEN'])\n if 'GH_TOKEN' in os.environ:\n write_token('github', os.environ['GH_TOKEN'])\n\n owner_info = ['--organization', 'conda-forge']\n\n print('Calculating the recipes which need to be turned into feedstocks.')\n removed_recipes = []\n with tmp_dir('__feedstocks') as feedstocks_dir:\n feedstock_dirs = []\n for recipe_dir, name in list_recipes():\n feedstock_dir = os.path.join(feedstocks_dir, name + '-feedstock')\n os.mkdir(feedstock_dir)\n print('Making feedstock for {}'.format(name))\n\n subprocess.check_call(['conda', 'smithy', 'recipe-lint', recipe_dir])\n\n subprocess.check_call(['conda', 'smithy', 'init', recipe_dir,\n '--feedstock-directory', feedstock_dir])\n if not is_merged_pr:\n # We just want to check that conda-smithy is doing its thing without having any metadata issues.\n continue\n\n feedstock_dirs.append([feedstock_dir, name, recipe_dir])\n\n subprocess.check_call(['git', 'remote', 'add', 'upstream_with_token',\n 'https://conda-forge-admin:{}@github.com/conda-forge/{}'.format(os.environ['GH_TOKEN'],\n os.path.basename(feedstock_dir))],\n cwd=feedstock_dir)\n\n # Sometimes we already have the feedstock created. We need to deal with that case.\n if repo_exists('conda-forge', os.path.basename(feedstock_dir)):\n subprocess.check_call(['git', 'fetch', 'upstream_with_token'], cwd=feedstock_dir)\n subprocess.check_call(['git', 'branch', '-m', 'master', 'old'], cwd=feedstock_dir)\n try:\n subprocess.check_call(['git', 'checkout', '-b', 'master', 'upstream_with_token/master'], cwd=feedstock_dir)\n except subprocess.CalledProcessError:\n # Sometimes, we have a repo, but there are no commits on it! 
Just catch that case.\n subprocess.check_call(['git', 'checkout', '-b' 'master'], cwd=feedstock_dir)\n else:\n subprocess.check_call(['conda', 'smithy', 'register-github', feedstock_dir] + owner_info)\n\n # Break the previous loop to allow the TravisCI registering to take place only once per function call.\n # Without this, intermittent failiures to synch the TravisCI repos ensue.\n for feedstock_dir, name, recipe_dir in feedstock_dirs:\n subprocess.check_call(['conda', 'smithy', 'register-ci', '--feedstock_directory', feedstock_dir] + owner_info)\n\n subprocess.check_call(['conda', 'smithy', 'rerender'], cwd=feedstock_dir)\n subprocess.check_call(['git', 'commit', '-am', \"Re-render the feedstock after CI registration.\"], cwd=feedstock_dir)\n # Capture the output, as it may contain the GH_TOKEN.\n out = subprocess.check_output(['git', 'push', 'upstream_with_token', 'master'], cwd=feedstock_dir,\n stderr=subprocess.STDOUT)\n\n # Remove this recipe from the repo.\n removed_recipes.append(name)\n if is_merged_pr:\n subprocess.check_call(['git', 'rm', '-r', recipe_dir])\n\n # Commit any removed packages.\n subprocess.check_call(['git', 'status'])\n if removed_recipes:\n subprocess.check_call(['git', 'checkout', os.environ.get('TRAVIS_BRANCH')])\n msg = ('Removed recipe{s} ({}) after converting into feedstock{s}. '\n '[ci skip]'.format(', '.join(removed_recipes),\n s=('s' if len(removed_recipes) > 1 else '')))\n if is_merged_pr:\n # Capture the output, as it may contain the GH_TOKEN.\n out = subprocess.check_output(['git', 'remote', 'add', 'upstream_with_token',\n 'https://conda-forge-admin:{}@github.com/conda-forge/staged-recipes'.format(os.environ['GH_TOKEN'])],\n stderr=subprocess.STDOUT)\n subprocess.check_call(['git', 'commit', '-m', msg])\n # Capture the output, as it may contain the GH_TOKEN.\n out = subprocess.check_output(['git', 'push', 'upstream_with_token', os.environ.get('TRAVIS_BRANCH')],\n stderr=subprocess.STDOUT)\n else:\n print('Would git commit, with the following message: \\n {}'.format(msg))\n", "path": ".CI/create_feedstocks.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nConvert all recipes into feedstocks.\n\nThis script is to be run in a TravisCI context, with all secret environment variables defined (BINSTAR_TOKEN, GH_TOKEN)\nSuch as:\n\n export GH_TOKEN=$(cat ~/.conda-smithy/github.token)\n\n\"\"\"\nfrom __future__ import print_function\n\nfrom conda_smithy.github import gh_token\nfrom contextlib import contextmanager\nfrom github import Github, GithubException\nimport os.path\nimport shutil\nimport subprocess\nimport tempfile\n\n\n# Enable DEBUG to run the diagnostics, without actually creating new feedstocks.\nDEBUG = False\n\n\ndef list_recipes():\n recipe_directory_name = 'recipes'\n if os.path.isdir(recipe_directory_name):\n recipes = os.listdir(recipe_directory_name)\n else:\n recipes = []\n\n for recipe_dir in recipes:\n # We don't list the \"example\" feedstock. 
It is an example, and is there\n # to be helpful.\n if recipe_dir.startswith('example'):\n continue\n path = os.path.abspath(os.path.join(recipe_directory_name, recipe_dir))\n yield path, recipe_dir\n\n\n@contextmanager\ndef tmp_dir(*args, **kwargs):\n temp_dir = tempfile.mkdtemp(*args, **kwargs)\n try:\n yield temp_dir\n finally:\n shutil.rmtree(temp_dir)\n\n\ndef repo_exists(organization, name):\n token = gh_token()\n gh = Github(token)\n # Use the organization provided.\n org = gh.get_organization(organization)\n try:\n org.get_repo(name)\n return True\n except GithubException as e:\n if e.status == 404:\n return False\n raise\n\n\nif __name__ == '__main__':\n is_merged_pr = (os.environ.get('TRAVIS_BRANCH') == 'master' and os.environ.get('TRAVIS_PULL_REQUEST') == 'false')\n\n smithy_conf = os.path.expanduser('~/.conda-smithy')\n if not os.path.exists(smithy_conf):\n os.mkdir(smithy_conf)\n\n def write_token(name, token):\n with open(os.path.join(smithy_conf, name + '.token'), 'w') as fh:\n fh.write(token)\n if 'APPVEYOR_TOKEN' in os.environ:\n write_token('appveyor', os.environ['APPVEYOR_TOKEN'])\n if 'CIRCLE_TOKEN' in os.environ:\n write_token('circle', os.environ['CIRCLE_TOKEN'])\n if 'GH_TOKEN' in os.environ:\n write_token('github', os.environ['GH_TOKEN'])\n\n owner_info = ['--organization', 'conda-forge']\n\n print('Calculating the recipes which need to be turned into feedstocks.')\n removed_recipes = []\n with tmp_dir('__feedstocks') as feedstocks_dir:\n feedstock_dirs = []\n for recipe_dir, name in list_recipes():\n feedstock_dir = os.path.join(feedstocks_dir, name + '-feedstock')\n os.mkdir(feedstock_dir)\n print('Making feedstock for {}'.format(name))\n\n subprocess.check_call(['conda', 'smithy', 'init', recipe_dir,\n '--feedstock-directory', feedstock_dir])\n if not is_merged_pr:\n # We just want to check that conda-smithy is doing its thing without having any metadata issues.\n continue\n\n feedstock_dirs.append([feedstock_dir, name, recipe_dir])\n\n subprocess.check_call(['git', 'remote', 'add', 'upstream_with_token',\n 'https://conda-forge-admin:{}@github.com/conda-forge/{}'.format(os.environ['GH_TOKEN'],\n os.path.basename(feedstock_dir))],\n cwd=feedstock_dir)\n\n # Sometimes we already have the feedstock created. We need to deal with that case.\n if repo_exists('conda-forge', os.path.basename(feedstock_dir)):\n subprocess.check_call(['git', 'fetch', 'upstream_with_token'], cwd=feedstock_dir)\n subprocess.check_call(['git', 'branch', '-m', 'master', 'old'], cwd=feedstock_dir)\n try:\n subprocess.check_call(['git', 'checkout', '-b', 'master', 'upstream_with_token/master'], cwd=feedstock_dir)\n except subprocess.CalledProcessError:\n # Sometimes, we have a repo, but there are no commits on it! 
Just catch that case.\n subprocess.check_call(['git', 'checkout', '-b' 'master'], cwd=feedstock_dir)\n else:\n subprocess.check_call(['conda', 'smithy', 'register-github', feedstock_dir] + owner_info)\n\n # Break the previous loop to allow the TravisCI registering to take place only once per function call.\n # Without this, intermittent failiures to synch the TravisCI repos ensue.\n for feedstock_dir, name, recipe_dir in feedstock_dirs:\n subprocess.check_call(['conda', 'smithy', 'register-ci', '--feedstock_directory', feedstock_dir] + owner_info)\n\n subprocess.check_call(['conda', 'smithy', 'rerender'], cwd=feedstock_dir)\n subprocess.check_call(['git', 'commit', '-am', \"Re-render the feedstock after CI registration.\"], cwd=feedstock_dir)\n # Capture the output, as it may contain the GH_TOKEN.\n out = subprocess.check_output(['git', 'push', 'upstream_with_token', 'master'], cwd=feedstock_dir,\n stderr=subprocess.STDOUT)\n\n # Remove this recipe from the repo.\n removed_recipes.append(name)\n if is_merged_pr:\n subprocess.check_call(['git', 'rm', '-r', recipe_dir])\n\n # Commit any removed packages.\n subprocess.check_call(['git', 'status'])\n if removed_recipes:\n subprocess.check_call(['git', 'checkout', os.environ.get('TRAVIS_BRANCH')])\n msg = ('Removed recipe{s} ({}) after converting into feedstock{s}. '\n '[ci skip]'.format(', '.join(removed_recipes),\n s=('s' if len(removed_recipes) > 1 else '')))\n if is_merged_pr:\n # Capture the output, as it may contain the GH_TOKEN.\n out = subprocess.check_output(['git', 'remote', 'add', 'upstream_with_token',\n 'https://conda-forge-admin:{}@github.com/conda-forge/staged-recipes'.format(os.environ['GH_TOKEN'])],\n stderr=subprocess.STDOUT)\n subprocess.check_call(['git', 'commit', '-m', msg])\n # Capture the output, as it may contain the GH_TOKEN.\n out = subprocess.check_output(['git', 'push', 'upstream_with_token', os.environ.get('TRAVIS_BRANCH')],\n stderr=subprocess.STDOUT)\n else:\n print('Would git commit, with the following message: \\n {}'.format(msg))\n", "path": ".CI/create_feedstocks.py"}]} | 2,313 | 122 |
gh_patches_debug_4516 | rasdani/github-patches | git_diff | Parsl__parsl-1650 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[PBSPro] max_blocks limit is not obeyed when submitting as array-jobs
**Describe the bug**
Parsl keeps launching new blocks beyond the `max_blocks` limit when array-jobs mode is enabled using Parsl config `scheduler_options` parameter (e.g. #PBS -J 1-10).
**To Reproduce**
Enable Parsl monitoring and checkpointing
Enable HighThroughputExecutor and PBSProProvider
Add `#PBS -J 1-10` option to `scheduler_options`
Set `max_blocks` limit to 3
**Expected behavior**
No more than 3 blocks should be launched.
**Actual behavior**
Parsl keeps on launching new blocks. The following polling log can be seen:
`2020-04-23 14:41:33.575 parsl.dataflow.strategy:205 [DEBUG] Executor htex_array_jobs has 2739 active tasks, 0/4 running/pending blocks, and 60 connected workers`
This means Parsl does not consider partially activated blocks (array jobs where some sub-jobs are in Running status while others are still in Queue status, out of the total array jobs in that block) when making a launch decision.
It seems that the conditional check is done here [1], but I couldn't find a place where the `provisioned_blocks` variable is updated. Could you shed some light on how this is updated?
**Environment**
- OS: RHEL6.1
- Python version: 3.7 (Anaconda 4.8.2)
- Parsl version: master branch commit: a30ce173cf8593a34b81d5a9cdd646dcf63fa798
**Distributed Environment**
- PBS Pro in NSCC's ASPIRE1
[1] https://github.com/Parsl/parsl/blob/master/parsl/providers/pbspro/pbspro.py#L109
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsl/providers/torque/torque.py`
Content:
```
1 import logging
2 import os
3 import time
4
5 from parsl.channels import LocalChannel
6 from parsl.launchers import AprunLauncher
7 from parsl.providers.provider_base import JobState, JobStatus
8 from parsl.providers.torque.template import template_string
9 from parsl.providers.cluster_provider import ClusterProvider
10 from parsl.utils import RepresentationMixin
11
12 logger = logging.getLogger(__name__)
13
14 # From the man pages for qstat for PBS/Torque systems
15 translate_table = {
16 'R': JobState.RUNNING,
17 'C': JobState.COMPLETED, # Completed after having run
18 'E': JobState.COMPLETED, # Exiting after having run
19 'H': JobState.HELD, # Held
20 'Q': JobState.PENDING, # Queued, and eligible to run
21 'W': JobState.PENDING, # Job is waiting for it's execution time (-a option) to be reached
22 'S': JobState.HELD
23 } # Suspended
24
25
26 class TorqueProvider(ClusterProvider, RepresentationMixin):
27 """Torque Execution Provider
28
29 This provider uses sbatch to submit, squeue for status, and scancel to cancel
30 jobs. The sbatch script to be used is created from a template file in this
31 same module.
32
33 Parameters
34 ----------
35 channel : Channel
36 Channel for accessing this provider. Possible channels include
37 :class:`~parsl.channels.LocalChannel` (the default),
38 :class:`~parsl.channels.SSHChannel`, or
39 :class:`~parsl.channels.SSHInteractiveLoginChannel`.
40 account : str
41 Account the job will be charged against.
42 queue : str
43 Torque queue to request blocks from.
44 nodes_per_block : int
45 Nodes to provision per block.
46 init_blocks : int
47 Number of blocks to provision at the start of the run. Default is 1.
48 min_blocks : int
49 Minimum number of blocks to maintain. Default is 0.
50 max_blocks : int
51 Maximum number of blocks to maintain.
52 parallelism : float
53 Ratio of provisioned task slots to active tasks. A parallelism value of 1 represents aggressive
54 scaling where as many resources as possible are used; parallelism close to 0 represents
55 the opposite situation in which as few resources as possible (i.e., min_blocks) are used.
56 walltime : str
57 Walltime requested per block in HH:MM:SS.
58 scheduler_options : str
59 String to prepend to the #PBS blocks in the submit script to the scheduler.
60 worker_init : str
61 Command to be run before starting a worker, such as 'module load Anaconda; source activate env'.
62 launcher : Launcher
63 Launcher for this provider. Possible launchers include
64 :class:`~parsl.launchers.AprunLauncher` (the default), or
65 :class:`~parsl.launchers.SingleNodeLauncher`,
66
67 """
68 def __init__(self,
69 channel=LocalChannel(),
70 account=None,
71 queue=None,
72 scheduler_options='',
73 worker_init='',
74 nodes_per_block=1,
75 init_blocks=1,
76 min_blocks=0,
77 max_blocks=100,
78 parallelism=1,
79 launcher=AprunLauncher(),
80 walltime="00:20:00",
81 cmd_timeout=120):
82 label = 'torque'
83 super().__init__(label,
84 channel,
85 nodes_per_block,
86 init_blocks,
87 min_blocks,
88 max_blocks,
89 parallelism,
90 walltime,
91 launcher,
92 cmd_timeout=cmd_timeout)
93
94 self.account = account
95 self.queue = queue
96 self.scheduler_options = scheduler_options
97 self.worker_init = worker_init
98 self.provisioned_blocks = 0
99 self.template_string = template_string
100
101 # Dictionary that keeps track of jobs, keyed on job_id
102 self.resources = {}
103
104 def _status(self):
105 ''' Internal: Do not call. Returns the status list for a list of job_ids
106
107 Args:
108 self
109
110 Returns:
111 [status...] : Status list of all jobs
112 '''
113
114 job_ids = list(self.resources.keys())
115 job_id_list = ' '.join(self.resources.keys())
116
117 jobs_missing = list(self.resources.keys())
118
119 retcode, stdout, stderr = self.execute_wait("qstat {0}".format(job_id_list))
120 for line in stdout.split('\n'):
121 parts = line.split()
122 if not parts or parts[0].upper().startswith('JOB') or parts[0].startswith('---'):
123 continue
124 job_id = parts[0] # likely truncated
125 for long_job_id in job_ids:
126 if long_job_id.startswith(job_id):
127 logger.debug('coerced job_id %s -> %s', job_id, long_job_id)
128 job_id = long_job_id
129 break
130 state = translate_table.get(parts[4], JobState.UNKNOWN)
131 self.resources[job_id]['status'] = JobStatus(state)
132 jobs_missing.remove(job_id)
133
134 # squeue does not report on jobs that are not running. So we are filling in the
135 # blanks for missing jobs, we might lose some information about why the jobs failed.
136 for missing_job in jobs_missing:
137 self.resources[missing_job]['status'] = JobStatus(JobState.COMPLETED)
138
139 def submit(self, command, tasks_per_node, job_name="parsl.torque"):
140 ''' Submits the command onto an Local Resource Manager job.
141 Submit returns an ID that corresponds to the task that was just submitted.
142
143 If tasks_per_node < 1 : ! This is illegal. tasks_per_node should be integer
144
145 If tasks_per_node == 1:
146 A single node is provisioned
147
148 If tasks_per_node > 1 :
149 tasks_per_node number of nodes are provisioned.
150
151 Args:
152 - command :(String) Commandline invocation to be made on the remote side.
153 - tasks_per_node (int) : command invocations to be launched per node
154
155 Kwargs:
156 - job_name (String): Name for job, must be unique
157
158 Returns:
159 - None: At capacity, cannot provision more
160 - job_id: (string) Identifier for the job
161
162 '''
163
164 if self.provisioned_blocks >= self.max_blocks:
165 logger.warning("[%s] at capacity, cannot add more blocks now", self.label)
166 return None
167
168 # Set job name
169 job_name = "parsl.{0}.{1}".format(job_name, time.time())
170
171 # Set script path
172 script_path = "{0}/{1}.submit".format(self.script_dir, job_name)
173 script_path = os.path.abspath(script_path)
174
175 logger.debug("Requesting nodes_per_block:%s tasks_per_node:%s", self.nodes_per_block,
176 tasks_per_node)
177
178 job_config = {}
179 # TODO : script_path might need to change to accommodate script dir set via channels
180 job_config["submit_script_dir"] = self.channel.script_dir
181 job_config["nodes"] = self.nodes_per_block
182 job_config["task_blocks"] = self.nodes_per_block * tasks_per_node
183 job_config["nodes_per_block"] = self.nodes_per_block
184 job_config["tasks_per_node"] = tasks_per_node
185 job_config["walltime"] = self.walltime
186 job_config["scheduler_options"] = self.scheduler_options
187 job_config["worker_init"] = self.worker_init
188 job_config["user_script"] = command
189
190 # Wrap the command
191 job_config["user_script"] = self.launcher(command,
192 tasks_per_node,
193 self.nodes_per_block)
194
195 logger.debug("Writing submit script")
196 self._write_submit_script(self.template_string, script_path, job_name, job_config)
197
198 channel_script_path = self.channel.push_file(script_path, self.channel.script_dir)
199
200 submit_options = ''
201 if self.queue is not None:
202 submit_options = '{0} -q {1}'.format(submit_options, self.queue)
203 if self.account is not None:
204 submit_options = '{0} -A {1}'.format(submit_options, self.account)
205
206 launch_cmd = "qsub {0} {1}".format(submit_options, channel_script_path)
207 retcode, stdout, stderr = self.execute_wait(launch_cmd)
208
209 job_id = None
210 if retcode == 0:
211 for line in stdout.split('\n'):
212 if line.strip():
213 job_id = line.strip()
214 self.resources[job_id] = {'job_id': job_id, 'status': JobStatus(JobState.PENDING)}
215 else:
216 message = "Command '{}' failed with return code {}".format(launch_cmd, retcode)
217 if (stdout is not None) and (stderr is not None):
218 message += "\nstderr:{}\nstdout{}".format(stderr.strip(), stdout.strip())
219 logger.error(message)
220
221 return job_id
222
223 def cancel(self, job_ids):
224 ''' Cancels the jobs specified by a list of job ids
225
226 Args:
227 job_ids : [<job_id> ...]
228
229 Returns :
230 [True/False...] : If the cancel operation fails the entire list will be False.
231 '''
232
233 job_id_list = ' '.join(job_ids)
234 retcode, stdout, stderr = self.execute_wait("qdel {0}".format(job_id_list))
235 rets = None
236 if retcode == 0:
237 for jid in job_ids:
238 self.resources[jid]['status'] = JobStatus(JobState.COMPLETED) # Setting state to exiting
239 rets = [True for i in job_ids]
240 else:
241 rets = [False for i in job_ids]
242
243 return rets
244
245 @property
246 def status_polling_interval(self):
247 return 60
248
249
250 if __name__ == "__main__":
251
252 print("None")
253
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parsl/providers/torque/torque.py b/parsl/providers/torque/torque.py
--- a/parsl/providers/torque/torque.py
+++ b/parsl/providers/torque/torque.py
@@ -13,6 +13,7 @@
# From the man pages for qstat for PBS/Torque systems
translate_table = {
+ 'B': JobState.RUNNING, # This state is returned for running array jobs
'R': JobState.RUNNING,
'C': JobState.COMPLETED, # Completed after having run
'E': JobState.COMPLETED, # Exiting after having run
| {"golden_diff": "diff --git a/parsl/providers/torque/torque.py b/parsl/providers/torque/torque.py\n--- a/parsl/providers/torque/torque.py\n+++ b/parsl/providers/torque/torque.py\n@@ -13,6 +13,7 @@\n \n # From the man pages for qstat for PBS/Torque systems\n translate_table = {\n+ 'B': JobState.RUNNING, # This state is returned for running array jobs\n 'R': JobState.RUNNING,\n 'C': JobState.COMPLETED, # Completed after having run\n 'E': JobState.COMPLETED, # Exiting after having run\n", "issue": "[PBSPro] max_blocks limit is not obeyed when submitting as array-jobs \n**Describe the bug**\r\nParsl keeps launching new blocks beyond the `max_blocks` limit when array-jobs mode is enabled using Parsl config `scheduler_options` parameter (e.g. #PBS -J 1-10).\r\n\r\n**To Reproduce**\r\nEnable Parsl monitoring and checkpointing\r\nEnable HighThroughputExecutor and PBSProProvider\r\nAdd `#PBS -J 1-10` option to `scheduler_options`\r\nSet `max_blocks` limit to 3\r\n\r\n**Expected behavior**\r\nNo more than 3 blocks should be launched.\r\n\r\n**Actual behavior**\r\nParsl keeps on launching new blocks. Following polling log can be seen\r\n`2020-04-23 14:41:33.575 parsl.dataflow.strategy:205 [DEBUG] Executor htex_array_jobs has 2739 active tasks, 0/4 running/pending blocks, and 60 connected workers`\r\n\r\nThis means Parsl does not consider partially activated blocks (array-jobs with some jobs Running others in Queue status out of total array jobs in that block) when making a launch decision.\r\nIt seems that conditional check is done here [1] but I couldn't find a place where the `provisioned_blocks` variable is updated. Could you shed some light on how this is updated?\r\n\r\n**Environment**\r\n- OS: RHEL6.1\r\n- Python version: 3.7 (Anaconda 4.8.2)\r\n- Parsl version: master branch commit: a30ce173cf8593a34b81d5a9cdd646dcf63fa798\r\n\r\n**Distributed Environment**\r\n- PBS Pro in NSCC's ASPIRE1\r\n\r\n[1] https://github.com/Parsl/parsl/blob/master/parsl/providers/pbspro/pbspro.py#L109\n", "before_files": [{"content": "import logging\nimport os\nimport time\n\nfrom parsl.channels import LocalChannel\nfrom parsl.launchers import AprunLauncher\nfrom parsl.providers.provider_base import JobState, JobStatus\nfrom parsl.providers.torque.template import template_string\nfrom parsl.providers.cluster_provider import ClusterProvider\nfrom parsl.utils import RepresentationMixin\n\nlogger = logging.getLogger(__name__)\n\n# From the man pages for qstat for PBS/Torque systems\ntranslate_table = {\n 'R': JobState.RUNNING,\n 'C': JobState.COMPLETED, # Completed after having run\n 'E': JobState.COMPLETED, # Exiting after having run\n 'H': JobState.HELD, # Held\n 'Q': JobState.PENDING, # Queued, and eligible to run\n 'W': JobState.PENDING, # Job is waiting for it's execution time (-a option) to be reached\n 'S': JobState.HELD\n} # Suspended\n\n\nclass TorqueProvider(ClusterProvider, RepresentationMixin):\n \"\"\"Torque Execution Provider\n\n This provider uses sbatch to submit, squeue for status, and scancel to cancel\n jobs. The sbatch script to be used is created from a template file in this\n same module.\n\n Parameters\n ----------\n channel : Channel\n Channel for accessing this provider. 
Possible channels include\n :class:`~parsl.channels.LocalChannel` (the default),\n :class:`~parsl.channels.SSHChannel`, or\n :class:`~parsl.channels.SSHInteractiveLoginChannel`.\n account : str\n Account the job will be charged against.\n queue : str\n Torque queue to request blocks from.\n nodes_per_block : int\n Nodes to provision per block.\n init_blocks : int\n Number of blocks to provision at the start of the run. Default is 1.\n min_blocks : int\n Minimum number of blocks to maintain. Default is 0.\n max_blocks : int\n Maximum number of blocks to maintain.\n parallelism : float\n Ratio of provisioned task slots to active tasks. A parallelism value of 1 represents aggressive\n scaling where as many resources as possible are used; parallelism close to 0 represents\n the opposite situation in which as few resources as possible (i.e., min_blocks) are used.\n walltime : str\n Walltime requested per block in HH:MM:SS.\n scheduler_options : str\n String to prepend to the #PBS blocks in the submit script to the scheduler.\n worker_init : str\n Command to be run before starting a worker, such as 'module load Anaconda; source activate env'.\n launcher : Launcher\n Launcher for this provider. Possible launchers include\n :class:`~parsl.launchers.AprunLauncher` (the default), or\n :class:`~parsl.launchers.SingleNodeLauncher`,\n\n \"\"\"\n def __init__(self,\n channel=LocalChannel(),\n account=None,\n queue=None,\n scheduler_options='',\n worker_init='',\n nodes_per_block=1,\n init_blocks=1,\n min_blocks=0,\n max_blocks=100,\n parallelism=1,\n launcher=AprunLauncher(),\n walltime=\"00:20:00\",\n cmd_timeout=120):\n label = 'torque'\n super().__init__(label,\n channel,\n nodes_per_block,\n init_blocks,\n min_blocks,\n max_blocks,\n parallelism,\n walltime,\n launcher,\n cmd_timeout=cmd_timeout)\n\n self.account = account\n self.queue = queue\n self.scheduler_options = scheduler_options\n self.worker_init = worker_init\n self.provisioned_blocks = 0\n self.template_string = template_string\n\n # Dictionary that keeps track of jobs, keyed on job_id\n self.resources = {}\n\n def _status(self):\n ''' Internal: Do not call. Returns the status list for a list of job_ids\n\n Args:\n self\n\n Returns:\n [status...] : Status list of all jobs\n '''\n\n job_ids = list(self.resources.keys())\n job_id_list = ' '.join(self.resources.keys())\n\n jobs_missing = list(self.resources.keys())\n\n retcode, stdout, stderr = self.execute_wait(\"qstat {0}\".format(job_id_list))\n for line in stdout.split('\\n'):\n parts = line.split()\n if not parts or parts[0].upper().startswith('JOB') or parts[0].startswith('---'):\n continue\n job_id = parts[0] # likely truncated\n for long_job_id in job_ids:\n if long_job_id.startswith(job_id):\n logger.debug('coerced job_id %s -> %s', job_id, long_job_id)\n job_id = long_job_id\n break\n state = translate_table.get(parts[4], JobState.UNKNOWN)\n self.resources[job_id]['status'] = JobStatus(state)\n jobs_missing.remove(job_id)\n\n # squeue does not report on jobs that are not running. So we are filling in the\n # blanks for missing jobs, we might lose some information about why the jobs failed.\n for missing_job in jobs_missing:\n self.resources[missing_job]['status'] = JobStatus(JobState.COMPLETED)\n\n def submit(self, command, tasks_per_node, job_name=\"parsl.torque\"):\n ''' Submits the command onto an Local Resource Manager job.\n Submit returns an ID that corresponds to the task that was just submitted.\n\n If tasks_per_node < 1 : ! This is illegal. 
tasks_per_node should be integer\n\n If tasks_per_node == 1:\n A single node is provisioned\n\n If tasks_per_node > 1 :\n tasks_per_node number of nodes are provisioned.\n\n Args:\n - command :(String) Commandline invocation to be made on the remote side.\n - tasks_per_node (int) : command invocations to be launched per node\n\n Kwargs:\n - job_name (String): Name for job, must be unique\n\n Returns:\n - None: At capacity, cannot provision more\n - job_id: (string) Identifier for the job\n\n '''\n\n if self.provisioned_blocks >= self.max_blocks:\n logger.warning(\"[%s] at capacity, cannot add more blocks now\", self.label)\n return None\n\n # Set job name\n job_name = \"parsl.{0}.{1}\".format(job_name, time.time())\n\n # Set script path\n script_path = \"{0}/{1}.submit\".format(self.script_dir, job_name)\n script_path = os.path.abspath(script_path)\n\n logger.debug(\"Requesting nodes_per_block:%s tasks_per_node:%s\", self.nodes_per_block,\n tasks_per_node)\n\n job_config = {}\n # TODO : script_path might need to change to accommodate script dir set via channels\n job_config[\"submit_script_dir\"] = self.channel.script_dir\n job_config[\"nodes\"] = self.nodes_per_block\n job_config[\"task_blocks\"] = self.nodes_per_block * tasks_per_node\n job_config[\"nodes_per_block\"] = self.nodes_per_block\n job_config[\"tasks_per_node\"] = tasks_per_node\n job_config[\"walltime\"] = self.walltime\n job_config[\"scheduler_options\"] = self.scheduler_options\n job_config[\"worker_init\"] = self.worker_init\n job_config[\"user_script\"] = command\n\n # Wrap the command\n job_config[\"user_script\"] = self.launcher(command,\n tasks_per_node,\n self.nodes_per_block)\n\n logger.debug(\"Writing submit script\")\n self._write_submit_script(self.template_string, script_path, job_name, job_config)\n\n channel_script_path = self.channel.push_file(script_path, self.channel.script_dir)\n\n submit_options = ''\n if self.queue is not None:\n submit_options = '{0} -q {1}'.format(submit_options, self.queue)\n if self.account is not None:\n submit_options = '{0} -A {1}'.format(submit_options, self.account)\n\n launch_cmd = \"qsub {0} {1}\".format(submit_options, channel_script_path)\n retcode, stdout, stderr = self.execute_wait(launch_cmd)\n\n job_id = None\n if retcode == 0:\n for line in stdout.split('\\n'):\n if line.strip():\n job_id = line.strip()\n self.resources[job_id] = {'job_id': job_id, 'status': JobStatus(JobState.PENDING)}\n else:\n message = \"Command '{}' failed with return code {}\".format(launch_cmd, retcode)\n if (stdout is not None) and (stderr is not None):\n message += \"\\nstderr:{}\\nstdout{}\".format(stderr.strip(), stdout.strip())\n logger.error(message)\n\n return job_id\n\n def cancel(self, job_ids):\n ''' Cancels the jobs specified by a list of job ids\n\n Args:\n job_ids : [<job_id> ...]\n\n Returns :\n [True/False...] 
: If the cancel operation fails the entire list will be False.\n '''\n\n job_id_list = ' '.join(job_ids)\n retcode, stdout, stderr = self.execute_wait(\"qdel {0}\".format(job_id_list))\n rets = None\n if retcode == 0:\n for jid in job_ids:\n self.resources[jid]['status'] = JobStatus(JobState.COMPLETED) # Setting state to exiting\n rets = [True for i in job_ids]\n else:\n rets = [False for i in job_ids]\n\n return rets\n\n @property\n def status_polling_interval(self):\n return 60\n\n\nif __name__ == \"__main__\":\n\n print(\"None\")\n", "path": "parsl/providers/torque/torque.py"}], "after_files": [{"content": "import logging\nimport os\nimport time\n\nfrom parsl.channels import LocalChannel\nfrom parsl.launchers import AprunLauncher\nfrom parsl.providers.provider_base import JobState, JobStatus\nfrom parsl.providers.torque.template import template_string\nfrom parsl.providers.cluster_provider import ClusterProvider\nfrom parsl.utils import RepresentationMixin\n\nlogger = logging.getLogger(__name__)\n\n# From the man pages for qstat for PBS/Torque systems\ntranslate_table = {\n 'B': JobState.RUNNING, # This state is returned for running array jobs\n 'R': JobState.RUNNING,\n 'C': JobState.COMPLETED, # Completed after having run\n 'E': JobState.COMPLETED, # Exiting after having run\n 'H': JobState.HELD, # Held\n 'Q': JobState.PENDING, # Queued, and eligible to run\n 'W': JobState.PENDING, # Job is waiting for it's execution time (-a option) to be reached\n 'S': JobState.HELD\n} # Suspended\n\n\nclass TorqueProvider(ClusterProvider, RepresentationMixin):\n \"\"\"Torque Execution Provider\n\n This provider uses sbatch to submit, squeue for status, and scancel to cancel\n jobs. The sbatch script to be used is created from a template file in this\n same module.\n\n Parameters\n ----------\n channel : Channel\n Channel for accessing this provider. Possible channels include\n :class:`~parsl.channels.LocalChannel` (the default),\n :class:`~parsl.channels.SSHChannel`, or\n :class:`~parsl.channels.SSHInteractiveLoginChannel`.\n account : str\n Account the job will be charged against.\n queue : str\n Torque queue to request blocks from.\n nodes_per_block : int\n Nodes to provision per block.\n init_blocks : int\n Number of blocks to provision at the start of the run. Default is 1.\n min_blocks : int\n Minimum number of blocks to maintain. Default is 0.\n max_blocks : int\n Maximum number of blocks to maintain.\n parallelism : float\n Ratio of provisioned task slots to active tasks. A parallelism value of 1 represents aggressive\n scaling where as many resources as possible are used; parallelism close to 0 represents\n the opposite situation in which as few resources as possible (i.e., min_blocks) are used.\n walltime : str\n Walltime requested per block in HH:MM:SS.\n scheduler_options : str\n String to prepend to the #PBS blocks in the submit script to the scheduler.\n worker_init : str\n Command to be run before starting a worker, such as 'module load Anaconda; source activate env'.\n launcher : Launcher\n Launcher for this provider. 
Possible launchers include\n :class:`~parsl.launchers.AprunLauncher` (the default), or\n :class:`~parsl.launchers.SingleNodeLauncher`,\n\n \"\"\"\n def __init__(self,\n channel=LocalChannel(),\n account=None,\n queue=None,\n scheduler_options='',\n worker_init='',\n nodes_per_block=1,\n init_blocks=1,\n min_blocks=0,\n max_blocks=100,\n parallelism=1,\n launcher=AprunLauncher(),\n walltime=\"00:20:00\",\n cmd_timeout=120):\n label = 'torque'\n super().__init__(label,\n channel,\n nodes_per_block,\n init_blocks,\n min_blocks,\n max_blocks,\n parallelism,\n walltime,\n launcher,\n cmd_timeout=cmd_timeout)\n\n self.account = account\n self.queue = queue\n self.scheduler_options = scheduler_options\n self.worker_init = worker_init\n self.provisioned_blocks = 0\n self.template_string = template_string\n\n # Dictionary that keeps track of jobs, keyed on job_id\n self.resources = {}\n\n def _status(self):\n ''' Internal: Do not call. Returns the status list for a list of job_ids\n\n Args:\n self\n\n Returns:\n [status...] : Status list of all jobs\n '''\n\n job_ids = list(self.resources.keys())\n job_id_list = ' '.join(self.resources.keys())\n\n jobs_missing = list(self.resources.keys())\n\n retcode, stdout, stderr = self.execute_wait(\"qstat {0}\".format(job_id_list))\n for line in stdout.split('\\n'):\n parts = line.split()\n if not parts or parts[0].upper().startswith('JOB') or parts[0].startswith('---'):\n continue\n job_id = parts[0] # likely truncated\n for long_job_id in job_ids:\n if long_job_id.startswith(job_id):\n logger.debug('coerced job_id %s -> %s', job_id, long_job_id)\n job_id = long_job_id\n break\n state = translate_table.get(parts[4], JobState.UNKNOWN)\n self.resources[job_id]['status'] = JobStatus(state)\n jobs_missing.remove(job_id)\n\n # squeue does not report on jobs that are not running. So we are filling in the\n # blanks for missing jobs, we might lose some information about why the jobs failed.\n for missing_job in jobs_missing:\n self.resources[missing_job]['status'] = JobStatus(JobState.COMPLETED)\n\n def submit(self, command, tasks_per_node, job_name=\"parsl.torque\"):\n ''' Submits the command onto an Local Resource Manager job.\n Submit returns an ID that corresponds to the task that was just submitted.\n\n If tasks_per_node < 1 : ! This is illegal. 
tasks_per_node should be integer\n\n If tasks_per_node == 1:\n A single node is provisioned\n\n If tasks_per_node > 1 :\n tasks_per_node number of nodes are provisioned.\n\n Args:\n - command :(String) Commandline invocation to be made on the remote side.\n - tasks_per_node (int) : command invocations to be launched per node\n\n Kwargs:\n - job_name (String): Name for job, must be unique\n\n Returns:\n - None: At capacity, cannot provision more\n - job_id: (string) Identifier for the job\n\n '''\n\n if self.provisioned_blocks >= self.max_blocks:\n logger.warning(\"[%s] at capacity, cannot add more blocks now\", self.label)\n return None\n\n # Set job name\n job_name = \"parsl.{0}.{1}\".format(job_name, time.time())\n\n # Set script path\n script_path = \"{0}/{1}.submit\".format(self.script_dir, job_name)\n script_path = os.path.abspath(script_path)\n\n logger.debug(\"Requesting nodes_per_block:%s tasks_per_node:%s\", self.nodes_per_block,\n tasks_per_node)\n\n job_config = {}\n # TODO : script_path might need to change to accommodate script dir set via channels\n job_config[\"submit_script_dir\"] = self.channel.script_dir\n job_config[\"nodes\"] = self.nodes_per_block\n job_config[\"task_blocks\"] = self.nodes_per_block * tasks_per_node\n job_config[\"nodes_per_block\"] = self.nodes_per_block\n job_config[\"tasks_per_node\"] = tasks_per_node\n job_config[\"walltime\"] = self.walltime\n job_config[\"scheduler_options\"] = self.scheduler_options\n job_config[\"worker_init\"] = self.worker_init\n job_config[\"user_script\"] = command\n\n # Wrap the command\n job_config[\"user_script\"] = self.launcher(command,\n tasks_per_node,\n self.nodes_per_block)\n\n logger.debug(\"Writing submit script\")\n self._write_submit_script(self.template_string, script_path, job_name, job_config)\n\n channel_script_path = self.channel.push_file(script_path, self.channel.script_dir)\n\n submit_options = ''\n if self.queue is not None:\n submit_options = '{0} -q {1}'.format(submit_options, self.queue)\n if self.account is not None:\n submit_options = '{0} -A {1}'.format(submit_options, self.account)\n\n launch_cmd = \"qsub {0} {1}\".format(submit_options, channel_script_path)\n retcode, stdout, stderr = self.execute_wait(launch_cmd)\n\n job_id = None\n if retcode == 0:\n for line in stdout.split('\\n'):\n if line.strip():\n job_id = line.strip()\n self.resources[job_id] = {'job_id': job_id, 'status': JobStatus(JobState.PENDING)}\n else:\n message = \"Command '{}' failed with return code {}\".format(launch_cmd, retcode)\n if (stdout is not None) and (stderr is not None):\n message += \"\\nstderr:{}\\nstdout{}\".format(stderr.strip(), stdout.strip())\n logger.error(message)\n\n return job_id\n\n def cancel(self, job_ids):\n ''' Cancels the jobs specified by a list of job ids\n\n Args:\n job_ids : [<job_id> ...]\n\n Returns :\n [True/False...] : If the cancel operation fails the entire list will be False.\n '''\n\n job_id_list = ' '.join(job_ids)\n retcode, stdout, stderr = self.execute_wait(\"qdel {0}\".format(job_id_list))\n rets = None\n if retcode == 0:\n for jid in job_ids:\n self.resources[jid]['status'] = JobStatus(JobState.COMPLETED) # Setting state to exiting\n rets = [True for i in job_ids]\n else:\n rets = [False for i in job_ids]\n\n return rets\n\n @property\n def status_polling_interval(self):\n return 60\n\n\nif __name__ == \"__main__\":\n\n print(\"None\")\n", "path": "parsl/providers/torque/torque.py"}]} | 3,461 | 141 |
gh_patches_debug_2565 | rasdani/github-patches | git_diff | ibis-project__ibis-1951 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implement Interval arithmetic on one or more backends
Once subtraction from #1489 is in, we'll want to implement this on at least one backend.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ibis/pandas/execution/temporal.py`
Content:
```
1 import datetime
2
3 import numpy as np
4 import pandas as pd
5 from pandas.core.groupby import SeriesGroupBy
6
7 import ibis
8 import ibis.expr.datatypes as dt
9 import ibis.expr.operations as ops
10 from ibis.pandas.core import (
11 date_types,
12 integer_types,
13 numeric_types,
14 timedelta_types,
15 timestamp_types,
16 )
17 from ibis.pandas.dispatch import execute_node, pre_execute
18
19
20 @execute_node.register(ops.Strftime, pd.Timestamp, str)
21 def execute_strftime_timestamp_str(op, data, format_string, **kwargs):
22 return data.strftime(format_string)
23
24
25 @execute_node.register(ops.Strftime, pd.Series, str)
26 def execute_strftime_series_str(op, data, format_string, **kwargs):
27 return data.dt.strftime(format_string)
28
29
30 @execute_node.register(ops.ExtractTemporalField, pd.Timestamp)
31 def execute_extract_timestamp_field_timestamp(op, data, **kwargs):
32 field_name = type(op).__name__.lower().replace('extract', '')
33 return getattr(data, field_name)
34
35
36 @execute_node.register(ops.ExtractMillisecond, pd.Timestamp)
37 def execute_extract_millisecond_timestamp(op, data, **kwargs):
38 return int(data.microsecond // 1000.0)
39
40
41 @execute_node.register(ops.ExtractTemporalField, pd.Series)
42 def execute_extract_timestamp_field_series(op, data, **kwargs):
43 field_name = type(op).__name__.lower().replace('extract', '')
44 return getattr(data.dt, field_name).astype(np.int32)
45
46
47 @execute_node.register(
48 ops.BetweenTime,
49 pd.Series,
50 (pd.Series, str, datetime.time),
51 (pd.Series, str, datetime.time),
52 )
53 def execute_between_time(op, data, lower, upper, **kwargs):
54 indexer = pd.DatetimeIndex(data).indexer_between_time(lower, upper)
55 result = np.zeros(len(data), dtype=np.bool_)
56 result[indexer] = True
57 return pd.Series(result)
58
59
60 @execute_node.register(ops.Date, pd.Series)
61 def execute_timestamp_date(op, data, **kwargs):
62 return data.dt.floor('d')
63
64
65 @execute_node.register((ops.TimestampTruncate, ops.DateTruncate), pd.Series)
66 def execute_timestamp_truncate(op, data, **kwargs):
67 dtype = 'datetime64[{}]'.format(op.unit)
68 array = data.values.astype(dtype)
69 return pd.Series(array, name=data.name)
70
71
72 OFFSET_CLASS = {
73 "Y": pd.offsets.DateOffset,
74 "Q": pd.offsets.DateOffset,
75 "M": pd.offsets.DateOffset,
76 "W": pd.offsets.DateOffset,
77 # all other units are timedelta64s
78 }
79
80
81 @execute_node.register(ops.IntervalFromInteger, pd.Series)
82 def execute_interval_from_integer_series(op, data, **kwargs):
83 unit = op.unit
84 resolution = "{}s".format(op.resolution)
85 cls = OFFSET_CLASS.get(unit, None)
86
87 # fast path for timedelta conversion
88 if cls is None:
89 return data.astype("timedelta64[{}]".format(unit))
90 return data.apply(
91 lambda n, cls=cls, resolution=resolution: cls(**{resolution: n})
92 )
93
94
95 @execute_node.register(ops.IntervalFromInteger, integer_types)
96 def execute_interval_from_integer_integer_types(op, data, **kwargs):
97 unit = op.unit
98 resolution = "{}s".format(op.resolution)
99 cls = OFFSET_CLASS.get(unit, None)
100
101 if cls is None:
102 return pd.Timedelta(data, unit=unit)
103 return cls(**{resolution: data})
104
105
106 @execute_node.register(ops.Cast, pd.Series, dt.Interval)
107 def execute_cast_integer_to_interval_series(op, data, type, **kwargs):
108 to = op.to
109 unit = to.unit
110 resolution = "{}s".format(to.resolution)
111 cls = OFFSET_CLASS.get(unit, None)
112
113 if cls is None:
114 return data.astype("timedelta64[{}]".format(unit))
115 return data.apply(
116 lambda n, cls=cls, resolution=resolution: cls(**{resolution: n})
117 )
118
119
120 @execute_node.register(ops.Cast, integer_types, dt.Interval)
121 def execute_cast_integer_to_interval_integer_types(op, data, type, **kwargs):
122 to = op.to
123 unit = to.unit
124 resolution = "{}s".format(to.resolution)
125 cls = OFFSET_CLASS.get(unit, None)
126
127 if cls is None:
128 return pd.Timedelta(data, unit=unit)
129 return cls(**{resolution: data})
130
131
132 @execute_node.register(ops.TimestampAdd, timestamp_types, timedelta_types)
133 def execute_timestamp_add_datetime_timedelta(op, left, right, **kwargs):
134 return pd.Timestamp(left) + pd.Timedelta(right)
135
136
137 @execute_node.register(ops.TimestampAdd, timestamp_types, pd.Series)
138 def execute_timestamp_add_datetime_series(op, left, right, **kwargs):
139 return pd.Timestamp(left) + right
140
141
142 @execute_node.register(ops.IntervalAdd, timedelta_types, timedelta_types)
143 def execute_interval_add_delta_delta(op, left, right, **kwargs):
144 return op.op(pd.Timedelta(left), pd.Timedelta(right))
145
146
147 @execute_node.register(ops.IntervalAdd, timedelta_types, pd.Series)
148 @execute_node.register(
149 ops.IntervalMultiply, timedelta_types, numeric_types + (pd.Series,)
150 )
151 def execute_interval_add_multiply_delta_series(op, left, right, **kwargs):
152 return op.op(pd.Timedelta(left), right)
153
154
155 @execute_node.register(
156 (ops.TimestampAdd, ops.IntervalAdd), pd.Series, timedelta_types
157 )
158 def execute_timestamp_interval_add_series_delta(op, left, right, **kwargs):
159 return left + pd.Timedelta(right)
160
161
162 @execute_node.register(
163 (ops.TimestampAdd, ops.IntervalAdd), pd.Series, pd.Series
164 )
165 def execute_timestamp_interval_add_series_series(op, left, right, **kwargs):
166 return left + right
167
168
169 @execute_node.register(ops.TimestampSub, timestamp_types, timedelta_types)
170 def execute_timestamp_sub_datetime_timedelta(op, left, right, **kwargs):
171 return pd.Timestamp(left) - pd.Timedelta(right)
172
173
174 @execute_node.register(
175 (ops.TimestampDiff, ops.TimestampSub), timestamp_types, pd.Series
176 )
177 def execute_timestamp_diff_sub_datetime_series(op, left, right, **kwargs):
178 return pd.Timestamp(left) - right
179
180
181 @execute_node.register(ops.TimestampSub, pd.Series, timedelta_types)
182 def execute_timestamp_sub_series_timedelta(op, left, right, **kwargs):
183 return left - pd.Timedelta(right)
184
185
186 @execute_node.register(
187 (ops.TimestampDiff, ops.TimestampSub), pd.Series, pd.Series
188 )
189 def execute_timestamp_diff_sub_series_series(op, left, right, **kwargs):
190 return left - right
191
192
193 @execute_node.register(ops.TimestampDiff, timestamp_types, timestamp_types)
194 def execute_timestamp_diff_datetime_datetime(op, left, right, **kwargs):
195 return pd.Timestamp(left) - pd.Timestamp(right)
196
197
198 @execute_node.register(ops.TimestampDiff, pd.Series, timestamp_types)
199 def execute_timestamp_diff_series_datetime(op, left, right, **kwargs):
200 return left - pd.Timestamp(right)
201
202
203 @execute_node.register(
204 ops.IntervalMultiply, pd.Series, numeric_types + (pd.Series,)
205 )
206 @execute_node.register(
207 ops.IntervalFloorDivide,
208 (pd.Timedelta, pd.Series),
209 numeric_types + (pd.Series,),
210 )
211 def execute_interval_multiply_fdiv_series_numeric(op, left, right, **kwargs):
212 return op.op(left, right)
213
214
215 @execute_node.register(ops.TimestampFromUNIX, (pd.Series,) + integer_types)
216 def execute_timestamp_from_unix(op, data, **kwargs):
217 return pd.to_datetime(data, unit=op.unit)
218
219
220 @pre_execute.register(ops.TimestampNow)
221 @pre_execute.register(ops.TimestampNow, ibis.client.Client)
222 def pre_execute_timestamp_now(op, *args, **kwargs):
223 return {op: pd.Timestamp('now')}
224
225
226 @execute_node.register(ops.DayOfWeekIndex, (str, datetime.date))
227 def execute_day_of_week_index_any(op, value, **kwargs):
228 return pd.Timestamp(value).dayofweek
229
230
231 @execute_node.register(ops.DayOfWeekIndex, pd.Series)
232 def execute_day_of_week_index_series(op, data, **kwargs):
233 return data.dt.dayofweek.astype(np.int16)
234
235
236 @execute_node.register(ops.DayOfWeekIndex, SeriesGroupBy)
237 def execute_day_of_week_index_series_group_by(op, data, **kwargs):
238 groupings = data.grouper.groupings
239 return data.obj.dt.dayofweek.astype(np.int16).groupby(groupings)
240
241
242 def day_name(obj):
243 """Backwards compatible name of day getting function.
244
245 Parameters
246 ----------
247 obj : Union[Series, pd.Timestamp]
248
249 Returns
250 -------
251 str
252 The name of the day corresponding to `obj`
253 """
254 try:
255 return obj.day_name()
256 except AttributeError:
257 return obj.weekday_name
258
259
260 @execute_node.register(ops.DayOfWeekName, (str, datetime.date))
261 def execute_day_of_week_name_any(op, value, **kwargs):
262 return day_name(pd.Timestamp(value))
263
264
265 @execute_node.register(ops.DayOfWeekName, pd.Series)
266 def execute_day_of_week_name_series(op, data, **kwargs):
267 return day_name(data.dt)
268
269
270 @execute_node.register(ops.DayOfWeekName, SeriesGroupBy)
271 def execute_day_of_week_name_series_group_by(op, data, **kwargs):
272 return day_name(data.obj.dt).groupby(data.grouper.groupings)
273
274
275 @execute_node.register(ops.DateSub, date_types, timedelta_types)
276 @execute_node.register((ops.DateDiff, ops.DateSub), date_types, pd.Series)
277 @execute_node.register(ops.DateSub, pd.Series, timedelta_types)
278 @execute_node.register((ops.DateDiff, ops.DateSub), pd.Series, pd.Series)
279 @execute_node.register(ops.DateDiff, date_types, date_types)
280 @execute_node.register(ops.DateDiff, pd.Series, date_types)
281 def execute_date_sub_diff(op, left, right, **kwargs):
282 return left - right
283
284
285 @execute_node.register(ops.DateAdd, pd.Series, timedelta_types)
286 @execute_node.register(ops.DateAdd, timedelta_types, pd.Series)
287 @execute_node.register(ops.DateAdd, pd.Series, pd.Series)
288 @execute_node.register(ops.DateAdd, date_types, timedelta_types)
289 @execute_node.register(ops.DateAdd, timedelta_types, date_types)
290 @execute_node.register(ops.DateAdd, date_types, pd.Series)
291 @execute_node.register(ops.DateAdd, pd.Series, date_types)
292 def execute_date_add(op, left, right, **kwargs):
293 return left + right
294
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ibis/pandas/execution/temporal.py b/ibis/pandas/execution/temporal.py
--- a/ibis/pandas/execution/temporal.py
+++ b/ibis/pandas/execution/temporal.py
@@ -184,7 +184,9 @@
@execute_node.register(
- (ops.TimestampDiff, ops.TimestampSub), pd.Series, pd.Series
+ (ops.TimestampDiff, ops.TimestampSub, ops.IntervalSubtract),
+ pd.Series,
+ pd.Series
)
def execute_timestamp_diff_sub_series_series(op, left, right, **kwargs):
return left - right
| {"golden_diff": "diff --git a/ibis/pandas/execution/temporal.py b/ibis/pandas/execution/temporal.py\n--- a/ibis/pandas/execution/temporal.py\n+++ b/ibis/pandas/execution/temporal.py\n@@ -184,7 +184,9 @@\n \n \n @execute_node.register(\n- (ops.TimestampDiff, ops.TimestampSub), pd.Series, pd.Series\n+ (ops.TimestampDiff, ops.TimestampSub, ops.IntervalSubtract),\n+ pd.Series,\n+ pd.Series\n )\n def execute_timestamp_diff_sub_series_series(op, left, right, **kwargs):\n return left - right\n", "issue": "Implement Interval arithmetic on one or more backends\nAfter subtraction is in from #1489, we'll want to implement this on at least one backend.\n", "before_files": [{"content": "import datetime\n\nimport numpy as np\nimport pandas as pd\nfrom pandas.core.groupby import SeriesGroupBy\n\nimport ibis\nimport ibis.expr.datatypes as dt\nimport ibis.expr.operations as ops\nfrom ibis.pandas.core import (\n date_types,\n integer_types,\n numeric_types,\n timedelta_types,\n timestamp_types,\n)\nfrom ibis.pandas.dispatch import execute_node, pre_execute\n\n\n@execute_node.register(ops.Strftime, pd.Timestamp, str)\ndef execute_strftime_timestamp_str(op, data, format_string, **kwargs):\n return data.strftime(format_string)\n\n\n@execute_node.register(ops.Strftime, pd.Series, str)\ndef execute_strftime_series_str(op, data, format_string, **kwargs):\n return data.dt.strftime(format_string)\n\n\n@execute_node.register(ops.ExtractTemporalField, pd.Timestamp)\ndef execute_extract_timestamp_field_timestamp(op, data, **kwargs):\n field_name = type(op).__name__.lower().replace('extract', '')\n return getattr(data, field_name)\n\n\n@execute_node.register(ops.ExtractMillisecond, pd.Timestamp)\ndef execute_extract_millisecond_timestamp(op, data, **kwargs):\n return int(data.microsecond // 1000.0)\n\n\n@execute_node.register(ops.ExtractTemporalField, pd.Series)\ndef execute_extract_timestamp_field_series(op, data, **kwargs):\n field_name = type(op).__name__.lower().replace('extract', '')\n return getattr(data.dt, field_name).astype(np.int32)\n\n\n@execute_node.register(\n ops.BetweenTime,\n pd.Series,\n (pd.Series, str, datetime.time),\n (pd.Series, str, datetime.time),\n)\ndef execute_between_time(op, data, lower, upper, **kwargs):\n indexer = pd.DatetimeIndex(data).indexer_between_time(lower, upper)\n result = np.zeros(len(data), dtype=np.bool_)\n result[indexer] = True\n return pd.Series(result)\n\n\n@execute_node.register(ops.Date, pd.Series)\ndef execute_timestamp_date(op, data, **kwargs):\n return data.dt.floor('d')\n\n\n@execute_node.register((ops.TimestampTruncate, ops.DateTruncate), pd.Series)\ndef execute_timestamp_truncate(op, data, **kwargs):\n dtype = 'datetime64[{}]'.format(op.unit)\n array = data.values.astype(dtype)\n return pd.Series(array, name=data.name)\n\n\nOFFSET_CLASS = {\n \"Y\": pd.offsets.DateOffset,\n \"Q\": pd.offsets.DateOffset,\n \"M\": pd.offsets.DateOffset,\n \"W\": pd.offsets.DateOffset,\n # all other units are timedelta64s\n}\n\n\n@execute_node.register(ops.IntervalFromInteger, pd.Series)\ndef execute_interval_from_integer_series(op, data, **kwargs):\n unit = op.unit\n resolution = \"{}s\".format(op.resolution)\n cls = OFFSET_CLASS.get(unit, None)\n\n # fast path for timedelta conversion\n if cls is None:\n return data.astype(\"timedelta64[{}]\".format(unit))\n return data.apply(\n lambda n, cls=cls, resolution=resolution: cls(**{resolution: n})\n )\n\n\n@execute_node.register(ops.IntervalFromInteger, integer_types)\ndef execute_interval_from_integer_integer_types(op, data, 
**kwargs):\n unit = op.unit\n resolution = \"{}s\".format(op.resolution)\n cls = OFFSET_CLASS.get(unit, None)\n\n if cls is None:\n return pd.Timedelta(data, unit=unit)\n return cls(**{resolution: data})\n\n\n@execute_node.register(ops.Cast, pd.Series, dt.Interval)\ndef execute_cast_integer_to_interval_series(op, data, type, **kwargs):\n to = op.to\n unit = to.unit\n resolution = \"{}s\".format(to.resolution)\n cls = OFFSET_CLASS.get(unit, None)\n\n if cls is None:\n return data.astype(\"timedelta64[{}]\".format(unit))\n return data.apply(\n lambda n, cls=cls, resolution=resolution: cls(**{resolution: n})\n )\n\n\n@execute_node.register(ops.Cast, integer_types, dt.Interval)\ndef execute_cast_integer_to_interval_integer_types(op, data, type, **kwargs):\n to = op.to\n unit = to.unit\n resolution = \"{}s\".format(to.resolution)\n cls = OFFSET_CLASS.get(unit, None)\n\n if cls is None:\n return pd.Timedelta(data, unit=unit)\n return cls(**{resolution: data})\n\n\n@execute_node.register(ops.TimestampAdd, timestamp_types, timedelta_types)\ndef execute_timestamp_add_datetime_timedelta(op, left, right, **kwargs):\n return pd.Timestamp(left) + pd.Timedelta(right)\n\n\n@execute_node.register(ops.TimestampAdd, timestamp_types, pd.Series)\ndef execute_timestamp_add_datetime_series(op, left, right, **kwargs):\n return pd.Timestamp(left) + right\n\n\n@execute_node.register(ops.IntervalAdd, timedelta_types, timedelta_types)\ndef execute_interval_add_delta_delta(op, left, right, **kwargs):\n return op.op(pd.Timedelta(left), pd.Timedelta(right))\n\n\n@execute_node.register(ops.IntervalAdd, timedelta_types, pd.Series)\n@execute_node.register(\n ops.IntervalMultiply, timedelta_types, numeric_types + (pd.Series,)\n)\ndef execute_interval_add_multiply_delta_series(op, left, right, **kwargs):\n return op.op(pd.Timedelta(left), right)\n\n\n@execute_node.register(\n (ops.TimestampAdd, ops.IntervalAdd), pd.Series, timedelta_types\n)\ndef execute_timestamp_interval_add_series_delta(op, left, right, **kwargs):\n return left + pd.Timedelta(right)\n\n\n@execute_node.register(\n (ops.TimestampAdd, ops.IntervalAdd), pd.Series, pd.Series\n)\ndef execute_timestamp_interval_add_series_series(op, left, right, **kwargs):\n return left + right\n\n\n@execute_node.register(ops.TimestampSub, timestamp_types, timedelta_types)\ndef execute_timestamp_sub_datetime_timedelta(op, left, right, **kwargs):\n return pd.Timestamp(left) - pd.Timedelta(right)\n\n\n@execute_node.register(\n (ops.TimestampDiff, ops.TimestampSub), timestamp_types, pd.Series\n)\ndef execute_timestamp_diff_sub_datetime_series(op, left, right, **kwargs):\n return pd.Timestamp(left) - right\n\n\n@execute_node.register(ops.TimestampSub, pd.Series, timedelta_types)\ndef execute_timestamp_sub_series_timedelta(op, left, right, **kwargs):\n return left - pd.Timedelta(right)\n\n\n@execute_node.register(\n (ops.TimestampDiff, ops.TimestampSub), pd.Series, pd.Series\n)\ndef execute_timestamp_diff_sub_series_series(op, left, right, **kwargs):\n return left - right\n\n\n@execute_node.register(ops.TimestampDiff, timestamp_types, timestamp_types)\ndef execute_timestamp_diff_datetime_datetime(op, left, right, **kwargs):\n return pd.Timestamp(left) - pd.Timestamp(right)\n\n\n@execute_node.register(ops.TimestampDiff, pd.Series, timestamp_types)\ndef execute_timestamp_diff_series_datetime(op, left, right, **kwargs):\n return left - pd.Timestamp(right)\n\n\n@execute_node.register(\n ops.IntervalMultiply, pd.Series, numeric_types + (pd.Series,)\n)\n@execute_node.register(\n 
ops.IntervalFloorDivide,\n (pd.Timedelta, pd.Series),\n numeric_types + (pd.Series,),\n)\ndef execute_interval_multiply_fdiv_series_numeric(op, left, right, **kwargs):\n return op.op(left, right)\n\n\n@execute_node.register(ops.TimestampFromUNIX, (pd.Series,) + integer_types)\ndef execute_timestamp_from_unix(op, data, **kwargs):\n return pd.to_datetime(data, unit=op.unit)\n\n\n@pre_execute.register(ops.TimestampNow)\n@pre_execute.register(ops.TimestampNow, ibis.client.Client)\ndef pre_execute_timestamp_now(op, *args, **kwargs):\n return {op: pd.Timestamp('now')}\n\n\n@execute_node.register(ops.DayOfWeekIndex, (str, datetime.date))\ndef execute_day_of_week_index_any(op, value, **kwargs):\n return pd.Timestamp(value).dayofweek\n\n\n@execute_node.register(ops.DayOfWeekIndex, pd.Series)\ndef execute_day_of_week_index_series(op, data, **kwargs):\n return data.dt.dayofweek.astype(np.int16)\n\n\n@execute_node.register(ops.DayOfWeekIndex, SeriesGroupBy)\ndef execute_day_of_week_index_series_group_by(op, data, **kwargs):\n groupings = data.grouper.groupings\n return data.obj.dt.dayofweek.astype(np.int16).groupby(groupings)\n\n\ndef day_name(obj):\n \"\"\"Backwards compatible name of day getting function.\n\n Parameters\n ----------\n obj : Union[Series, pd.Timestamp]\n\n Returns\n -------\n str\n The name of the day corresponding to `obj`\n \"\"\"\n try:\n return obj.day_name()\n except AttributeError:\n return obj.weekday_name\n\n\n@execute_node.register(ops.DayOfWeekName, (str, datetime.date))\ndef execute_day_of_week_name_any(op, value, **kwargs):\n return day_name(pd.Timestamp(value))\n\n\n@execute_node.register(ops.DayOfWeekName, pd.Series)\ndef execute_day_of_week_name_series(op, data, **kwargs):\n return day_name(data.dt)\n\n\n@execute_node.register(ops.DayOfWeekName, SeriesGroupBy)\ndef execute_day_of_week_name_series_group_by(op, data, **kwargs):\n return day_name(data.obj.dt).groupby(data.grouper.groupings)\n\n\n@execute_node.register(ops.DateSub, date_types, timedelta_types)\n@execute_node.register((ops.DateDiff, ops.DateSub), date_types, pd.Series)\n@execute_node.register(ops.DateSub, pd.Series, timedelta_types)\n@execute_node.register((ops.DateDiff, ops.DateSub), pd.Series, pd.Series)\n@execute_node.register(ops.DateDiff, date_types, date_types)\n@execute_node.register(ops.DateDiff, pd.Series, date_types)\ndef execute_date_sub_diff(op, left, right, **kwargs):\n return left - right\n\n\n@execute_node.register(ops.DateAdd, pd.Series, timedelta_types)\n@execute_node.register(ops.DateAdd, timedelta_types, pd.Series)\n@execute_node.register(ops.DateAdd, pd.Series, pd.Series)\n@execute_node.register(ops.DateAdd, date_types, timedelta_types)\n@execute_node.register(ops.DateAdd, timedelta_types, date_types)\n@execute_node.register(ops.DateAdd, date_types, pd.Series)\n@execute_node.register(ops.DateAdd, pd.Series, date_types)\ndef execute_date_add(op, left, right, **kwargs):\n return left + right\n", "path": "ibis/pandas/execution/temporal.py"}], "after_files": [{"content": "import datetime\n\nimport numpy as np\nimport pandas as pd\nfrom pandas.core.groupby import SeriesGroupBy\n\nimport ibis\nimport ibis.expr.datatypes as dt\nimport ibis.expr.operations as ops\nfrom ibis.pandas.core import (\n date_types,\n integer_types,\n numeric_types,\n timedelta_types,\n timestamp_types,\n)\nfrom ibis.pandas.dispatch import execute_node, pre_execute\n\n\n@execute_node.register(ops.Strftime, pd.Timestamp, str)\ndef execute_strftime_timestamp_str(op, data, format_string, **kwargs):\n return 
data.strftime(format_string)\n\n\n@execute_node.register(ops.Strftime, pd.Series, str)\ndef execute_strftime_series_str(op, data, format_string, **kwargs):\n return data.dt.strftime(format_string)\n\n\n@execute_node.register(ops.ExtractTemporalField, pd.Timestamp)\ndef execute_extract_timestamp_field_timestamp(op, data, **kwargs):\n field_name = type(op).__name__.lower().replace('extract', '')\n return getattr(data, field_name)\n\n\n@execute_node.register(ops.ExtractMillisecond, pd.Timestamp)\ndef execute_extract_millisecond_timestamp(op, data, **kwargs):\n return int(data.microsecond // 1000.0)\n\n\n@execute_node.register(ops.ExtractTemporalField, pd.Series)\ndef execute_extract_timestamp_field_series(op, data, **kwargs):\n field_name = type(op).__name__.lower().replace('extract', '')\n return getattr(data.dt, field_name).astype(np.int32)\n\n\n@execute_node.register(\n ops.BetweenTime,\n pd.Series,\n (pd.Series, str, datetime.time),\n (pd.Series, str, datetime.time),\n)\ndef execute_between_time(op, data, lower, upper, **kwargs):\n indexer = pd.DatetimeIndex(data).indexer_between_time(lower, upper)\n result = np.zeros(len(data), dtype=np.bool_)\n result[indexer] = True\n return pd.Series(result)\n\n\n@execute_node.register(ops.Date, pd.Series)\ndef execute_timestamp_date(op, data, **kwargs):\n return data.dt.floor('d')\n\n\n@execute_node.register((ops.TimestampTruncate, ops.DateTruncate), pd.Series)\ndef execute_timestamp_truncate(op, data, **kwargs):\n dtype = 'datetime64[{}]'.format(op.unit)\n array = data.values.astype(dtype)\n return pd.Series(array, name=data.name)\n\n\nOFFSET_CLASS = {\n \"Y\": pd.offsets.DateOffset,\n \"Q\": pd.offsets.DateOffset,\n \"M\": pd.offsets.DateOffset,\n \"W\": pd.offsets.DateOffset,\n # all other units are timedelta64s\n}\n\n\n@execute_node.register(ops.IntervalFromInteger, pd.Series)\ndef execute_interval_from_integer_series(op, data, **kwargs):\n unit = op.unit\n resolution = \"{}s\".format(op.resolution)\n cls = OFFSET_CLASS.get(unit, None)\n\n # fast path for timedelta conversion\n if cls is None:\n return data.astype(\"timedelta64[{}]\".format(unit))\n return data.apply(\n lambda n, cls=cls, resolution=resolution: cls(**{resolution: n})\n )\n\n\n@execute_node.register(ops.IntervalFromInteger, integer_types)\ndef execute_interval_from_integer_integer_types(op, data, **kwargs):\n unit = op.unit\n resolution = \"{}s\".format(op.resolution)\n cls = OFFSET_CLASS.get(unit, None)\n\n if cls is None:\n return pd.Timedelta(data, unit=unit)\n return cls(**{resolution: data})\n\n\n@execute_node.register(ops.Cast, pd.Series, dt.Interval)\ndef execute_cast_integer_to_interval_series(op, data, type, **kwargs):\n to = op.to\n unit = to.unit\n resolution = \"{}s\".format(to.resolution)\n cls = OFFSET_CLASS.get(unit, None)\n\n if cls is None:\n return data.astype(\"timedelta64[{}]\".format(unit))\n return data.apply(\n lambda n, cls=cls, resolution=resolution: cls(**{resolution: n})\n )\n\n\n@execute_node.register(ops.Cast, integer_types, dt.Interval)\ndef execute_cast_integer_to_interval_integer_types(op, data, type, **kwargs):\n to = op.to\n unit = to.unit\n resolution = \"{}s\".format(to.resolution)\n cls = OFFSET_CLASS.get(unit, None)\n\n if cls is None:\n return pd.Timedelta(data, unit=unit)\n return cls(**{resolution: data})\n\n\n@execute_node.register(ops.TimestampAdd, timestamp_types, timedelta_types)\ndef execute_timestamp_add_datetime_timedelta(op, left, right, **kwargs):\n return pd.Timestamp(left) + 
pd.Timedelta(right)\n\n\n@execute_node.register(ops.TimestampAdd, timestamp_types, pd.Series)\ndef execute_timestamp_add_datetime_series(op, left, right, **kwargs):\n return pd.Timestamp(left) + right\n\n\n@execute_node.register(ops.IntervalAdd, timedelta_types, timedelta_types)\ndef execute_interval_add_delta_delta(op, left, right, **kwargs):\n return op.op(pd.Timedelta(left), pd.Timedelta(right))\n\n\n@execute_node.register(ops.IntervalAdd, timedelta_types, pd.Series)\n@execute_node.register(\n ops.IntervalMultiply, timedelta_types, numeric_types + (pd.Series,)\n)\ndef execute_interval_add_multiply_delta_series(op, left, right, **kwargs):\n return op.op(pd.Timedelta(left), right)\n\n\n@execute_node.register(\n (ops.TimestampAdd, ops.IntervalAdd), pd.Series, timedelta_types\n)\ndef execute_timestamp_interval_add_series_delta(op, left, right, **kwargs):\n return left + pd.Timedelta(right)\n\n\n@execute_node.register(\n (ops.TimestampAdd, ops.IntervalAdd), pd.Series, pd.Series\n)\ndef execute_timestamp_interval_add_series_series(op, left, right, **kwargs):\n return left + right\n\n\n@execute_node.register(ops.TimestampSub, timestamp_types, timedelta_types)\ndef execute_timestamp_sub_datetime_timedelta(op, left, right, **kwargs):\n return pd.Timestamp(left) - pd.Timedelta(right)\n\n\n@execute_node.register(\n (ops.TimestampDiff, ops.TimestampSub), timestamp_types, pd.Series\n)\ndef execute_timestamp_diff_sub_datetime_series(op, left, right, **kwargs):\n return pd.Timestamp(left) - right\n\n\n@execute_node.register(ops.TimestampSub, pd.Series, timedelta_types)\ndef execute_timestamp_sub_series_timedelta(op, left, right, **kwargs):\n return left - pd.Timedelta(right)\n\n\n@execute_node.register(\n (ops.TimestampDiff, ops.TimestampSub, ops.IntervalSubtract),\n pd.Series,\n pd.Series\n)\ndef execute_timestamp_diff_sub_series_series(op, left, right, **kwargs):\n return left - right\n\n\n@execute_node.register(ops.TimestampDiff, timestamp_types, timestamp_types)\ndef execute_timestamp_diff_datetime_datetime(op, left, right, **kwargs):\n return pd.Timestamp(left) - pd.Timestamp(right)\n\n\n@execute_node.register(ops.TimestampDiff, pd.Series, timestamp_types)\ndef execute_timestamp_diff_series_datetime(op, left, right, **kwargs):\n return left - pd.Timestamp(right)\n\n\n@execute_node.register(\n ops.IntervalMultiply, pd.Series, numeric_types + (pd.Series,)\n)\n@execute_node.register(\n ops.IntervalFloorDivide,\n (pd.Timedelta, pd.Series),\n numeric_types + (pd.Series,),\n)\ndef execute_interval_multiply_fdiv_series_numeric(op, left, right, **kwargs):\n return op.op(left, right)\n\n\n@execute_node.register(ops.TimestampFromUNIX, (pd.Series,) + integer_types)\ndef execute_timestamp_from_unix(op, data, **kwargs):\n return pd.to_datetime(data, unit=op.unit)\n\n\n@pre_execute.register(ops.TimestampNow)\n@pre_execute.register(ops.TimestampNow, ibis.client.Client)\ndef pre_execute_timestamp_now(op, *args, **kwargs):\n return {op: pd.Timestamp('now')}\n\n\n@execute_node.register(ops.DayOfWeekIndex, (str, datetime.date))\ndef execute_day_of_week_index_any(op, value, **kwargs):\n return pd.Timestamp(value).dayofweek\n\n\n@execute_node.register(ops.DayOfWeekIndex, pd.Series)\ndef execute_day_of_week_index_series(op, data, **kwargs):\n return data.dt.dayofweek.astype(np.int16)\n\n\n@execute_node.register(ops.DayOfWeekIndex, SeriesGroupBy)\ndef execute_day_of_week_index_series_group_by(op, data, **kwargs):\n groupings = data.grouper.groupings\n return 
data.obj.dt.dayofweek.astype(np.int16).groupby(groupings)\n\n\ndef day_name(obj):\n \"\"\"Backwards compatible name of day getting function.\n\n Parameters\n ----------\n obj : Union[Series, pd.Timestamp]\n\n Returns\n -------\n str\n The name of the day corresponding to `obj`\n \"\"\"\n try:\n return obj.day_name()\n except AttributeError:\n return obj.weekday_name\n\n\n@execute_node.register(ops.DayOfWeekName, (str, datetime.date))\ndef execute_day_of_week_name_any(op, value, **kwargs):\n return day_name(pd.Timestamp(value))\n\n\n@execute_node.register(ops.DayOfWeekName, pd.Series)\ndef execute_day_of_week_name_series(op, data, **kwargs):\n return day_name(data.dt)\n\n\n@execute_node.register(ops.DayOfWeekName, SeriesGroupBy)\ndef execute_day_of_week_name_series_group_by(op, data, **kwargs):\n return day_name(data.obj.dt).groupby(data.grouper.groupings)\n\n\n@execute_node.register(ops.DateSub, date_types, timedelta_types)\n@execute_node.register((ops.DateDiff, ops.DateSub), date_types, pd.Series)\n@execute_node.register(ops.DateSub, pd.Series, timedelta_types)\n@execute_node.register((ops.DateDiff, ops.DateSub), pd.Series, pd.Series)\n@execute_node.register(ops.DateDiff, date_types, date_types)\n@execute_node.register(ops.DateDiff, pd.Series, date_types)\ndef execute_date_sub_diff(op, left, right, **kwargs):\n return left - right\n\n\n@execute_node.register(ops.DateAdd, pd.Series, timedelta_types)\n@execute_node.register(ops.DateAdd, timedelta_types, pd.Series)\n@execute_node.register(ops.DateAdd, pd.Series, pd.Series)\n@execute_node.register(ops.DateAdd, date_types, timedelta_types)\n@execute_node.register(ops.DateAdd, timedelta_types, date_types)\n@execute_node.register(ops.DateAdd, date_types, pd.Series)\n@execute_node.register(ops.DateAdd, pd.Series, date_types)\ndef execute_date_add(op, left, right, **kwargs):\n return left + right\n", "path": "ibis/pandas/execution/temporal.py"}]} | 3,369 | 138 |
gh_patches_debug_41854 | rasdani/github-patches | git_diff | iterative__dvc-3020 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
import: Handle non-DVC Git repositories
After https://github.com/iterative/dvc/pull/2889, `dvc import` can also import files that are tracked by Git but not by DVC. DVC still requires that they come from a DVC repository rather than from any Git repository, although there is no longer any need for that restriction.
--- END ISSUE ---
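To make the request concrete, here is a minimal sketch of one way to treat a plain Git repository as a valid import source. It assumes a `repo_dir` that has already been cloned to a temporary directory and relies on DVC's `Repo` raising `NotDvcRepoError` when the directory is not a DVC repository; the helper name is illustrative, and the shape follows the patch recorded later in this entry.
```python
import os

from dvc.exceptions import NotDvcRepoError, OutputNotFoundError
from dvc.repo import Repo


def is_plain_git_file(repo_dir, path):
    # Absolute paths cannot point inside the cloned repository.
    if os.path.isabs(path):
        return False

    try:
        repo = Repo(repo_dir)
    except NotDvcRepoError:
        # Not a DVC repo at all: every tracked path is just a Git file.
        return True

    try:
        out = repo.find_out_by_relpath(path)
        # An output with use_cache disabled is stored in Git, not the DVC cache.
        return not out.use_cache
    except OutputNotFoundError:
        return True
    finally:
        repo.close()
```
When this returns `True`, the file can simply be copied out of the clone instead of being pulled from a DVC remote.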
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/external_repo.py`
Content:
```
1 import os
2 import tempfile
3 from contextlib import contextmanager
4 from distutils.dir_util import copy_tree
5
6 from funcy import retry
7
8 from dvc.config import NoRemoteError, ConfigError
9 from dvc.exceptions import NoRemoteInExternalRepoError
10 from dvc.remote import RemoteConfig
11 from dvc.exceptions import NoOutputInExternalRepoError
12 from dvc.exceptions import OutputNotFoundError
13 from dvc.utils.fs import remove
14
15
16 REPO_CACHE = {}
17
18
19 @contextmanager
20 def external_repo(url=None, rev=None, rev_lock=None, cache_dir=None):
21 from dvc.repo import Repo
22
23 path = _external_repo(url=url, rev=rev_lock or rev, cache_dir=cache_dir)
24 repo = Repo(path)
25 try:
26 yield repo
27 except NoRemoteError:
28 raise NoRemoteInExternalRepoError(url)
29 except OutputNotFoundError as exc:
30 if exc.repo is repo:
31 raise NoOutputInExternalRepoError(exc.output, repo.root_dir, url)
32 raise
33 repo.close()
34
35
36 def _external_repo(url=None, rev=None, cache_dir=None):
37 from dvc.config import Config
38 from dvc.cache import CacheConfig
39 from dvc.repo import Repo
40
41 key = (url, rev, cache_dir)
42 if key in REPO_CACHE:
43 return REPO_CACHE[key]
44
45 new_path = tempfile.mkdtemp("dvc-erepo")
46
47 # Copy and adjust existing clone
48 if (url, None, None) in REPO_CACHE:
49 old_path = REPO_CACHE[url, None, None]
50
51 # This one unlike shutil.copytree() works with an existing dir
52 copy_tree(old_path, new_path)
53 else:
54 # Create a new clone
55 _clone_repo(url, new_path)
56
57 # Save clean clone dir so that we will have access to a default branch
58 clean_clone_path = tempfile.mkdtemp("dvc-erepo")
59 copy_tree(new_path, clean_clone_path)
60 REPO_CACHE[url, None, None] = clean_clone_path
61
62 # Adjust new clone/copy to fit rev and cache_dir
63
64 # Checkout needs to be done first because current branch might not be
65 # DVC repository
66 if rev is not None:
67 _git_checkout(new_path, rev)
68
69 repo = Repo(new_path)
70 try:
71 # check if the URL is local and no default remote is present
72 # add default remote pointing to the original repo's cache location
73 if os.path.isdir(url):
74 rconfig = RemoteConfig(repo.config)
75 if not _default_remote_set(rconfig):
76 original_repo = Repo(url)
77 try:
78 rconfig.add(
79 "auto-generated-upstream",
80 original_repo.cache.local.cache_dir,
81 default=True,
82 level=Config.LEVEL_LOCAL,
83 )
84 finally:
85 original_repo.close()
86
87 if cache_dir is not None:
88 cache_config = CacheConfig(repo.config)
89 cache_config.set_dir(cache_dir, level=Config.LEVEL_LOCAL)
90 finally:
91 # Need to close/reopen repo to force config reread
92 repo.close()
93
94 REPO_CACHE[key] = new_path
95 return new_path
96
97
98 def _git_checkout(repo_path, revision):
99 from dvc.scm import Git
100
101 git = Git(repo_path)
102 try:
103 git.checkout(revision)
104 finally:
105 git.close()
106
107
108 def clean_repos():
109 # Outside code should not see cache while we are removing
110 repo_paths = list(REPO_CACHE.values())
111 REPO_CACHE.clear()
112
113 for path in repo_paths:
114 _remove(path)
115
116
117 def _remove(path):
118 if os.name == "nt":
119 # git.exe may hang for a while not permitting to remove temp dir
120 os_retry = retry(5, errors=OSError, timeout=0.1)
121 os_retry(remove)(path)
122 else:
123 remove(path)
124
125
126 def _clone_repo(url, path):
127 from dvc.scm.git import Git
128
129 git = Git.clone(url, path)
130 git.close()
131
132
133 def _default_remote_set(rconfig):
134 """
135 Checks if default remote config is present.
136 Args:
137 rconfig: a remote config
138
139 Returns:
140 True if the default remote config is set, else False
141 """
142 try:
143 rconfig.get_default()
144 return True
145 except ConfigError:
146 return False
147
```
Path: `dvc/dependency/repo.py`
Content:
```
1 import copy
2 import os
3 from contextlib import contextmanager
4 from dvc.utils.compat import FileNotFoundError
5
6 from funcy import merge
7
8 from .local import DependencyLOCAL
9 from dvc.external_repo import external_repo
10 from dvc.exceptions import OutputNotFoundError
11 from dvc.exceptions import PathMissingError
12 from dvc.utils.fs import fs_copy
13
14
15 class DependencyREPO(DependencyLOCAL):
16 PARAM_REPO = "repo"
17 PARAM_URL = "url"
18 PARAM_REV = "rev"
19 PARAM_REV_LOCK = "rev_lock"
20
21 REPO_SCHEMA = {PARAM_URL: str, PARAM_REV: str, PARAM_REV_LOCK: str}
22
23 def __init__(self, def_repo, stage, *args, **kwargs):
24 self.def_repo = def_repo
25 super(DependencyREPO, self).__init__(stage, *args, **kwargs)
26
27 def _parse_path(self, remote, path):
28 return None
29
30 @property
31 def is_in_repo(self):
32 return False
33
34 @property
35 def repo_pair(self):
36 d = self.def_repo
37 return d[self.PARAM_URL], d[self.PARAM_REV_LOCK] or d[self.PARAM_REV]
38
39 def __str__(self):
40 return "{} ({})".format(self.def_path, self.def_repo[self.PARAM_URL])
41
42 @contextmanager
43 def _make_repo(self, **overrides):
44 with external_repo(**merge(self.def_repo, overrides)) as repo:
45 yield repo
46
47 def status(self):
48 with self._make_repo() as repo:
49 current = repo.find_out_by_relpath(self.def_path).info
50
51 with self._make_repo(rev_lock=None) as repo:
52 updated = repo.find_out_by_relpath(self.def_path).info
53
54 if current != updated:
55 return {str(self): "update available"}
56
57 return {}
58
59 def save(self):
60 pass
61
62 def dumpd(self):
63 return {self.PARAM_PATH: self.def_path, self.PARAM_REPO: self.def_repo}
64
65 def fetch(self):
66 with self._make_repo(
67 cache_dir=self.repo.cache.local.cache_dir
68 ) as repo:
69 self.def_repo[self.PARAM_REV_LOCK] = repo.scm.get_rev()
70
71 out = repo.find_out_by_relpath(self.def_path)
72 with repo.state:
73 repo.cloud.pull(out.get_used_cache())
74
75 return out
76
77 @staticmethod
78 def _is_git_file(repo, path):
79 if not os.path.isabs(path):
80 try:
81 output = repo.find_out_by_relpath(path)
82 if not output.use_cache:
83 return True
84 except OutputNotFoundError:
85 return True
86 return False
87
88 def _copy_if_git_file(self, to_path):
89 src_path = self.def_path
90 with self._make_repo(
91 cache_dir=self.repo.cache.local.cache_dir
92 ) as repo:
93 if not self._is_git_file(repo, src_path):
94 return False
95
96 src_full_path = os.path.join(repo.root_dir, src_path)
97 dst_full_path = os.path.abspath(to_path)
98 fs_copy(src_full_path, dst_full_path)
99 return True
100
101 def download(self, to):
102 try:
103 if self._copy_if_git_file(to.fspath):
104 return
105
106 out = self.fetch()
107 to.info = copy.copy(out.info)
108 to.checkout()
109 except (FileNotFoundError):
110 raise PathMissingError(
111 self.def_path, self.def_repo[self.PARAM_URL]
112 )
113
114 def update(self):
115 with self._make_repo(rev_lock=None) as repo:
116 self.def_repo[self.PARAM_REV_LOCK] = repo.scm.get_rev()
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dvc/dependency/repo.py b/dvc/dependency/repo.py
--- a/dvc/dependency/repo.py
+++ b/dvc/dependency/repo.py
@@ -6,7 +6,9 @@
from funcy import merge
from .local import DependencyLOCAL
+from dvc.external_repo import cached_clone
from dvc.external_repo import external_repo
+from dvc.exceptions import NotDvcRepoError
from dvc.exceptions import OutputNotFoundError
from dvc.exceptions import PathMissingError
from dvc.utils.fs import fs_copy
@@ -75,27 +77,35 @@
return out
@staticmethod
- def _is_git_file(repo, path):
- if not os.path.isabs(path):
- try:
- output = repo.find_out_by_relpath(path)
- if not output.use_cache:
- return True
- except OutputNotFoundError:
- return True
- return False
+ def _is_git_file(repo_dir, path):
+ from dvc.repo import Repo
+
+ if os.path.isabs(path):
+ return False
+
+ try:
+ repo = Repo(repo_dir)
+ except NotDvcRepoError:
+ return True
+
+ try:
+ output = repo.find_out_by_relpath(path)
+ return not output.use_cache
+ except OutputNotFoundError:
+ return True
+ finally:
+ repo.close()
def _copy_if_git_file(self, to_path):
src_path = self.def_path
- with self._make_repo(
- cache_dir=self.repo.cache.local.cache_dir
- ) as repo:
- if not self._is_git_file(repo, src_path):
- return False
+ repo_dir = cached_clone(**self.def_repo)
+
+ if not self._is_git_file(repo_dir, src_path):
+ return False
- src_full_path = os.path.join(repo.root_dir, src_path)
- dst_full_path = os.path.abspath(to_path)
- fs_copy(src_full_path, dst_full_path)
+ src_full_path = os.path.join(repo_dir, src_path)
+ dst_full_path = os.path.abspath(to_path)
+ fs_copy(src_full_path, dst_full_path)
return True
def download(self, to):
diff --git a/dvc/external_repo.py b/dvc/external_repo.py
--- a/dvc/external_repo.py
+++ b/dvc/external_repo.py
@@ -33,18 +33,20 @@
repo.close()
-def _external_repo(url=None, rev=None, cache_dir=None):
- from dvc.config import Config
- from dvc.cache import CacheConfig
- from dvc.repo import Repo
+def cached_clone(url, rev=None, **_ignored_kwargs):
+ """Clone an external git repo to a temporary directory.
- key = (url, rev, cache_dir)
- if key in REPO_CACHE:
- return REPO_CACHE[key]
+ Returns the path to a local temporary directory with the specified
+ revision checked out.
+
+ Uses the REPO_CACHE to avoid accessing the remote server again if
+ cloning from the same URL twice in the same session.
+
+ """
new_path = tempfile.mkdtemp("dvc-erepo")
- # Copy and adjust existing clone
+ # Copy and adjust existing clean clone
if (url, None, None) in REPO_CACHE:
old_path = REPO_CACHE[url, None, None]
@@ -59,13 +61,24 @@
copy_tree(new_path, clean_clone_path)
REPO_CACHE[url, None, None] = clean_clone_path
- # Adjust new clone/copy to fit rev and cache_dir
-
- # Checkout needs to be done first because current branch might not be
- # DVC repository
+ # Check out the specified revision
if rev is not None:
_git_checkout(new_path, rev)
+ return new_path
+
+
+def _external_repo(url=None, rev=None, cache_dir=None):
+ from dvc.config import Config
+ from dvc.cache import CacheConfig
+ from dvc.repo import Repo
+
+ key = (url, rev, cache_dir)
+ if key in REPO_CACHE:
+ return REPO_CACHE[key]
+
+ new_path = cached_clone(url, rev=rev)
+
repo = Repo(new_path)
try:
# check if the URL is local and no default remote is present
| {"golden_diff": "diff --git a/dvc/dependency/repo.py b/dvc/dependency/repo.py\n--- a/dvc/dependency/repo.py\n+++ b/dvc/dependency/repo.py\n@@ -6,7 +6,9 @@\n from funcy import merge\n \n from .local import DependencyLOCAL\n+from dvc.external_repo import cached_clone\n from dvc.external_repo import external_repo\n+from dvc.exceptions import NotDvcRepoError\n from dvc.exceptions import OutputNotFoundError\n from dvc.exceptions import PathMissingError\n from dvc.utils.fs import fs_copy\n@@ -75,27 +77,35 @@\n return out\n \n @staticmethod\n- def _is_git_file(repo, path):\n- if not os.path.isabs(path):\n- try:\n- output = repo.find_out_by_relpath(path)\n- if not output.use_cache:\n- return True\n- except OutputNotFoundError:\n- return True\n- return False\n+ def _is_git_file(repo_dir, path):\n+ from dvc.repo import Repo\n+\n+ if os.path.isabs(path):\n+ return False\n+\n+ try:\n+ repo = Repo(repo_dir)\n+ except NotDvcRepoError:\n+ return True\n+\n+ try:\n+ output = repo.find_out_by_relpath(path)\n+ return not output.use_cache\n+ except OutputNotFoundError:\n+ return True\n+ finally:\n+ repo.close()\n \n def _copy_if_git_file(self, to_path):\n src_path = self.def_path\n- with self._make_repo(\n- cache_dir=self.repo.cache.local.cache_dir\n- ) as repo:\n- if not self._is_git_file(repo, src_path):\n- return False\n+ repo_dir = cached_clone(**self.def_repo)\n+\n+ if not self._is_git_file(repo_dir, src_path):\n+ return False\n \n- src_full_path = os.path.join(repo.root_dir, src_path)\n- dst_full_path = os.path.abspath(to_path)\n- fs_copy(src_full_path, dst_full_path)\n+ src_full_path = os.path.join(repo_dir, src_path)\n+ dst_full_path = os.path.abspath(to_path)\n+ fs_copy(src_full_path, dst_full_path)\n return True\n \n def download(self, to):\ndiff --git a/dvc/external_repo.py b/dvc/external_repo.py\n--- a/dvc/external_repo.py\n+++ b/dvc/external_repo.py\n@@ -33,18 +33,20 @@\n repo.close()\n \n \n-def _external_repo(url=None, rev=None, cache_dir=None):\n- from dvc.config import Config\n- from dvc.cache import CacheConfig\n- from dvc.repo import Repo\n+def cached_clone(url, rev=None, **_ignored_kwargs):\n+ \"\"\"Clone an external git repo to a temporary directory.\n \n- key = (url, rev, cache_dir)\n- if key in REPO_CACHE:\n- return REPO_CACHE[key]\n+ Returns the path to a local temporary directory with the specified\n+ revision checked out.\n+\n+ Uses the REPO_CACHE to avoid accessing the remote server again if\n+ cloning from the same URL twice in the same session.\n+\n+ \"\"\"\n \n new_path = tempfile.mkdtemp(\"dvc-erepo\")\n \n- # Copy and adjust existing clone\n+ # Copy and adjust existing clean clone\n if (url, None, None) in REPO_CACHE:\n old_path = REPO_CACHE[url, None, None]\n \n@@ -59,13 +61,24 @@\n copy_tree(new_path, clean_clone_path)\n REPO_CACHE[url, None, None] = clean_clone_path\n \n- # Adjust new clone/copy to fit rev and cache_dir\n-\n- # Checkout needs to be done first because current branch might not be\n- # DVC repository\n+ # Check out the specified revision\n if rev is not None:\n _git_checkout(new_path, rev)\n \n+ return new_path\n+\n+\n+def _external_repo(url=None, rev=None, cache_dir=None):\n+ from dvc.config import Config\n+ from dvc.cache import CacheConfig\n+ from dvc.repo import Repo\n+\n+ key = (url, rev, cache_dir)\n+ if key in REPO_CACHE:\n+ return REPO_CACHE[key]\n+\n+ new_path = cached_clone(url, rev=rev)\n+\n repo = Repo(new_path)\n try:\n # check if the URL is local and no default remote is present\n", "issue": "import: Handle non-DVC Git repositories \nAfter 
https://github.com/iterative/dvc/pull/2889, `dvc import` can also import files that are tracked by Git but not DVC. DVC still requires that they come from a DVC repository rather than any Git repository, although there is no longer need for that.\n", "before_files": [{"content": "import os\nimport tempfile\nfrom contextlib import contextmanager\nfrom distutils.dir_util import copy_tree\n\nfrom funcy import retry\n\nfrom dvc.config import NoRemoteError, ConfigError\nfrom dvc.exceptions import NoRemoteInExternalRepoError\nfrom dvc.remote import RemoteConfig\nfrom dvc.exceptions import NoOutputInExternalRepoError\nfrom dvc.exceptions import OutputNotFoundError\nfrom dvc.utils.fs import remove\n\n\nREPO_CACHE = {}\n\n\n@contextmanager\ndef external_repo(url=None, rev=None, rev_lock=None, cache_dir=None):\n from dvc.repo import Repo\n\n path = _external_repo(url=url, rev=rev_lock or rev, cache_dir=cache_dir)\n repo = Repo(path)\n try:\n yield repo\n except NoRemoteError:\n raise NoRemoteInExternalRepoError(url)\n except OutputNotFoundError as exc:\n if exc.repo is repo:\n raise NoOutputInExternalRepoError(exc.output, repo.root_dir, url)\n raise\n repo.close()\n\n\ndef _external_repo(url=None, rev=None, cache_dir=None):\n from dvc.config import Config\n from dvc.cache import CacheConfig\n from dvc.repo import Repo\n\n key = (url, rev, cache_dir)\n if key in REPO_CACHE:\n return REPO_CACHE[key]\n\n new_path = tempfile.mkdtemp(\"dvc-erepo\")\n\n # Copy and adjust existing clone\n if (url, None, None) in REPO_CACHE:\n old_path = REPO_CACHE[url, None, None]\n\n # This one unlike shutil.copytree() works with an existing dir\n copy_tree(old_path, new_path)\n else:\n # Create a new clone\n _clone_repo(url, new_path)\n\n # Save clean clone dir so that we will have access to a default branch\n clean_clone_path = tempfile.mkdtemp(\"dvc-erepo\")\n copy_tree(new_path, clean_clone_path)\n REPO_CACHE[url, None, None] = clean_clone_path\n\n # Adjust new clone/copy to fit rev and cache_dir\n\n # Checkout needs to be done first because current branch might not be\n # DVC repository\n if rev is not None:\n _git_checkout(new_path, rev)\n\n repo = Repo(new_path)\n try:\n # check if the URL is local and no default remote is present\n # add default remote pointing to the original repo's cache location\n if os.path.isdir(url):\n rconfig = RemoteConfig(repo.config)\n if not _default_remote_set(rconfig):\n original_repo = Repo(url)\n try:\n rconfig.add(\n \"auto-generated-upstream\",\n original_repo.cache.local.cache_dir,\n default=True,\n level=Config.LEVEL_LOCAL,\n )\n finally:\n original_repo.close()\n\n if cache_dir is not None:\n cache_config = CacheConfig(repo.config)\n cache_config.set_dir(cache_dir, level=Config.LEVEL_LOCAL)\n finally:\n # Need to close/reopen repo to force config reread\n repo.close()\n\n REPO_CACHE[key] = new_path\n return new_path\n\n\ndef _git_checkout(repo_path, revision):\n from dvc.scm import Git\n\n git = Git(repo_path)\n try:\n git.checkout(revision)\n finally:\n git.close()\n\n\ndef clean_repos():\n # Outside code should not see cache while we are removing\n repo_paths = list(REPO_CACHE.values())\n REPO_CACHE.clear()\n\n for path in repo_paths:\n _remove(path)\n\n\ndef _remove(path):\n if os.name == \"nt\":\n # git.exe may hang for a while not permitting to remove temp dir\n os_retry = retry(5, errors=OSError, timeout=0.1)\n os_retry(remove)(path)\n else:\n remove(path)\n\n\ndef _clone_repo(url, path):\n from dvc.scm.git import Git\n\n git = Git.clone(url, path)\n git.close()\n\n\ndef 
_default_remote_set(rconfig):\n \"\"\"\n Checks if default remote config is present.\n Args:\n rconfig: a remote config\n\n Returns:\n True if the default remote config is set, else False\n \"\"\"\n try:\n rconfig.get_default()\n return True\n except ConfigError:\n return False\n", "path": "dvc/external_repo.py"}, {"content": "import copy\nimport os\nfrom contextlib import contextmanager\nfrom dvc.utils.compat import FileNotFoundError\n\nfrom funcy import merge\n\nfrom .local import DependencyLOCAL\nfrom dvc.external_repo import external_repo\nfrom dvc.exceptions import OutputNotFoundError\nfrom dvc.exceptions import PathMissingError\nfrom dvc.utils.fs import fs_copy\n\n\nclass DependencyREPO(DependencyLOCAL):\n PARAM_REPO = \"repo\"\n PARAM_URL = \"url\"\n PARAM_REV = \"rev\"\n PARAM_REV_LOCK = \"rev_lock\"\n\n REPO_SCHEMA = {PARAM_URL: str, PARAM_REV: str, PARAM_REV_LOCK: str}\n\n def __init__(self, def_repo, stage, *args, **kwargs):\n self.def_repo = def_repo\n super(DependencyREPO, self).__init__(stage, *args, **kwargs)\n\n def _parse_path(self, remote, path):\n return None\n\n @property\n def is_in_repo(self):\n return False\n\n @property\n def repo_pair(self):\n d = self.def_repo\n return d[self.PARAM_URL], d[self.PARAM_REV_LOCK] or d[self.PARAM_REV]\n\n def __str__(self):\n return \"{} ({})\".format(self.def_path, self.def_repo[self.PARAM_URL])\n\n @contextmanager\n def _make_repo(self, **overrides):\n with external_repo(**merge(self.def_repo, overrides)) as repo:\n yield repo\n\n def status(self):\n with self._make_repo() as repo:\n current = repo.find_out_by_relpath(self.def_path).info\n\n with self._make_repo(rev_lock=None) as repo:\n updated = repo.find_out_by_relpath(self.def_path).info\n\n if current != updated:\n return {str(self): \"update available\"}\n\n return {}\n\n def save(self):\n pass\n\n def dumpd(self):\n return {self.PARAM_PATH: self.def_path, self.PARAM_REPO: self.def_repo}\n\n def fetch(self):\n with self._make_repo(\n cache_dir=self.repo.cache.local.cache_dir\n ) as repo:\n self.def_repo[self.PARAM_REV_LOCK] = repo.scm.get_rev()\n\n out = repo.find_out_by_relpath(self.def_path)\n with repo.state:\n repo.cloud.pull(out.get_used_cache())\n\n return out\n\n @staticmethod\n def _is_git_file(repo, path):\n if not os.path.isabs(path):\n try:\n output = repo.find_out_by_relpath(path)\n if not output.use_cache:\n return True\n except OutputNotFoundError:\n return True\n return False\n\n def _copy_if_git_file(self, to_path):\n src_path = self.def_path\n with self._make_repo(\n cache_dir=self.repo.cache.local.cache_dir\n ) as repo:\n if not self._is_git_file(repo, src_path):\n return False\n\n src_full_path = os.path.join(repo.root_dir, src_path)\n dst_full_path = os.path.abspath(to_path)\n fs_copy(src_full_path, dst_full_path)\n return True\n\n def download(self, to):\n try:\n if self._copy_if_git_file(to.fspath):\n return\n\n out = self.fetch()\n to.info = copy.copy(out.info)\n to.checkout()\n except (FileNotFoundError):\n raise PathMissingError(\n self.def_path, self.def_repo[self.PARAM_URL]\n )\n\n def update(self):\n with self._make_repo(rev_lock=None) as repo:\n self.def_repo[self.PARAM_REV_LOCK] = repo.scm.get_rev()\n", "path": "dvc/dependency/repo.py"}], "after_files": [{"content": "import os\nimport tempfile\nfrom contextlib import contextmanager\nfrom distutils.dir_util import copy_tree\n\nfrom funcy import retry\n\nfrom dvc.config import NoRemoteError, ConfigError\nfrom dvc.exceptions import NoRemoteInExternalRepoError\nfrom dvc.remote import 
RemoteConfig\nfrom dvc.exceptions import NoOutputInExternalRepoError\nfrom dvc.exceptions import OutputNotFoundError\nfrom dvc.utils.fs import remove\n\n\nREPO_CACHE = {}\n\n\n@contextmanager\ndef external_repo(url=None, rev=None, rev_lock=None, cache_dir=None):\n from dvc.repo import Repo\n\n path = _external_repo(url=url, rev=rev_lock or rev, cache_dir=cache_dir)\n repo = Repo(path)\n try:\n yield repo\n except NoRemoteError:\n raise NoRemoteInExternalRepoError(url)\n except OutputNotFoundError as exc:\n if exc.repo is repo:\n raise NoOutputInExternalRepoError(exc.output, repo.root_dir, url)\n raise\n repo.close()\n\n\ndef cached_clone(url, rev=None, **_ignored_kwargs):\n \"\"\"Clone an external git repo to a temporary directory.\n\n Returns the path to a local temporary directory with the specified\n revision checked out.\n\n Uses the REPO_CACHE to avoid accessing the remote server again if\n cloning from the same URL twice in the same session.\n\n \"\"\"\n\n new_path = tempfile.mkdtemp(\"dvc-erepo\")\n\n # Copy and adjust existing clean clone\n if (url, None, None) in REPO_CACHE:\n old_path = REPO_CACHE[url, None, None]\n\n # This one unlike shutil.copytree() works with an existing dir\n copy_tree(old_path, new_path)\n else:\n # Create a new clone\n _clone_repo(url, new_path)\n\n # Save clean clone dir so that we will have access to a default branch\n clean_clone_path = tempfile.mkdtemp(\"dvc-erepo\")\n copy_tree(new_path, clean_clone_path)\n REPO_CACHE[url, None, None] = clean_clone_path\n\n # Check out the specified revision\n if rev is not None:\n _git_checkout(new_path, rev)\n\n return new_path\n\n\ndef _external_repo(url=None, rev=None, cache_dir=None):\n from dvc.config import Config\n from dvc.cache import CacheConfig\n from dvc.repo import Repo\n\n key = (url, rev, cache_dir)\n if key in REPO_CACHE:\n return REPO_CACHE[key]\n\n new_path = cached_clone(url, rev=rev)\n\n repo = Repo(new_path)\n try:\n # check if the URL is local and no default remote is present\n # add default remote pointing to the original repo's cache location\n if os.path.isdir(url):\n rconfig = RemoteConfig(repo.config)\n if not _default_remote_set(rconfig):\n original_repo = Repo(url)\n try:\n rconfig.add(\n \"auto-generated-upstream\",\n original_repo.cache.local.cache_dir,\n default=True,\n level=Config.LEVEL_LOCAL,\n )\n finally:\n original_repo.close()\n\n if cache_dir is not None:\n cache_config = CacheConfig(repo.config)\n cache_config.set_dir(cache_dir, level=Config.LEVEL_LOCAL)\n finally:\n # Need to close/reopen repo to force config reread\n repo.close()\n\n REPO_CACHE[key] = new_path\n return new_path\n\n\ndef _git_checkout(repo_path, revision):\n from dvc.scm import Git\n\n git = Git(repo_path)\n try:\n git.checkout(revision)\n finally:\n git.close()\n\n\ndef clean_repos():\n # Outside code should not see cache while we are removing\n repo_paths = list(REPO_CACHE.values())\n REPO_CACHE.clear()\n\n for path in repo_paths:\n _remove(path)\n\n\ndef _remove(path):\n if os.name == \"nt\":\n # git.exe may hang for a while not permitting to remove temp dir\n os_retry = retry(5, errors=OSError, timeout=0.1)\n os_retry(remove)(path)\n else:\n remove(path)\n\n\ndef _clone_repo(url, path):\n from dvc.scm.git import Git\n\n git = Git.clone(url, path)\n git.close()\n\n\ndef _default_remote_set(rconfig):\n \"\"\"\n Checks if default remote config is present.\n Args:\n rconfig: a remote config\n\n Returns:\n True if the default remote config is set, else False\n \"\"\"\n try:\n rconfig.get_default()\n return 
True\n except ConfigError:\n return False\n", "path": "dvc/external_repo.py"}, {"content": "import copy\nimport os\nfrom contextlib import contextmanager\nfrom dvc.utils.compat import FileNotFoundError\n\nfrom funcy import merge\n\nfrom .local import DependencyLOCAL\nfrom dvc.external_repo import cached_clone\nfrom dvc.external_repo import external_repo\nfrom dvc.exceptions import NotDvcRepoError\nfrom dvc.exceptions import OutputNotFoundError\nfrom dvc.exceptions import PathMissingError\nfrom dvc.utils.fs import fs_copy\n\n\nclass DependencyREPO(DependencyLOCAL):\n PARAM_REPO = \"repo\"\n PARAM_URL = \"url\"\n PARAM_REV = \"rev\"\n PARAM_REV_LOCK = \"rev_lock\"\n\n REPO_SCHEMA = {PARAM_URL: str, PARAM_REV: str, PARAM_REV_LOCK: str}\n\n def __init__(self, def_repo, stage, *args, **kwargs):\n self.def_repo = def_repo\n super(DependencyREPO, self).__init__(stage, *args, **kwargs)\n\n def _parse_path(self, remote, path):\n return None\n\n @property\n def is_in_repo(self):\n return False\n\n @property\n def repo_pair(self):\n d = self.def_repo\n return d[self.PARAM_URL], d[self.PARAM_REV_LOCK] or d[self.PARAM_REV]\n\n def __str__(self):\n return \"{} ({})\".format(self.def_path, self.def_repo[self.PARAM_URL])\n\n @contextmanager\n def _make_repo(self, **overrides):\n with external_repo(**merge(self.def_repo, overrides)) as repo:\n yield repo\n\n def status(self):\n with self._make_repo() as repo:\n current = repo.find_out_by_relpath(self.def_path).info\n\n with self._make_repo(rev_lock=None) as repo:\n updated = repo.find_out_by_relpath(self.def_path).info\n\n if current != updated:\n return {str(self): \"update available\"}\n\n return {}\n\n def save(self):\n pass\n\n def dumpd(self):\n return {self.PARAM_PATH: self.def_path, self.PARAM_REPO: self.def_repo}\n\n def fetch(self):\n with self._make_repo(\n cache_dir=self.repo.cache.local.cache_dir\n ) as repo:\n self.def_repo[self.PARAM_REV_LOCK] = repo.scm.get_rev()\n\n out = repo.find_out_by_relpath(self.def_path)\n with repo.state:\n repo.cloud.pull(out.get_used_cache())\n\n return out\n\n @staticmethod\n def _is_git_file(repo_dir, path):\n from dvc.repo import Repo\n\n if os.path.isabs(path):\n return False\n\n try:\n repo = Repo(repo_dir)\n except NotDvcRepoError:\n return True\n\n try:\n output = repo.find_out_by_relpath(path)\n return not output.use_cache\n except OutputNotFoundError:\n return True\n finally:\n repo.close()\n\n def _copy_if_git_file(self, to_path):\n src_path = self.def_path\n repo_dir = cached_clone(**self.def_repo)\n\n if not self._is_git_file(repo_dir, src_path):\n return False\n\n src_full_path = os.path.join(repo_dir, src_path)\n dst_full_path = os.path.abspath(to_path)\n fs_copy(src_full_path, dst_full_path)\n return True\n\n def download(self, to):\n try:\n if self._copy_if_git_file(to.fspath):\n return\n\n out = self.fetch()\n to.info = copy.copy(out.info)\n to.checkout()\n except (FileNotFoundError):\n raise PathMissingError(\n self.def_path, self.def_repo[self.PARAM_URL]\n )\n\n def update(self):\n with self._make_repo(rev_lock=None) as repo:\n self.def_repo[self.PARAM_REV_LOCK] = repo.scm.get_rev()\n", "path": "dvc/dependency/repo.py"}]} | 2,652 | 1,001 |
gh_patches_debug_607 | rasdani/github-patches | git_diff | pex-tool__pex-1446 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.49
On the docket:
+ [ ] Avoid re-using old ~/.pex/code/ caches. #1444
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.48"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.48"
+__version__ = "2.1.49"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.48\"\n+__version__ = \"2.1.49\"\n", "issue": "Release 2.1.49\nOn the docket:\r\n+ [ ] Avoid re-using old ~/.pex/code/ caches. #1444 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.48\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.49\"\n", "path": "pex/version.py"}]} | 342 | 96 |
gh_patches_debug_20339 | rasdani/github-patches | git_diff | docker__docker-py-1581 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
images.build fails when using the tag argument
When using the `tag` kwarg on `client.images.build`, the call always raises a `BuildError`.
The `api.build` output looks like this:
```
['{"stream":"Step 1/4 : FROM scratch\\n"}\r\n',
'{"stream":" ---\\u003e \\n"}\r\n',
'{"stream":"Step 2/4 : LABEL com.nvidia.volumes.needed \\"nvidia_driver\\"\\n"}\r\n',
'{"stream":" ---\\u003e Using cache\\n"}\r\n',
'{"stream":" ---\\u003e 36ec3942c5f5\\n"}\r\n',
'{"stream":"Step 3/4 : SHELL /usr/local/nvidia/bin/nvidia-smi\\n"}\r\n',
'{"stream":" ---\\u003e Running in f875d54529eb\\n"}\r\n',
'{"stream":" ---\\u003e b750cf87aed6\\n"}\r\n',
'{"stream":"Step 4/4 : CMD -L\\n"}\r\n',
'{"stream":" ---\\u003e Running in f3f9e21b8171\\n"}\r\n',
'{"stream":" ---\\u003e 61c46a80da73\\n"}\r\n',
'{"stream":"Successfully built 61c46a80da73\\n"}\r\n',
'{"stream":"Successfully tagged 5b3bc129-d296-4b7d-872c-bc7117d4f327:latest\\n"}\r\n']
```
The problem can be tracked down to [here](https://github.com/docker/docker-py/blob/2.2.1-release/docker/models/images.py#L164-L167). The code assumes that the "Successfully built" line is the last event in the stream; however, when a tag is used the build ends with a "Successfully tagged" line instead, so that assumption no longer holds.
Two ideas are to either:
1. Search the last two lines, or
2. In case that is not enough, search all of the lines for "Successfully built" (a minimal sketch of this approach follows the issue text below).
I'm using docker server 17.05.0-ce-rc1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/models/images.py`
Content:
```
1 import re
2
3 import six
4
5 from ..api import APIClient
6 from ..errors import BuildError
7 from ..utils.json_stream import json_stream
8 from .resource import Collection, Model
9
10
11 class Image(Model):
12 """
13 An image on the server.
14 """
15 def __repr__(self):
16 return "<%s: '%s'>" % (self.__class__.__name__, "', '".join(self.tags))
17
18 @property
19 def labels(self):
20 """
21 The labels of an image as dictionary.
22 """
23 result = self.attrs['Config'].get('Labels')
24 return result or {}
25
26 @property
27 def short_id(self):
28 """
29 The ID of the image truncated to 10 characters, plus the ``sha256:``
30 prefix.
31 """
32 if self.id.startswith('sha256:'):
33 return self.id[:17]
34 return self.id[:10]
35
36 @property
37 def tags(self):
38 """
39 The image's tags.
40 """
41 tags = self.attrs.get('RepoTags')
42 if tags is None:
43 tags = []
44 return [tag for tag in tags if tag != '<none>:<none>']
45
46 def history(self):
47 """
48 Show the history of an image.
49
50 Returns:
51 (str): The history of the image.
52
53 Raises:
54 :py:class:`docker.errors.APIError`
55 If the server returns an error.
56 """
57 return self.client.api.history(self.id)
58
59 def save(self):
60 """
61 Get a tarball of an image. Similar to the ``docker save`` command.
62
63 Returns:
64 (urllib3.response.HTTPResponse object): The response from the
65 daemon.
66
67 Raises:
68 :py:class:`docker.errors.APIError`
69 If the server returns an error.
70
71 Example:
72
73 >>> image = cli.images.get("fedora:latest")
74 >>> resp = image.save()
75 >>> f = open('/tmp/fedora-latest.tar', 'w')
76 >>> for chunk in resp.stream():
77 >>> f.write(chunk)
78 >>> f.close()
79 """
80 return self.client.api.get_image(self.id)
81
82 def tag(self, repository, tag=None, **kwargs):
83 """
84 Tag this image into a repository. Similar to the ``docker tag``
85 command.
86
87 Args:
88 repository (str): The repository to set for the tag
89 tag (str): The tag name
90 force (bool): Force
91
92 Raises:
93 :py:class:`docker.errors.APIError`
94 If the server returns an error.
95
96 Returns:
97 (bool): ``True`` if successful
98 """
99 self.client.api.tag(self.id, repository, tag=tag, **kwargs)
100
101
102 class ImageCollection(Collection):
103 model = Image
104
105 def build(self, **kwargs):
106 """
107 Build an image and return it. Similar to the ``docker build``
108 command. Either ``path`` or ``fileobj`` must be set.
109
110 If you have a tar file for the Docker build context (including a
111 Dockerfile) already, pass a readable file-like object to ``fileobj``
112 and also pass ``custom_context=True``. If the stream is compressed
113 also, set ``encoding`` to the correct value (e.g ``gzip``).
114
115 If you want to get the raw output of the build, use the
116 :py:meth:`~docker.api.build.BuildApiMixin.build` method in the
117 low-level API.
118
119 Args:
120 path (str): Path to the directory containing the Dockerfile
121 fileobj: A file object to use as the Dockerfile. (Or a file-like
122 object)
123 tag (str): A tag to add to the final image
124 quiet (bool): Whether to return the status
125 nocache (bool): Don't use the cache when set to ``True``
126 rm (bool): Remove intermediate containers. The ``docker build``
127 command now defaults to ``--rm=true``, but we have kept the old
128 default of `False` to preserve backward compatibility
129 stream (bool): *Deprecated for API version > 1.8 (always True)*.
130 Return a blocking generator you can iterate over to retrieve
131 build output as it happens
132 timeout (int): HTTP timeout
133 custom_context (bool): Optional if using ``fileobj``
134 encoding (str): The encoding for a stream. Set to ``gzip`` for
135 compressing
136 pull (bool): Downloads any updates to the FROM image in Dockerfiles
137 forcerm (bool): Always remove intermediate containers, even after
138 unsuccessful builds
139 dockerfile (str): path within the build context to the Dockerfile
140 buildargs (dict): A dictionary of build arguments
141 container_limits (dict): A dictionary of limits applied to each
142 container created by the build process. Valid keys:
143
144 - memory (int): set memory limit for build
145 - memswap (int): Total memory (memory + swap), -1 to disable
146 swap
147 - cpushares (int): CPU shares (relative weight)
148 - cpusetcpus (str): CPUs in which to allow execution, e.g.,
149 ``"0-3"``, ``"0,1"``
150 decode (bool): If set to ``True``, the returned stream will be
151 decoded into dicts on the fly. Default ``False``.
152 cache_from (list): A list of images used for build cache
153 resolution.
154
155 Returns:
156 (:py:class:`Image`): The built image.
157
158 Raises:
159 :py:class:`docker.errors.BuildError`
160 If there is an error during the build.
161 :py:class:`docker.errors.APIError`
162 If the server returns any other error.
163 ``TypeError``
164 If neither ``path`` nor ``fileobj`` is specified.
165 """
166 resp = self.client.api.build(**kwargs)
167 if isinstance(resp, six.string_types):
168 return self.get(resp)
169 events = list(json_stream(resp))
170 if not events:
171 return BuildError('Unknown')
172 event = events[-1]
173 if 'stream' in event:
174 match = re.search(r'(Successfully built |sha256:)([0-9a-f]+)',
175 event.get('stream', ''))
176 if match:
177 image_id = match.group(2)
178 return self.get(image_id)
179
180 raise BuildError(event.get('error') or event)
181
182 def get(self, name):
183 """
184 Gets an image.
185
186 Args:
187 name (str): The name of the image.
188
189 Returns:
190 (:py:class:`Image`): The image.
191
192 Raises:
193 :py:class:`docker.errors.ImageNotFound` If the image does not
194 exist.
195 :py:class:`docker.errors.APIError`
196 If the server returns an error.
197 """
198 return self.prepare_model(self.client.api.inspect_image(name))
199
200 def list(self, name=None, all=False, filters=None):
201 """
202 List images on the server.
203
204 Args:
205 name (str): Only show images belonging to the repository ``name``
206 all (bool): Show intermediate image layers. By default, these are
207 filtered out.
208 filters (dict): Filters to be processed on the image list.
209 Available filters:
210 - ``dangling`` (bool)
211 - ``label`` (str): format either ``key`` or ``key=value``
212
213 Returns:
214 (list of :py:class:`Image`): The images.
215
216 Raises:
217 :py:class:`docker.errors.APIError`
218 If the server returns an error.
219 """
220 resp = self.client.api.images(name=name, all=all, filters=filters)
221 return [self.prepare_model(r) for r in resp]
222
223 def load(self, data):
224 """
225 Load an image that was previously saved using
226 :py:meth:`~docker.models.images.Image.save` (or ``docker save``).
227 Similar to ``docker load``.
228
229 Args:
230 data (binary): Image data to be loaded.
231
232 Raises:
233 :py:class:`docker.errors.APIError`
234 If the server returns an error.
235 """
236 return self.client.api.load_image(data)
237
238 def pull(self, name, **kwargs):
239 """
240 Pull an image of the given name and return it. Similar to the
241 ``docker pull`` command.
242
243 If you want to get the raw pull output, use the
244 :py:meth:`~docker.api.image.ImageApiMixin.pull` method in the
245 low-level API.
246
247 Args:
248 repository (str): The repository to pull
249 tag (str): The tag to pull
250 insecure_registry (bool): Use an insecure registry
251 auth_config (dict): Override the credentials that
252 :py:meth:`~docker.client.DockerClient.login` has set for
253 this request. ``auth_config`` should contain the ``username``
254 and ``password`` keys to be valid.
255
256 Returns:
257 (:py:class:`Image`): The image that has been pulled.
258
259 Raises:
260 :py:class:`docker.errors.APIError`
261 If the server returns an error.
262
263 Example:
264
265 >>> image = client.images.pull('busybox')
266 """
267 self.client.api.pull(name, **kwargs)
268 return self.get(name)
269
270 def push(self, repository, tag=None, **kwargs):
271 return self.client.api.push(repository, tag=tag, **kwargs)
272 push.__doc__ = APIClient.push.__doc__
273
274 def remove(self, *args, **kwargs):
275 self.client.api.remove_image(*args, **kwargs)
276 remove.__doc__ = APIClient.remove_image.__doc__
277
278 def search(self, *args, **kwargs):
279 return self.client.api.search(*args, **kwargs)
280 search.__doc__ = APIClient.search.__doc__
281
282 def prune(self, filters=None):
283 return self.client.api.prune_images(filters=filters)
284 prune.__doc__ = APIClient.prune_images.__doc__
285
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/models/images.py b/docker/models/images.py
--- a/docker/models/images.py
+++ b/docker/models/images.py
@@ -166,18 +166,18 @@
resp = self.client.api.build(**kwargs)
if isinstance(resp, six.string_types):
return self.get(resp)
- events = list(json_stream(resp))
- if not events:
- return BuildError('Unknown')
- event = events[-1]
- if 'stream' in event:
- match = re.search(r'(Successfully built |sha256:)([0-9a-f]+)',
- event.get('stream', ''))
- if match:
- image_id = match.group(2)
- return self.get(image_id)
-
- raise BuildError(event.get('error') or event)
+ for chunk in json_stream(resp):
+ if 'error' in chunk:
+ raise BuildError(chunk['error'])
+ break
+ if 'stream' in chunk:
+ match = re.search(r'(Successfully built |sha256:)([0-9a-f]+)',
+ chunk['stream'])
+ if match:
+ image_id = match.group(2)
+ return self.get(image_id)
+
+ return BuildError('Unknown')
def get(self, name):
"""
| {"golden_diff": "diff --git a/docker/models/images.py b/docker/models/images.py\n--- a/docker/models/images.py\n+++ b/docker/models/images.py\n@@ -166,18 +166,18 @@\n resp = self.client.api.build(**kwargs)\n if isinstance(resp, six.string_types):\n return self.get(resp)\n- events = list(json_stream(resp))\n- if not events:\n- return BuildError('Unknown')\n- event = events[-1]\n- if 'stream' in event:\n- match = re.search(r'(Successfully built |sha256:)([0-9a-f]+)',\n- event.get('stream', ''))\n- if match:\n- image_id = match.group(2)\n- return self.get(image_id)\n-\n- raise BuildError(event.get('error') or event)\n+ for chunk in json_stream(resp):\n+ if 'error' in chunk:\n+ raise BuildError(chunk['error'])\n+ break\n+ if 'stream' in chunk:\n+ match = re.search(r'(Successfully built |sha256:)([0-9a-f]+)',\n+ chunk['stream'])\n+ if match:\n+ image_id = match.group(2)\n+ return self.get(image_id)\n+\n+ return BuildError('Unknown')\n \n def get(self, name):\n \"\"\"\n", "issue": "images.build fails when using tag argument.\nWhen using the `tag` kwarg on `client.images.build`, it will always throw a Build Error.\r\n\r\nThe api.build output looks like this\r\n\r\n```\r\n['{\"stream\":\"Step 1/4 : FROM scratch\\\\n\"}\\r\\n',\r\n '{\"stream\":\" ---\\\\u003e \\\\n\"}\\r\\n',\r\n '{\"stream\":\"Step 2/4 : LABEL com.nvidia.volumes.needed \\\\\"nvidia_driver\\\\\"\\\\n\"}\\r\\n',\r\n '{\"stream\":\" ---\\\\u003e Using cache\\\\n\"}\\r\\n',\r\n '{\"stream\":\" ---\\\\u003e 36ec3942c5f5\\\\n\"}\\r\\n',\r\n '{\"stream\":\"Step 3/4 : SHELL /usr/local/nvidia/bin/nvidia-smi\\\\n\"}\\r\\n',\r\n '{\"stream\":\" ---\\\\u003e Running in f875d54529eb\\\\n\"}\\r\\n',\r\n '{\"stream\":\" ---\\\\u003e b750cf87aed6\\\\n\"}\\r\\n',\r\n '{\"stream\":\"Step 4/4 : CMD -L\\\\n\"}\\r\\n',\r\n '{\"stream\":\" ---\\\\u003e Running in f3f9e21b8171\\\\n\"}\\r\\n',\r\n '{\"stream\":\" ---\\\\u003e 61c46a80da73\\\\n\"}\\r\\n',\r\n '{\"stream\":\"Successfully built 61c46a80da73\\\\n\"}\\r\\n',\r\n '{\"stream\":\"Successfully tagged 5b3bc129-d296-4b7d-872c-bc7117d4f327:latest\\\\n\"}\\r\\n']\r\n```\r\n\r\nThe problem can be tracked down to [here](https://github.com/docker/docker-py/blob/2.2.1-release/docker/models/images.py#L164-L167). The code is assuming the \"Successfully built\" comes last, however in the tag case, this is not true.\r\n\r\nTwo ideas are to either \r\n\r\n1. Search the last two lines\r\n2. 
Or in case that is not enough, search all the lines for \"Successfully built\"\r\n\r\n\r\nI'm using docker server 17.05.0-ce-rc1\r\n\n", "before_files": [{"content": "import re\n\nimport six\n\nfrom ..api import APIClient\nfrom ..errors import BuildError\nfrom ..utils.json_stream import json_stream\nfrom .resource import Collection, Model\n\n\nclass Image(Model):\n \"\"\"\n An image on the server.\n \"\"\"\n def __repr__(self):\n return \"<%s: '%s'>\" % (self.__class__.__name__, \"', '\".join(self.tags))\n\n @property\n def labels(self):\n \"\"\"\n The labels of an image as dictionary.\n \"\"\"\n result = self.attrs['Config'].get('Labels')\n return result or {}\n\n @property\n def short_id(self):\n \"\"\"\n The ID of the image truncated to 10 characters, plus the ``sha256:``\n prefix.\n \"\"\"\n if self.id.startswith('sha256:'):\n return self.id[:17]\n return self.id[:10]\n\n @property\n def tags(self):\n \"\"\"\n The image's tags.\n \"\"\"\n tags = self.attrs.get('RepoTags')\n if tags is None:\n tags = []\n return [tag for tag in tags if tag != '<none>:<none>']\n\n def history(self):\n \"\"\"\n Show the history of an image.\n\n Returns:\n (str): The history of the image.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.client.api.history(self.id)\n\n def save(self):\n \"\"\"\n Get a tarball of an image. Similar to the ``docker save`` command.\n\n Returns:\n (urllib3.response.HTTPResponse object): The response from the\n daemon.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> image = cli.images.get(\"fedora:latest\")\n >>> resp = image.save()\n >>> f = open('/tmp/fedora-latest.tar', 'w')\n >>> for chunk in resp.stream():\n >>> f.write(chunk)\n >>> f.close()\n \"\"\"\n return self.client.api.get_image(self.id)\n\n def tag(self, repository, tag=None, **kwargs):\n \"\"\"\n Tag this image into a repository. Similar to the ``docker tag``\n command.\n\n Args:\n repository (str): The repository to set for the tag\n tag (str): The tag name\n force (bool): Force\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Returns:\n (bool): ``True`` if successful\n \"\"\"\n self.client.api.tag(self.id, repository, tag=tag, **kwargs)\n\n\nclass ImageCollection(Collection):\n model = Image\n\n def build(self, **kwargs):\n \"\"\"\n Build an image and return it. Similar to the ``docker build``\n command. Either ``path`` or ``fileobj`` must be set.\n\n If you have a tar file for the Docker build context (including a\n Dockerfile) already, pass a readable file-like object to ``fileobj``\n and also pass ``custom_context=True``. If the stream is compressed\n also, set ``encoding`` to the correct value (e.g ``gzip``).\n\n If you want to get the raw output of the build, use the\n :py:meth:`~docker.api.build.BuildApiMixin.build` method in the\n low-level API.\n\n Args:\n path (str): Path to the directory containing the Dockerfile\n fileobj: A file object to use as the Dockerfile. (Or a file-like\n object)\n tag (str): A tag to add to the final image\n quiet (bool): Whether to return the status\n nocache (bool): Don't use the cache when set to ``True``\n rm (bool): Remove intermediate containers. 
The ``docker build``\n command now defaults to ``--rm=true``, but we have kept the old\n default of `False` to preserve backward compatibility\n stream (bool): *Deprecated for API version > 1.8 (always True)*.\n Return a blocking generator you can iterate over to retrieve\n build output as it happens\n timeout (int): HTTP timeout\n custom_context (bool): Optional if using ``fileobj``\n encoding (str): The encoding for a stream. Set to ``gzip`` for\n compressing\n pull (bool): Downloads any updates to the FROM image in Dockerfiles\n forcerm (bool): Always remove intermediate containers, even after\n unsuccessful builds\n dockerfile (str): path within the build context to the Dockerfile\n buildargs (dict): A dictionary of build arguments\n container_limits (dict): A dictionary of limits applied to each\n container created by the build process. Valid keys:\n\n - memory (int): set memory limit for build\n - memswap (int): Total memory (memory + swap), -1 to disable\n swap\n - cpushares (int): CPU shares (relative weight)\n - cpusetcpus (str): CPUs in which to allow execution, e.g.,\n ``\"0-3\"``, ``\"0,1\"``\n decode (bool): If set to ``True``, the returned stream will be\n decoded into dicts on the fly. Default ``False``.\n cache_from (list): A list of images used for build cache\n resolution.\n\n Returns:\n (:py:class:`Image`): The built image.\n\n Raises:\n :py:class:`docker.errors.BuildError`\n If there is an error during the build.\n :py:class:`docker.errors.APIError`\n If the server returns any other error.\n ``TypeError``\n If neither ``path`` nor ``fileobj`` is specified.\n \"\"\"\n resp = self.client.api.build(**kwargs)\n if isinstance(resp, six.string_types):\n return self.get(resp)\n events = list(json_stream(resp))\n if not events:\n return BuildError('Unknown')\n event = events[-1]\n if 'stream' in event:\n match = re.search(r'(Successfully built |sha256:)([0-9a-f]+)',\n event.get('stream', ''))\n if match:\n image_id = match.group(2)\n return self.get(image_id)\n\n raise BuildError(event.get('error') or event)\n\n def get(self, name):\n \"\"\"\n Gets an image.\n\n Args:\n name (str): The name of the image.\n\n Returns:\n (:py:class:`Image`): The image.\n\n Raises:\n :py:class:`docker.errors.ImageNotFound` If the image does not\n exist.\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.prepare_model(self.client.api.inspect_image(name))\n\n def list(self, name=None, all=False, filters=None):\n \"\"\"\n List images on the server.\n\n Args:\n name (str): Only show images belonging to the repository ``name``\n all (bool): Show intermediate image layers. 
By default, these are\n filtered out.\n filters (dict): Filters to be processed on the image list.\n Available filters:\n - ``dangling`` (bool)\n - ``label`` (str): format either ``key`` or ``key=value``\n\n Returns:\n (list of :py:class:`Image`): The images.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n resp = self.client.api.images(name=name, all=all, filters=filters)\n return [self.prepare_model(r) for r in resp]\n\n def load(self, data):\n \"\"\"\n Load an image that was previously saved using\n :py:meth:`~docker.models.images.Image.save` (or ``docker save``).\n Similar to ``docker load``.\n\n Args:\n data (binary): Image data to be loaded.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.client.api.load_image(data)\n\n def pull(self, name, **kwargs):\n \"\"\"\n Pull an image of the given name and return it. Similar to the\n ``docker pull`` command.\n\n If you want to get the raw pull output, use the\n :py:meth:`~docker.api.image.ImageApiMixin.pull` method in the\n low-level API.\n\n Args:\n repository (str): The repository to pull\n tag (str): The tag to pull\n insecure_registry (bool): Use an insecure registry\n auth_config (dict): Override the credentials that\n :py:meth:`~docker.client.DockerClient.login` has set for\n this request. ``auth_config`` should contain the ``username``\n and ``password`` keys to be valid.\n\n Returns:\n (:py:class:`Image`): The image that has been pulled.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> image = client.images.pull('busybox')\n \"\"\"\n self.client.api.pull(name, **kwargs)\n return self.get(name)\n\n def push(self, repository, tag=None, **kwargs):\n return self.client.api.push(repository, tag=tag, **kwargs)\n push.__doc__ = APIClient.push.__doc__\n\n def remove(self, *args, **kwargs):\n self.client.api.remove_image(*args, **kwargs)\n remove.__doc__ = APIClient.remove_image.__doc__\n\n def search(self, *args, **kwargs):\n return self.client.api.search(*args, **kwargs)\n search.__doc__ = APIClient.search.__doc__\n\n def prune(self, filters=None):\n return self.client.api.prune_images(filters=filters)\n prune.__doc__ = APIClient.prune_images.__doc__\n", "path": "docker/models/images.py"}], "after_files": [{"content": "import re\n\nimport six\n\nfrom ..api import APIClient\nfrom ..errors import BuildError\nfrom ..utils.json_stream import json_stream\nfrom .resource import Collection, Model\n\n\nclass Image(Model):\n \"\"\"\n An image on the server.\n \"\"\"\n def __repr__(self):\n return \"<%s: '%s'>\" % (self.__class__.__name__, \"', '\".join(self.tags))\n\n @property\n def labels(self):\n \"\"\"\n The labels of an image as dictionary.\n \"\"\"\n result = self.attrs['Config'].get('Labels')\n return result or {}\n\n @property\n def short_id(self):\n \"\"\"\n The ID of the image truncated to 10 characters, plus the ``sha256:``\n prefix.\n \"\"\"\n if self.id.startswith('sha256:'):\n return self.id[:17]\n return self.id[:10]\n\n @property\n def tags(self):\n \"\"\"\n The image's tags.\n \"\"\"\n tags = self.attrs.get('RepoTags')\n if tags is None:\n tags = []\n return [tag for tag in tags if tag != '<none>:<none>']\n\n def history(self):\n \"\"\"\n Show the history of an image.\n\n Returns:\n (str): The history of the image.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.client.api.history(self.id)\n\n def save(self):\n \"\"\"\n 
Get a tarball of an image. Similar to the ``docker save`` command.\n\n Returns:\n (urllib3.response.HTTPResponse object): The response from the\n daemon.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> image = cli.images.get(\"fedora:latest\")\n >>> resp = image.save()\n >>> f = open('/tmp/fedora-latest.tar', 'w')\n >>> for chunk in resp.stream():\n >>> f.write(chunk)\n >>> f.close()\n \"\"\"\n return self.client.api.get_image(self.id)\n\n def tag(self, repository, tag=None, **kwargs):\n \"\"\"\n Tag this image into a repository. Similar to the ``docker tag``\n command.\n\n Args:\n repository (str): The repository to set for the tag\n tag (str): The tag name\n force (bool): Force\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Returns:\n (bool): ``True`` if successful\n \"\"\"\n self.client.api.tag(self.id, repository, tag=tag, **kwargs)\n\n\nclass ImageCollection(Collection):\n model = Image\n\n def build(self, **kwargs):\n \"\"\"\n Build an image and return it. Similar to the ``docker build``\n command. Either ``path`` or ``fileobj`` must be set.\n\n If you have a tar file for the Docker build context (including a\n Dockerfile) already, pass a readable file-like object to ``fileobj``\n and also pass ``custom_context=True``. If the stream is compressed\n also, set ``encoding`` to the correct value (e.g ``gzip``).\n\n If you want to get the raw output of the build, use the\n :py:meth:`~docker.api.build.BuildApiMixin.build` method in the\n low-level API.\n\n Args:\n path (str): Path to the directory containing the Dockerfile\n fileobj: A file object to use as the Dockerfile. (Or a file-like\n object)\n tag (str): A tag to add to the final image\n quiet (bool): Whether to return the status\n nocache (bool): Don't use the cache when set to ``True``\n rm (bool): Remove intermediate containers. The ``docker build``\n command now defaults to ``--rm=true``, but we have kept the old\n default of `False` to preserve backward compatibility\n stream (bool): *Deprecated for API version > 1.8 (always True)*.\n Return a blocking generator you can iterate over to retrieve\n build output as it happens\n timeout (int): HTTP timeout\n custom_context (bool): Optional if using ``fileobj``\n encoding (str): The encoding for a stream. Set to ``gzip`` for\n compressing\n pull (bool): Downloads any updates to the FROM image in Dockerfiles\n forcerm (bool): Always remove intermediate containers, even after\n unsuccessful builds\n dockerfile (str): path within the build context to the Dockerfile\n buildargs (dict): A dictionary of build arguments\n container_limits (dict): A dictionary of limits applied to each\n container created by the build process. Valid keys:\n\n - memory (int): set memory limit for build\n - memswap (int): Total memory (memory + swap), -1 to disable\n swap\n - cpushares (int): CPU shares (relative weight)\n - cpusetcpus (str): CPUs in which to allow execution, e.g.,\n ``\"0-3\"``, ``\"0,1\"``\n decode (bool): If set to ``True``, the returned stream will be\n decoded into dicts on the fly. 
Default ``False``.\n cache_from (list): A list of images used for build cache\n resolution.\n\n Returns:\n (:py:class:`Image`): The built image.\n\n Raises:\n :py:class:`docker.errors.BuildError`\n If there is an error during the build.\n :py:class:`docker.errors.APIError`\n If the server returns any other error.\n ``TypeError``\n If neither ``path`` nor ``fileobj`` is specified.\n \"\"\"\n resp = self.client.api.build(**kwargs)\n if isinstance(resp, six.string_types):\n return self.get(resp)\n for chunk in json_stream(resp):\n if 'error' in chunk:\n raise BuildError(chunk['error'])\n break\n if 'stream' in chunk:\n match = re.search(r'(Successfully built |sha256:)([0-9a-f]+)',\n chunk['stream'])\n if match:\n image_id = match.group(2)\n return self.get(image_id)\n\n return BuildError('Unknown')\n\n def get(self, name):\n \"\"\"\n Gets an image.\n\n Args:\n name (str): The name of the image.\n\n Returns:\n (:py:class:`Image`): The image.\n\n Raises:\n :py:class:`docker.errors.ImageNotFound` If the image does not\n exist.\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.prepare_model(self.client.api.inspect_image(name))\n\n def list(self, name=None, all=False, filters=None):\n \"\"\"\n List images on the server.\n\n Args:\n name (str): Only show images belonging to the repository ``name``\n all (bool): Show intermediate image layers. By default, these are\n filtered out.\n filters (dict): Filters to be processed on the image list.\n Available filters:\n - ``dangling`` (bool)\n - ``label`` (str): format either ``key`` or ``key=value``\n\n Returns:\n (list of :py:class:`Image`): The images.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n resp = self.client.api.images(name=name, all=all, filters=filters)\n return [self.prepare_model(r) for r in resp]\n\n def load(self, data):\n \"\"\"\n Load an image that was previously saved using\n :py:meth:`~docker.models.images.Image.save` (or ``docker save``).\n Similar to ``docker load``.\n\n Args:\n data (binary): Image data to be loaded.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self.client.api.load_image(data)\n\n def pull(self, name, **kwargs):\n \"\"\"\n Pull an image of the given name and return it. Similar to the\n ``docker pull`` command.\n\n If you want to get the raw pull output, use the\n :py:meth:`~docker.api.image.ImageApiMixin.pull` method in the\n low-level API.\n\n Args:\n repository (str): The repository to pull\n tag (str): The tag to pull\n insecure_registry (bool): Use an insecure registry\n auth_config (dict): Override the credentials that\n :py:meth:`~docker.client.DockerClient.login` has set for\n this request. 
``auth_config`` should contain the ``username``\n and ``password`` keys to be valid.\n\n Returns:\n (:py:class:`Image`): The image that has been pulled.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> image = client.images.pull('busybox')\n \"\"\"\n self.client.api.pull(name, **kwargs)\n return self.get(name)\n\n def push(self, repository, tag=None, **kwargs):\n return self.client.api.push(repository, tag=tag, **kwargs)\n push.__doc__ = APIClient.push.__doc__\n\n def remove(self, *args, **kwargs):\n self.client.api.remove_image(*args, **kwargs)\n remove.__doc__ = APIClient.remove_image.__doc__\n\n def search(self, *args, **kwargs):\n return self.client.api.search(*args, **kwargs)\n search.__doc__ = APIClient.search.__doc__\n\n def prune(self, filters=None):\n return self.client.api.prune_images(filters=filters)\n prune.__doc__ = APIClient.prune_images.__doc__\n", "path": "docker/models/images.py"}]} | 3,644 | 293 |
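The fix above works because it scans every decoded chunk of the build log instead of only the final event, which is where `Successfully tagged …` lands once a tag is supplied. Below is a minimal, self-contained sketch of that parsing pattern; the `extract_image_id` helper, the local `BuildError` stand-in (the real one is `docker.errors.BuildError`), and the sample `log` list are illustrative only and not part of docker-py's API — the actual method wraps this logic inside `ImageCollection.build` and returns an `Image` via `self.get(image_id)`:

```python
import json
import re


class BuildError(Exception):
    """Stand-in for docker.errors.BuildError in this sketch."""


def extract_image_id(raw_lines):
    # Check every line: the "Successfully built <id>" event is not
    # guaranteed to come last when a tag is supplied.
    for raw in raw_lines:
        chunk = json.loads(raw)
        if "error" in chunk:
            raise BuildError(chunk["error"])
        match = re.search(r"(Successfully built |sha256:)([0-9a-f]+)",
                          chunk.get("stream", ""))
        if match:
            return match.group(2)
    raise BuildError("Unknown")


# The ordering that broke the old last-event-only logic:
log = [
    '{"stream":"Successfully built 61c46a80da73\\n"}',
    '{"stream":"Successfully tagged 5b3bc129:latest\\n"}',
]
assert extract_image_id(log) == "61c46a80da73"
```

One difference worth noting: the golden patch *returns* `BuildError('Unknown')` rather than raising it when nothing matches, mirroring the method's previous behaviour; the sketch raises for clarity.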
gh_patches_debug_6873 | rasdani/github-patches | git_diff | DDMAL__CantusDB-454 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
required fields
On OldCantus, to create a source you need both a manuscript ID and a siglum (fields marked with asterisk) otherwise it won't create the source.
NewCantus has no asterisks on these fields, and was quite happy to let me make sources with no siglum (though it does tell me to fill out an ID field if I try to submit without it.)
On the chant level, Folio and Sequence seem to be required fields (they are not on OldCantus!) but are not marked as such with asterisks, either.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/models/source.py`
Content:
```
1 from django.db import models
2 from main_app.models import BaseModel, Segment
3 from django.contrib.auth import get_user_model
4
5
6 class Source(BaseModel):
7 cursus_choices = [("Monastic", "Monastic"), ("Secular", "Secular")]
8 source_status_choices = [
9 (
10 "Editing process (not all the fields have been proofread)",
11 "Editing process (not all the fields have been proofread)",
12 ),
13 ("Published / Complete", "Published / Complete"),
14 ("Published / Proofread pending", "Published / Proofread pending"),
15 ("Unpublished / Editing process", "Unpublished / Editing process"),
16 ("Unpublished / Indexing process", "Unpublished / Indexing process"),
17 ("Unpublished / Proofread pending", "Unpublished / Proofread pending"),
18 ("Unpublished / Proofreading process", "Unpublished / Proofreading process"),
19 ("Unpublished / No indexing activity", "Unpublished / No indexing activity"),
20 ]
21
22 # The old Cantus uses two fields to jointly control the access to sources.
23 # Here in the new Cantus, we only use one field, and there are two levels: published and unpublished.
24 # Published sources are available to the public.
25 # Unpublished sources are hidden from the list and cannot be accessed by URL until the user logs in.
26 published = models.BooleanField(blank=False, null=False, default=False)
27
28 title = models.CharField(
29 max_length=255,
30 help_text="Full Manuscript Identification (City, Archive, Shelf-mark)",
31 )
32 # the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark
33 # it is a human-readable ID for a source
34 siglum = models.CharField(
35 max_length=63,
36 null=True,
37 blank=True,
38 help_text="RISM-style siglum + Shelf-mark (e.g. GB-Ob 202).",
39 )
40 # the RISM siglum uniquely identifies a library or holding institution
41 rism_siglum = models.ForeignKey(
42 "RismSiglum", on_delete=models.PROTECT, null=True, blank=True,
43 )
44 provenance = models.ForeignKey(
45 "Provenance",
46 on_delete=models.PROTECT,
47 help_text="If the origin is unknown, select a location where the source was "
48 "used later in its lifetime and provide details in the "
49 '"Provenance notes" field.',
50 null=True,
51 blank=True,
52 related_name="sources",
53 )
54 provenance_notes = models.TextField(
55 blank=True,
56 null=True,
57 help_text="More exact indication of the provenance (if necessary)",
58 )
59 full_source = models.BooleanField(blank=True, null=True)
60 date = models.CharField(
61 blank=True,
62 null=True,
63 max_length=63,
64 help_text='Date of the manuscript (e.g. "1200s", "1300-1350", etc.)',
65 )
66 century = models.ManyToManyField("Century", related_name="sources", blank=True)
67 notation = models.ManyToManyField("Notation", related_name="sources", blank=True)
68 cursus = models.CharField(
69 blank=True, null=True, choices=cursus_choices, max_length=63
70 )
71 current_editors = models.ManyToManyField(get_user_model(), related_name="sources_user_can_edit", blank=True)
72
73 inventoried_by = models.ManyToManyField(
74 get_user_model(), related_name="inventoried_sources", blank=True
75 )
76 full_text_entered_by = models.ManyToManyField(
77 get_user_model(), related_name="entered_full_text_for_sources", blank=True
78 )
79 melodies_entered_by = models.ManyToManyField(
80 get_user_model(), related_name="entered_melody_for_sources", blank=True
81 )
82 proofreaders = models.ManyToManyField(get_user_model(), related_name="proofread_sources", blank=True)
83 other_editors = models.ManyToManyField(get_user_model(), related_name="edited_sources", blank=True)
84
85
86 segment = models.ForeignKey(
87 "Segment", on_delete=models.PROTECT, blank=True, null=True
88 )
89 source_status = models.CharField(blank=True, null=True, choices=source_status_choices, max_length=255)
90 complete_inventory = models.BooleanField(blank=True, null=True)
91 summary = models.TextField(blank=True, null=True)
92 liturgical_occasions = models.TextField(blank=True, null=True)
93 description = models.TextField(blank=True, null=True)
94 selected_bibliography = models.TextField(blank=True, null=True)
95 image_link = models.URLField(
96 blank=True,
97 null=True,
98 help_text='HTTP link to the image gallery of the source.',
99 )
100 indexing_notes = models.TextField(blank=True, null=True)
101 indexing_date = models.TextField(blank=True, null=True)
102 json_info = models.JSONField(blank=True, null=True)
103 fragmentarium_id = models.CharField(max_length=15, blank=True, null=True)
104 dact_id = models.CharField(max_length=15, blank=True, null=True)
105
106 # number_of_chants and number_of_melodies are used for rendering the source-list page (perhaps among other places)
107 # they are automatically recalculated in main_app.signals.update_source_chant_count and
108 # main_app.signals.update_source_melody_count every time a chant or sequence is saved or deleted
109 number_of_chants = models.IntegerField(blank=True, null=True)
110 number_of_melodies = models.IntegerField(blank=True, null=True)
111
112 def __str__(self):
113 string = '[{s}] {t} ({i})'.format(s=self.rism_siglum, t=self.title, i=self.id)
114 return string
115
116 def save(self, *args, **kwargs):
117 # when creating a source, assign it to "CANTUS Database" segment by default
118 if not self.segment:
119 cantus_db_segment = Segment.objects.get(name="CANTUS Database")
120 self.segment = cantus_db_segment
121 super().save(*args, **kwargs)
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django/cantusdb_project/main_app/models/source.py b/django/cantusdb_project/main_app/models/source.py
--- a/django/cantusdb_project/main_app/models/source.py
+++ b/django/cantusdb_project/main_app/models/source.py
@@ -33,8 +33,8 @@
# it is a human-readable ID for a source
siglum = models.CharField(
max_length=63,
- null=True,
- blank=True,
+ null=False,
+ blank=False,
help_text="RISM-style siglum + Shelf-mark (e.g. GB-Ob 202).",
)
# the RISM siglum uniquely identifies a library or holding institution
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/models/source.py b/django/cantusdb_project/main_app/models/source.py\n--- a/django/cantusdb_project/main_app/models/source.py\n+++ b/django/cantusdb_project/main_app/models/source.py\n@@ -33,8 +33,8 @@\n # it is a human-readable ID for a source\n siglum = models.CharField(\n max_length=63, \n- null=True, \n- blank=True,\n+ null=False, \n+ blank=False,\n help_text=\"RISM-style siglum + Shelf-mark (e.g. GB-Ob 202).\",\n )\n # the RISM siglum uniquely identifies a library or holding institution\n", "issue": "required fields \nOn OldCantus, to create a source you need both a manuscript ID and a siglum (fields marked with asterisk) otherwise it won't create the source. \r\nNewCantus has no asterisks on these fields, and was quite happy to let me make sources with no siglum (though it does tell me to fill out an ID field if I try to submit without it.)\r\n\r\nOn the chant level, Folio and Sequence seem to be required fields (they are not on OldCantus!) but are not marked as such with asterisks, either. \n", "before_files": [{"content": "from django.db import models\nfrom main_app.models import BaseModel, Segment\nfrom django.contrib.auth import get_user_model\n\n\nclass Source(BaseModel):\n cursus_choices = [(\"Monastic\", \"Monastic\"), (\"Secular\", \"Secular\")]\n source_status_choices = [\n (\n \"Editing process (not all the fields have been proofread)\",\n \"Editing process (not all the fields have been proofread)\",\n ),\n (\"Published / Complete\", \"Published / Complete\"),\n (\"Published / Proofread pending\", \"Published / Proofread pending\"),\n (\"Unpublished / Editing process\", \"Unpublished / Editing process\"),\n (\"Unpublished / Indexing process\", \"Unpublished / Indexing process\"),\n (\"Unpublished / Proofread pending\", \"Unpublished / Proofread pending\"),\n (\"Unpublished / Proofreading process\", \"Unpublished / Proofreading process\"),\n (\"Unpublished / No indexing activity\", \"Unpublished / No indexing activity\"),\n ]\n\n # The old Cantus uses two fields to jointly control the access to sources. \n # Here in the new Cantus, we only use one field, and there are two levels: published and unpublished.\n # Published sources are available to the public. \n # Unpublished sources are hidden from the list and cannot be accessed by URL until the user logs in.\n published = models.BooleanField(blank=False, null=False, default=False)\n\n title = models.CharField(\n max_length=255,\n help_text=\"Full Manuscript Identification (City, Archive, Shelf-mark)\",\n )\n # the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark\n # it is a human-readable ID for a source\n siglum = models.CharField(\n max_length=63, \n null=True, \n blank=True,\n help_text=\"RISM-style siglum + Shelf-mark (e.g. 
GB-Ob 202).\",\n )\n # the RISM siglum uniquely identifies a library or holding institution\n rism_siglum = models.ForeignKey(\n \"RismSiglum\", on_delete=models.PROTECT, null=True, blank=True,\n )\n provenance = models.ForeignKey(\n \"Provenance\",\n on_delete=models.PROTECT,\n help_text=\"If the origin is unknown, select a location where the source was \"\n \"used later in its lifetime and provide details in the \"\n '\"Provenance notes\" field.',\n null=True,\n blank=True,\n related_name=\"sources\",\n )\n provenance_notes = models.TextField(\n blank=True,\n null=True,\n help_text=\"More exact indication of the provenance (if necessary)\",\n )\n full_source = models.BooleanField(blank=True, null=True)\n date = models.CharField(\n blank=True,\n null=True,\n max_length=63,\n help_text='Date of the manuscript (e.g. \"1200s\", \"1300-1350\", etc.)',\n )\n century = models.ManyToManyField(\"Century\", related_name=\"sources\", blank=True)\n notation = models.ManyToManyField(\"Notation\", related_name=\"sources\", blank=True)\n cursus = models.CharField(\n blank=True, null=True, choices=cursus_choices, max_length=63\n )\n current_editors = models.ManyToManyField(get_user_model(), related_name=\"sources_user_can_edit\", blank=True)\n \n inventoried_by = models.ManyToManyField(\n get_user_model(), related_name=\"inventoried_sources\", blank=True\n )\n full_text_entered_by = models.ManyToManyField(\n get_user_model(), related_name=\"entered_full_text_for_sources\", blank=True\n )\n melodies_entered_by = models.ManyToManyField(\n get_user_model(), related_name=\"entered_melody_for_sources\", blank=True\n )\n proofreaders = models.ManyToManyField(get_user_model(), related_name=\"proofread_sources\", blank=True)\n other_editors = models.ManyToManyField(get_user_model(), related_name=\"edited_sources\", blank=True)\n \n\n segment = models.ForeignKey(\n \"Segment\", on_delete=models.PROTECT, blank=True, null=True\n )\n source_status = models.CharField(blank=True, null=True, choices=source_status_choices, max_length=255)\n complete_inventory = models.BooleanField(blank=True, null=True)\n summary = models.TextField(blank=True, null=True)\n liturgical_occasions = models.TextField(blank=True, null=True)\n description = models.TextField(blank=True, null=True)\n selected_bibliography = models.TextField(blank=True, null=True)\n image_link = models.URLField(\n blank=True, \n null=True,\n help_text='HTTP link to the image gallery of the source.',\n )\n indexing_notes = models.TextField(blank=True, null=True)\n indexing_date = models.TextField(blank=True, null=True)\n json_info = models.JSONField(blank=True, null=True)\n fragmentarium_id = models.CharField(max_length=15, blank=True, null=True)\n dact_id = models.CharField(max_length=15, blank=True, null=True)\n\n # number_of_chants and number_of_melodies are used for rendering the source-list page (perhaps among other places)\n # they are automatically recalculated in main_app.signals.update_source_chant_count and\n # main_app.signals.update_source_melody_count every time a chant or sequence is saved or deleted\n number_of_chants = models.IntegerField(blank=True, null=True)\n number_of_melodies = models.IntegerField(blank=True, null=True)\n\n def __str__(self):\n string = '[{s}] {t} ({i})'.format(s=self.rism_siglum, t=self.title, i=self.id)\n return string\n\n def save(self, *args, **kwargs):\n # when creating a source, assign it to \"CANTUS Database\" segment by default\n if not self.segment:\n cantus_db_segment = Segment.objects.get(name=\"CANTUS 
Database\")\n self.segment = cantus_db_segment\n super().save(*args, **kwargs)\n", "path": "django/cantusdb_project/main_app/models/source.py"}], "after_files": [{"content": "from django.db import models\nfrom main_app.models import BaseModel, Segment\nfrom django.contrib.auth import get_user_model\n\n\nclass Source(BaseModel):\n cursus_choices = [(\"Monastic\", \"Monastic\"), (\"Secular\", \"Secular\")]\n source_status_choices = [\n (\n \"Editing process (not all the fields have been proofread)\",\n \"Editing process (not all the fields have been proofread)\",\n ),\n (\"Published / Complete\", \"Published / Complete\"),\n (\"Published / Proofread pending\", \"Published / Proofread pending\"),\n (\"Unpublished / Editing process\", \"Unpublished / Editing process\"),\n (\"Unpublished / Indexing process\", \"Unpublished / Indexing process\"),\n (\"Unpublished / Proofread pending\", \"Unpublished / Proofread pending\"),\n (\"Unpublished / Proofreading process\", \"Unpublished / Proofreading process\"),\n (\"Unpublished / No indexing activity\", \"Unpublished / No indexing activity\"),\n ]\n\n # The old Cantus uses two fields to jointly control the access to sources. \n # Here in the new Cantus, we only use one field, and there are two levels: published and unpublished.\n # Published sources are available to the public. \n # Unpublished sources are hidden from the list and cannot be accessed by URL until the user logs in.\n published = models.BooleanField(blank=False, null=False, default=False)\n\n title = models.CharField(\n max_length=255,\n help_text=\"Full Manuscript Identification (City, Archive, Shelf-mark)\",\n )\n # the siglum field as implemented on the old Cantus is composed of both the RISM siglum and the shelfmark\n # it is a human-readable ID for a source\n siglum = models.CharField(\n max_length=63, \n null=False, \n blank=False,\n help_text=\"RISM-style siglum + Shelf-mark (e.g. GB-Ob 202).\",\n )\n # the RISM siglum uniquely identifies a library or holding institution\n rism_siglum = models.ForeignKey(\n \"RismSiglum\", on_delete=models.PROTECT, null=True, blank=True,\n )\n provenance = models.ForeignKey(\n \"Provenance\",\n on_delete=models.PROTECT,\n help_text=\"If the origin is unknown, select a location where the source was \"\n \"used later in its lifetime and provide details in the \"\n '\"Provenance notes\" field.',\n null=True,\n blank=True,\n related_name=\"sources\",\n )\n provenance_notes = models.TextField(\n blank=True,\n null=True,\n help_text=\"More exact indication of the provenance (if necessary)\",\n )\n full_source = models.BooleanField(blank=True, null=True)\n date = models.CharField(\n blank=True,\n null=True,\n max_length=63,\n help_text='Date of the manuscript (e.g. 
\"1200s\", \"1300-1350\", etc.)',\n )\n century = models.ManyToManyField(\"Century\", related_name=\"sources\", blank=True)\n notation = models.ManyToManyField(\"Notation\", related_name=\"sources\", blank=True)\n cursus = models.CharField(\n blank=True, null=True, choices=cursus_choices, max_length=63\n )\n current_editors = models.ManyToManyField(get_user_model(), related_name=\"sources_user_can_edit\", blank=True)\n \n inventoried_by = models.ManyToManyField(\n get_user_model(), related_name=\"inventoried_sources\", blank=True\n )\n full_text_entered_by = models.ManyToManyField(\n get_user_model(), related_name=\"entered_full_text_for_sources\", blank=True\n )\n melodies_entered_by = models.ManyToManyField(\n get_user_model(), related_name=\"entered_melody_for_sources\", blank=True\n )\n proofreaders = models.ManyToManyField(get_user_model(), related_name=\"proofread_sources\", blank=True)\n other_editors = models.ManyToManyField(get_user_model(), related_name=\"edited_sources\", blank=True)\n \n\n segment = models.ForeignKey(\n \"Segment\", on_delete=models.PROTECT, blank=True, null=True\n )\n source_status = models.CharField(blank=True, null=True, choices=source_status_choices, max_length=255)\n complete_inventory = models.BooleanField(blank=True, null=True)\n summary = models.TextField(blank=True, null=True)\n liturgical_occasions = models.TextField(blank=True, null=True)\n description = models.TextField(blank=True, null=True)\n selected_bibliography = models.TextField(blank=True, null=True)\n image_link = models.URLField(\n blank=True, \n null=True,\n help_text='HTTP link to the image gallery of the source.',\n )\n indexing_notes = models.TextField(blank=True, null=True)\n indexing_date = models.TextField(blank=True, null=True)\n json_info = models.JSONField(blank=True, null=True)\n fragmentarium_id = models.CharField(max_length=15, blank=True, null=True)\n dact_id = models.CharField(max_length=15, blank=True, null=True)\n\n # number_of_chants and number_of_melodies are used for rendering the source-list page (perhaps among other places)\n # they are automatically recalculated in main_app.signals.update_source_chant_count and\n # main_app.signals.update_source_melody_count every time a chant or sequence is saved or deleted\n number_of_chants = models.IntegerField(blank=True, null=True)\n number_of_melodies = models.IntegerField(blank=True, null=True)\n\n def __str__(self):\n string = '[{s}] {t} ({i})'.format(s=self.rism_siglum, t=self.title, i=self.id)\n return string\n\n def save(self, *args, **kwargs):\n # when creating a source, assign it to \"CANTUS Database\" segment by default\n if not self.segment:\n cantus_db_segment = Segment.objects.get(name=\"CANTUS Database\")\n self.segment = cantus_db_segment\n super().save(*args, **kwargs)\n", "path": "django/cantusdb_project/main_app/models/source.py"}]} | 1,934 | 166 |
gh_patches_debug_26278 | rasdani/github-patches | git_diff | apache__airflow-13012 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
from airflow.operators.python import PythonOperator does not work
This is not necessarily a bug in core Airflow, but the upgrade-check scripts recommend this as a solution when the old 1.10.x version of importing the python operator is used.
So, there is a mismatch between the core Airflow code and the recommendations given in the upgrade check.
<!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
<!--
IMPORTANT!!!
PLEASE CHECK "SIMILAR TO X EXISTING ISSUES" OPTION IF VISIBLE
NEXT TO "SUBMIT NEW ISSUE" BUTTON!!!
PLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!!
Please complete the next sections or the issue will be closed.
These questions are the first thing we need to know to understand the context.
-->
**Apache Airflow version**:
**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
**Environment**:
- **Cloud provider or hardware configuration**:
- **OS** (e.g. from /etc/os-release):
- **Kernel** (e.g. `uname -a`):
- **Install tools**:
- **Others**:
**What happened**:
<!-- (please include exact error messages if you can) -->
**What you expected to happen**:
<!-- What do you think went wrong? -->
**How to reproduce it**:
<!---
As minimally and precisely as possible. Keep in mind we do not have access to your cluster or dags.
If you are using kubernetes, please attempt to recreate the issue using minikube or kind.
## Install minikube/kind
- Minikube https://minikube.sigs.k8s.io/docs/start/
- Kind https://kind.sigs.k8s.io/docs/user/quick-start/
If this is a UI bug, please provide a screenshot of the bug or a link to a youtube video of the bug in action
You can include images using the .md style of

To record a screencast, mac users can use QuickTime and then create an unlisted youtube video with the resulting .mov file.
--->
**Anything else we need to know**:
<!--
How often does this problem occur? Once? Every time etc?
Any relevant logs to include? Put them here in side a detail tag:
<details><summary>x.log</summary> lots of stuff </details>
-->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `airflow/upgrade/rules/import_changes.py`
Content:
```
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17
18 import itertools
19 from typing import NamedTuple, Optional, List
20
21 from cached_property import cached_property
22 from packaging.version import Version
23
24 from airflow import conf
25 from airflow.upgrade.rules.base_rule import BaseRule
26 from airflow.upgrade.rules.renamed_classes import ALL
27 from airflow.utils.dag_processing import list_py_file_paths
28
29 try:
30 from importlib_metadata import PackageNotFoundError, distribution
31 except ImportError:
32 from importlib.metadata import PackageNotFoundError, distribution
33
34
35 class ImportChange(
36 NamedTuple(
37 "ImportChange",
38 [("old_path", str), ("new_path", str), ("providers_package", Optional[str])],
39 )
40 ):
41 def info(self, file_path=None):
42 msg = "Using `{}` should be replaced by `{}`".format(self.old_path, self.new_path)
43 if file_path:
44 msg += ". Affected file: {}".format(file_path)
45 return msg
46
47 @cached_property
48 def old_class(self):
49 return self.old_path.split(".")[-1]
50
51 @cached_property
52 def new_class(self):
53 return self.new_path.split(".")[-1]
54
55 @classmethod
56 def provider_stub_from_module(cls, module):
57 if "providers" not in module:
58 return None
59
60 # [2:] strips off the airflow.providers. part
61 parts = module.split(".")[2:]
62 if parts[0] in ('apache', 'cncf', 'microsoft'):
63 return '-'.join(parts[:2])
64 return parts[0]
65
66 @classmethod
67 def from_new_old_paths(cls, new_path, old_path):
68 providers_package = cls.provider_stub_from_module(new_path)
69 return cls(
70 old_path=old_path, new_path=new_path, providers_package=providers_package
71 )
72
73
74 class ImportChangesRule(BaseRule):
75 title = "Changes in import paths of hooks, operators, sensors and others"
76 description = (
77 "Many hooks, operators and other classes has been renamed and moved. Those changes were part of "
78 "unifying names and imports paths as described in AIP-21.\nThe `contrib` folder has been replaced "
79 "by `providers` directory and packages:\n"
80 "https://github.com/apache/airflow#backport-packages"
81 )
82
83 ALL_CHANGES = [
84 ImportChange.from_new_old_paths(*args) for args in ALL
85 ] # type: List[ImportChange]
86
87 @staticmethod
88 def _check_file(file_path):
89 problems = []
90 providers = set()
91 with open(file_path, "r") as file:
92 content = file.read()
93 for change in ImportChangesRule.ALL_CHANGES:
94 if change.old_class in content:
95 problems.append(change.info(file_path))
96 if change.providers_package:
97 providers.add(change.providers_package)
98 return problems, providers
99
100 @staticmethod
101 def _check_missing_providers(providers):
102
103 current_airflow_version = Version(__import__("airflow").__version__)
104 if current_airflow_version.major >= 2:
105 prefix = "apache-airflow-providers-"
106 else:
107 prefix = "apache-airflow-backport-providers-"
108
109 for provider in providers:
110 dist_name = prefix + provider
111 try:
112 distribution(dist_name)
113 except PackageNotFoundError:
114 yield "Please install `{}`".format(dist_name)
115
116 def check(self):
117 dag_folder = conf.get("core", "dags_folder")
118 files = list_py_file_paths(directory=dag_folder, include_examples=False)
119 problems = []
120 providers = set()
121 # Split in to two groups - install backports first, then make changes
122 for file in files:
123 new_problems, new_providers = self._check_file(file)
124 problems.extend(new_problems)
125 providers |= new_providers
126
127 return itertools.chain(
128 self._check_missing_providers(sorted(providers)),
129 problems,
130 )
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/airflow/upgrade/rules/import_changes.py b/airflow/upgrade/rules/import_changes.py
--- a/airflow/upgrade/rules/import_changes.py
+++ b/airflow/upgrade/rules/import_changes.py
@@ -39,7 +39,9 @@
)
):
def info(self, file_path=None):
- msg = "Using `{}` should be replaced by `{}`".format(self.old_path, self.new_path)
+ msg = "Using `{}` should be replaced by `{}`".format(
+ self.old_path, self.new_path
+ )
if file_path:
msg += ". Affected file: {}".format(file_path)
return msg
@@ -80,10 +82,30 @@
"https://github.com/apache/airflow#backport-packages"
)
+ current_airflow_version = Version(__import__("airflow").__version__)
+
+ if current_airflow_version < Version("2.0.0"):
+
+ def _filter_incompatible_renames(arg):
+ new_path = arg[1]
+ return (
+ not new_path.startswith("airflow.operators")
+ and not new_path.startswith("airflow.sensors")
+ and not new_path.startswith("airflow.hooks")
+ )
+
+ else:
+ # Everything allowed on 2.0.0+
+ def _filter_incompatible_renames(arg):
+ return True
+
ALL_CHANGES = [
- ImportChange.from_new_old_paths(*args) for args in ALL
+ ImportChange.from_new_old_paths(*args)
+ for args in filter(_filter_incompatible_renames, ALL)
] # type: List[ImportChange]
+ del _filter_incompatible_renames
+
@staticmethod
def _check_file(file_path):
problems = []
| {"golden_diff": "diff --git a/airflow/upgrade/rules/import_changes.py b/airflow/upgrade/rules/import_changes.py\n--- a/airflow/upgrade/rules/import_changes.py\n+++ b/airflow/upgrade/rules/import_changes.py\n@@ -39,7 +39,9 @@\n )\n ):\n def info(self, file_path=None):\n- msg = \"Using `{}` should be replaced by `{}`\".format(self.old_path, self.new_path)\n+ msg = \"Using `{}` should be replaced by `{}`\".format(\n+ self.old_path, self.new_path\n+ )\n if file_path:\n msg += \". Affected file: {}\".format(file_path)\n return msg\n@@ -80,10 +82,30 @@\n \"https://github.com/apache/airflow#backport-packages\"\n )\n \n+ current_airflow_version = Version(__import__(\"airflow\").__version__)\n+\n+ if current_airflow_version < Version(\"2.0.0\"):\n+\n+ def _filter_incompatible_renames(arg):\n+ new_path = arg[1]\n+ return (\n+ not new_path.startswith(\"airflow.operators\")\n+ and not new_path.startswith(\"airflow.sensors\")\n+ and not new_path.startswith(\"airflow.hooks\")\n+ )\n+\n+ else:\n+ # Everything allowed on 2.0.0+\n+ def _filter_incompatible_renames(arg):\n+ return True\n+\n ALL_CHANGES = [\n- ImportChange.from_new_old_paths(*args) for args in ALL\n+ ImportChange.from_new_old_paths(*args)\n+ for args in filter(_filter_incompatible_renames, ALL)\n ] # type: List[ImportChange]\n \n+ del _filter_incompatible_renames\n+\n @staticmethod\n def _check_file(file_path):\n problems = []\n", "issue": "from airflow.operators.python import PythonOperator does not work\nThis is not necessarily a bug in core Airflow, but the upgrade-check scripts recommend this as a solution when the old 1.10.x version of importing the python operator is used. \r\n\r\nSo, there is a mismatch between the core Airflow code and the recommendations given in the upgrade check. \r\n\r\n<!--\r\n\r\nWelcome to Apache Airflow! For a smooth issue process, try to answer the following questions.\r\nDon't worry if they're not all applicable; just try to include what you can :-)\r\n\r\nIf you need to include code snippets or logs, please put them in fenced code\r\nblocks. If they're super-long, please use the details tag like\r\n<details><summary>super-long log</summary> lots of stuff </details>\r\n\r\nPlease delete these comment blocks before submitting the issue.\r\n\r\n-->\r\n\r\n<!--\r\n\r\nIMPORTANT!!!\r\n\r\nPLEASE CHECK \"SIMILAR TO X EXISTING ISSUES\" OPTION IF VISIBLE\r\nNEXT TO \"SUBMIT NEW ISSUE\" BUTTON!!!\r\n\r\nPLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!!\r\n\r\nPlease complete the next sections or the issue will be closed.\r\nThese questions are the first thing we need to know to understand the context.\r\n\r\n-->\r\n\r\n**Apache Airflow version**:\r\n\r\n\r\n**Kubernetes version (if you are using kubernetes)** (use `kubectl version`):\r\n\r\n**Environment**:\r\n\r\n- **Cloud provider or hardware configuration**:\r\n- **OS** (e.g. from /etc/os-release):\r\n- **Kernel** (e.g. `uname -a`):\r\n- **Install tools**:\r\n- **Others**:\r\n\r\n**What happened**:\r\n\r\n<!-- (please include exact error messages if you can) -->\r\n\r\n**What you expected to happen**:\r\n\r\n<!-- What do you think went wrong? -->\r\n\r\n**How to reproduce it**:\r\n<!---\r\n\r\nAs minimally and precisely as possible. 
Keep in mind we do not have access to your cluster or dags.\r\n\r\nIf you are using kubernetes, please attempt to recreate the issue using minikube or kind.\r\n\r\n## Install minikube/kind\r\n\r\n- Minikube https://minikube.sigs.k8s.io/docs/start/\r\n- Kind https://kind.sigs.k8s.io/docs/user/quick-start/\r\n\r\nIf this is a UI bug, please provide a screenshot of the bug or a link to a youtube video of the bug in action\r\n\r\nYou can include images using the .md style of\r\n\r\n\r\nTo record a screencast, mac users can use QuickTime and then create an unlisted youtube video with the resulting .mov file.\r\n\r\n--->\r\n\r\n\r\n**Anything else we need to know**:\r\n\r\n<!--\r\n\r\nHow often does this problem occur? Once? Every time etc?\r\n\r\nAny relevant logs to include? Put them here in side a detail tag:\r\n<details><summary>x.log</summary> lots of stuff </details>\r\n\r\n-->\r\n\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nimport itertools\nfrom typing import NamedTuple, Optional, List\n\nfrom cached_property import cached_property\nfrom packaging.version import Version\n\nfrom airflow import conf\nfrom airflow.upgrade.rules.base_rule import BaseRule\nfrom airflow.upgrade.rules.renamed_classes import ALL\nfrom airflow.utils.dag_processing import list_py_file_paths\n\ntry:\n from importlib_metadata import PackageNotFoundError, distribution\nexcept ImportError:\n from importlib.metadata import PackageNotFoundError, distribution\n\n\nclass ImportChange(\n NamedTuple(\n \"ImportChange\",\n [(\"old_path\", str), (\"new_path\", str), (\"providers_package\", Optional[str])],\n )\n):\n def info(self, file_path=None):\n msg = \"Using `{}` should be replaced by `{}`\".format(self.old_path, self.new_path)\n if file_path:\n msg += \". Affected file: {}\".format(file_path)\n return msg\n\n @cached_property\n def old_class(self):\n return self.old_path.split(\".\")[-1]\n\n @cached_property\n def new_class(self):\n return self.new_path.split(\".\")[-1]\n\n @classmethod\n def provider_stub_from_module(cls, module):\n if \"providers\" not in module:\n return None\n\n # [2:] strips off the airflow.providers. part\n parts = module.split(\".\")[2:]\n if parts[0] in ('apache', 'cncf', 'microsoft'):\n return '-'.join(parts[:2])\n return parts[0]\n\n @classmethod\n def from_new_old_paths(cls, new_path, old_path):\n providers_package = cls.provider_stub_from_module(new_path)\n return cls(\n old_path=old_path, new_path=new_path, providers_package=providers_package\n )\n\n\nclass ImportChangesRule(BaseRule):\n title = \"Changes in import paths of hooks, operators, sensors and others\"\n description = (\n \"Many hooks, operators and other classes has been renamed and moved. 
Those changes were part of \"\n \"unifying names and imports paths as described in AIP-21.\\nThe `contrib` folder has been replaced \"\n \"by `providers` directory and packages:\\n\"\n \"https://github.com/apache/airflow#backport-packages\"\n )\n\n ALL_CHANGES = [\n ImportChange.from_new_old_paths(*args) for args in ALL\n ] # type: List[ImportChange]\n\n @staticmethod\n def _check_file(file_path):\n problems = []\n providers = set()\n with open(file_path, \"r\") as file:\n content = file.read()\n for change in ImportChangesRule.ALL_CHANGES:\n if change.old_class in content:\n problems.append(change.info(file_path))\n if change.providers_package:\n providers.add(change.providers_package)\n return problems, providers\n\n @staticmethod\n def _check_missing_providers(providers):\n\n current_airflow_version = Version(__import__(\"airflow\").__version__)\n if current_airflow_version.major >= 2:\n prefix = \"apache-airflow-providers-\"\n else:\n prefix = \"apache-airflow-backport-providers-\"\n\n for provider in providers:\n dist_name = prefix + provider\n try:\n distribution(dist_name)\n except PackageNotFoundError:\n yield \"Please install `{}`\".format(dist_name)\n\n def check(self):\n dag_folder = conf.get(\"core\", \"dags_folder\")\n files = list_py_file_paths(directory=dag_folder, include_examples=False)\n problems = []\n providers = set()\n # Split in to two groups - install backports first, then make changes\n for file in files:\n new_problems, new_providers = self._check_file(file)\n problems.extend(new_problems)\n providers |= new_providers\n\n return itertools.chain(\n self._check_missing_providers(sorted(providers)),\n problems,\n )\n", "path": "airflow/upgrade/rules/import_changes.py"}], "after_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nimport itertools\nfrom typing import NamedTuple, Optional, List\n\nfrom cached_property import cached_property\nfrom packaging.version import Version\n\nfrom airflow import conf\nfrom airflow.upgrade.rules.base_rule import BaseRule\nfrom airflow.upgrade.rules.renamed_classes import ALL\nfrom airflow.utils.dag_processing import list_py_file_paths\n\ntry:\n from importlib_metadata import PackageNotFoundError, distribution\nexcept ImportError:\n from importlib.metadata import PackageNotFoundError, distribution\n\n\nclass ImportChange(\n NamedTuple(\n \"ImportChange\",\n [(\"old_path\", str), (\"new_path\", str), (\"providers_package\", Optional[str])],\n )\n):\n def info(self, file_path=None):\n msg = \"Using `{}` should be replaced by `{}`\".format(\n self.old_path, self.new_path\n )\n if file_path:\n msg += \". 
Affected file: {}\".format(file_path)\n return msg\n\n @cached_property\n def old_class(self):\n return self.old_path.split(\".\")[-1]\n\n @cached_property\n def new_class(self):\n return self.new_path.split(\".\")[-1]\n\n @classmethod\n def provider_stub_from_module(cls, module):\n if \"providers\" not in module:\n return None\n\n # [2:] strips off the airflow.providers. part\n parts = module.split(\".\")[2:]\n if parts[0] in ('apache', 'cncf', 'microsoft'):\n return '-'.join(parts[:2])\n return parts[0]\n\n @classmethod\n def from_new_old_paths(cls, new_path, old_path):\n providers_package = cls.provider_stub_from_module(new_path)\n return cls(\n old_path=old_path, new_path=new_path, providers_package=providers_package\n )\n\n\nclass ImportChangesRule(BaseRule):\n title = \"Changes in import paths of hooks, operators, sensors and others\"\n description = (\n \"Many hooks, operators and other classes has been renamed and moved. Those changes were part of \"\n \"unifying names and imports paths as described in AIP-21.\\nThe `contrib` folder has been replaced \"\n \"by `providers` directory and packages:\\n\"\n \"https://github.com/apache/airflow#backport-packages\"\n )\n\n current_airflow_version = Version(__import__(\"airflow\").__version__)\n\n if current_airflow_version < Version(\"2.0.0\"):\n\n def _filter_incompatible_renames(arg):\n new_path = arg[1]\n return (\n not new_path.startswith(\"airflow.operators\")\n and not new_path.startswith(\"airflow.sensors\")\n and not new_path.startswith(\"airflow.hooks\")\n )\n\n else:\n # Everything allowed on 2.0.0+\n def _filter_incompatible_renames(arg):\n return True\n\n ALL_CHANGES = [\n ImportChange.from_new_old_paths(*args)\n for args in filter(_filter_incompatible_renames, ALL)\n ] # type: List[ImportChange]\n\n del _filter_incompatible_renames\n\n @staticmethod\n def _check_file(file_path):\n problems = []\n providers = set()\n with open(file_path, \"r\") as file:\n content = file.read()\n for change in ImportChangesRule.ALL_CHANGES:\n if change.old_class in content:\n problems.append(change.info(file_path))\n if change.providers_package:\n providers.add(change.providers_package)\n return problems, providers\n\n @staticmethod\n def _check_missing_providers(providers):\n\n current_airflow_version = Version(__import__(\"airflow\").__version__)\n if current_airflow_version.major >= 2:\n prefix = \"apache-airflow-providers-\"\n else:\n prefix = \"apache-airflow-backport-providers-\"\n\n for provider in providers:\n dist_name = prefix + provider\n try:\n distribution(dist_name)\n except PackageNotFoundError:\n yield \"Please install `{}`\".format(dist_name)\n\n def check(self):\n dag_folder = conf.get(\"core\", \"dags_folder\")\n files = list_py_file_paths(directory=dag_folder, include_examples=False)\n problems = []\n providers = set()\n # Split in to two groups - install backports first, then make changes\n for file in files:\n new_problems, new_providers = self._check_file(file)\n problems.extend(new_problems)\n providers |= new_providers\n\n return itertools.chain(\n self._check_missing_providers(sorted(providers)),\n problems,\n )\n", "path": "airflow/upgrade/rules/import_changes.py"}]} | 2,151 | 402 |
gh_patches_debug_21494 | rasdani/github-patches | git_diff | encode__httpx-637 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
header value surrounded by whitespace
httpx [doesn't accept a header value surrounded by whitespace](https://github.com/python-hyper/hyper-h2/blob/13005074d14c7d32f8eaf1683b854446a09d09d3/h2/utilities.py#L265), but other HTTP/2 implementations seem to accept them (nghttp2, and the Firefox and Chrome browsers).
Example:
```
async with httpx.Client(http2=True) as client:
response = await client.get('https://a.searx.space/headervaluetest')
```
Result :
```
ProtocolError: Received header value surrounded by whitespace b'A value '
```
The relevant nghttp2 implementation is here:
* https://github.com/nghttp2/nghttp2/blob/bb519154fe62f7ff7e5eb7269e05043dec6d3682/lib/nghttp2_http.c#L332
* https://github.com/nghttp2/nghttp2/blob/bb519154fe62f7ff7e5eb7269e05043dec6d3682/lib/nghttp2_helper.c#L498
I'm not sure if it should be declared as a bug or not.
--- END ISSUE ---
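For reference, a minimal sketch of the kind of relaxation involved, assuming the h2 library's `H2Configuration` API: disabling inbound header validation makes h2 tolerate such values, roughly matching what nghttp2 and the browsers do. This is an illustrative sketch, not the actual httpx patch.

```python
# Sketch only: build an h2 connection that does not validate inbound
# headers, so values like b"A value " no longer raise ProtocolError.
import h2.connection
from h2.config import H2Configuration

config = H2Configuration(client_side=True, validate_inbound_headers=False)
conn = h2.connection.H2Connection(config=config)
conn.initiate_connection()
data_to_send = conn.data_to_send()  # bytes to write to the socket as usual
```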
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `httpx/dispatch/http2.py`
Content:
```
1 import typing
2
3 import h2.connection
4 import h2.events
5 from h2.settings import SettingCodes, Settings
6
7 from ..concurrency.base import (
8 BaseEvent,
9 BaseSocketStream,
10 ConcurrencyBackend,
11 lookup_backend,
12 )
13 from ..config import Timeout
14 from ..exceptions import ProtocolError
15 from ..models import Request, Response
16 from ..utils import get_logger
17 from .base import OpenConnection
18
19 logger = get_logger(__name__)
20
21
22 class HTTP2Connection(OpenConnection):
23 READ_NUM_BYTES = 4096
24
25 def __init__(
26 self,
27 socket: BaseSocketStream,
28 backend: typing.Union[str, ConcurrencyBackend] = "auto",
29 on_release: typing.Callable = None,
30 ):
31 self.socket = socket
32 self.backend = lookup_backend(backend)
33 self.on_release = on_release
34 self.state = h2.connection.H2Connection()
35
36 self.streams = {} # type: typing.Dict[int, HTTP2Stream]
37 self.events = {} # type: typing.Dict[int, typing.List[h2.events.Event]]
38
39 self.init_started = False
40
41 @property
42 def is_http2(self) -> bool:
43 return True
44
45 @property
46 def init_complete(self) -> BaseEvent:
47 # We do this lazily, to make sure backend autodetection always
48 # runs within an async context.
49 if not hasattr(self, "_initialization_complete"):
50 self._initialization_complete = self.backend.create_event()
51 return self._initialization_complete
52
53 async def send(self, request: Request, timeout: Timeout = None) -> Response:
54 timeout = Timeout() if timeout is None else timeout
55
56 if not self.init_started:
57 # The very first stream is responsible for initiating the connection.
58 self.init_started = True
59 await self.send_connection_init(timeout)
60 stream_id = self.state.get_next_available_stream_id()
61 self.init_complete.set()
62 else:
63 # All other streams need to wait until the connection is established.
64 await self.init_complete.wait()
65 stream_id = self.state.get_next_available_stream_id()
66
67 stream = HTTP2Stream(stream_id=stream_id, connection=self)
68 self.streams[stream_id] = stream
69 self.events[stream_id] = []
70 return await stream.send(request, timeout)
71
72 async def send_connection_init(self, timeout: Timeout) -> None:
73 """
74 The HTTP/2 connection requires some initial setup before we can start
75 using individual request/response streams on it.
76 """
77
78 # Need to set these manually here instead of manipulating via
79 # __setitem__() otherwise the H2Connection will emit SettingsUpdate
80 # frames in addition to sending the undesired defaults.
81 self.state.local_settings = Settings(
82 client=True,
83 initial_values={
84 # Disable PUSH_PROMISE frames from the server since we don't do anything
85 # with them for now. Maybe when we support caching?
86 SettingCodes.ENABLE_PUSH: 0,
87 # These two are taken from h2 for safe defaults
88 SettingCodes.MAX_CONCURRENT_STREAMS: 100,
89 SettingCodes.MAX_HEADER_LIST_SIZE: 65536,
90 },
91 )
92
93 # Some websites (*cough* Yahoo *cough*) balk at this setting being
94 # present in the initial handshake since it's not defined in the original
95 # RFC despite the RFC mandating ignoring settings you don't know about.
96 del self.state.local_settings[h2.settings.SettingCodes.ENABLE_CONNECT_PROTOCOL]
97
98 self.state.initiate_connection()
99 self.state.increment_flow_control_window(2 ** 24)
100 data_to_send = self.state.data_to_send()
101 await self.socket.write(data_to_send, timeout)
102
103 @property
104 def is_closed(self) -> bool:
105 return False
106
107 def is_connection_dropped(self) -> bool:
108 return self.socket.is_connection_dropped()
109
110 async def close(self) -> None:
111 await self.socket.close()
112
113 async def wait_for_outgoing_flow(self, stream_id: int, timeout: Timeout) -> int:
114 """
115 Returns the maximum allowable outgoing flow for a given stream.
116
117 If the allowable flow is zero, then waits on the network until
118 WindowUpdated frames have increased the flow rate.
119
120 https://tools.ietf.org/html/rfc7540#section-6.9
121 """
122 local_flow = self.state.local_flow_control_window(stream_id)
123 connection_flow = self.state.max_outbound_frame_size
124 flow = min(local_flow, connection_flow)
125 while flow == 0:
126 await self.receive_events(timeout)
127 local_flow = self.state.local_flow_control_window(stream_id)
128 connection_flow = self.state.max_outbound_frame_size
129 flow = min(local_flow, connection_flow)
130 return flow
131
132 async def wait_for_event(self, stream_id: int, timeout: Timeout) -> h2.events.Event:
133 """
134 Returns the next event for a given stream.
135
136 If no events are available yet, then waits on the network until
137 an event is available.
138 """
139 while not self.events[stream_id]:
140 await self.receive_events(timeout)
141 return self.events[stream_id].pop(0)
142
143 async def receive_events(self, timeout: Timeout) -> None:
144 """
145 Read some data from the network, and update the H2 state.
146 """
147 data = await self.socket.read(self.READ_NUM_BYTES, timeout)
148 events = self.state.receive_data(data)
149 for event in events:
150 event_stream_id = getattr(event, "stream_id", 0)
151 logger.trace(f"receive_event stream_id={event_stream_id} event={event!r}")
152
153 if hasattr(event, "error_code"):
154 raise ProtocolError(event)
155
156 if event_stream_id in self.events:
157 self.events[event_stream_id].append(event)
158
159 data_to_send = self.state.data_to_send()
160 await self.socket.write(data_to_send, timeout)
161
162 async def send_headers(
163 self,
164 stream_id: int,
165 headers: typing.List[typing.Tuple[bytes, bytes]],
166 timeout: Timeout,
167 ) -> None:
168 self.state.send_headers(stream_id, headers)
169 self.state.increment_flow_control_window(2 ** 24, stream_id=stream_id)
170 data_to_send = self.state.data_to_send()
171 await self.socket.write(data_to_send, timeout)
172
173 async def send_data(self, stream_id: int, chunk: bytes, timeout: Timeout) -> None:
174 self.state.send_data(stream_id, chunk)
175 data_to_send = self.state.data_to_send()
176 await self.socket.write(data_to_send, timeout)
177
178 async def end_stream(self, stream_id: int, timeout: Timeout) -> None:
179 self.state.end_stream(stream_id)
180 data_to_send = self.state.data_to_send()
181 await self.socket.write(data_to_send, timeout)
182
183 async def acknowledge_received_data(
184 self, stream_id: int, amount: int, timeout: Timeout
185 ) -> None:
186 self.state.acknowledge_received_data(amount, stream_id)
187 data_to_send = self.state.data_to_send()
188 await self.socket.write(data_to_send, timeout)
189
190 async def close_stream(self, stream_id: int) -> None:
191 del self.streams[stream_id]
192 del self.events[stream_id]
193
194 if not self.streams and self.on_release is not None:
195 await self.on_release()
196
197
198 class HTTP2Stream:
199 def __init__(self, stream_id: int, connection: HTTP2Connection) -> None:
200 self.stream_id = stream_id
201 self.connection = connection
202
203 async def send(self, request: Request, timeout: Timeout) -> Response:
204 # Send the request.
205 await self.send_headers(request, timeout)
206 await self.send_body(request, timeout)
207
208 # Receive the response.
209 status_code, headers = await self.receive_response(timeout)
210 content = self.body_iter(timeout)
211 return Response(
212 status_code=status_code,
213 http_version="HTTP/2",
214 headers=headers,
215 content=content,
216 on_close=self.close,
217 request=request,
218 )
219
220 async def send_headers(self, request: Request, timeout: Timeout) -> None:
221 headers = [
222 (b":method", request.method.encode("ascii")),
223 (b":authority", request.url.authority.encode("ascii")),
224 (b":scheme", request.url.scheme.encode("ascii")),
225 (b":path", request.url.full_path.encode("ascii")),
226 ] + [(k, v) for k, v in request.headers.raw if k != b"host"]
227
228 logger.trace(
229 f"send_headers "
230 f"stream_id={self.stream_id} "
231 f"method={request.method!r} "
232 f"target={request.url.full_path!r} "
233 f"headers={headers!r}"
234 )
235 await self.connection.send_headers(self.stream_id, headers, timeout)
236
237 async def send_body(self, request: Request, timeout: Timeout) -> None:
238 logger.trace(f"send_body stream_id={self.stream_id}")
239 async for data in request.stream():
240 while data:
241 max_flow = await self.connection.wait_for_outgoing_flow(
242 self.stream_id, timeout
243 )
244 chunk_size = min(len(data), max_flow)
245 chunk, data = data[:chunk_size], data[chunk_size:]
246 await self.connection.send_data(self.stream_id, chunk, timeout)
247
248 await self.connection.end_stream(self.stream_id, timeout)
249
250 async def receive_response(
251 self, timeout: Timeout
252 ) -> typing.Tuple[int, typing.List[typing.Tuple[bytes, bytes]]]:
253 """
254 Read the response status and headers from the network.
255 """
256 while True:
257 event = await self.connection.wait_for_event(self.stream_id, timeout)
258 if isinstance(event, h2.events.ResponseReceived):
259 break
260
261 status_code = 200
262 headers = []
263 for k, v in event.headers:
264 if k == b":status":
265 status_code = int(v.decode("ascii", errors="ignore"))
266 elif not k.startswith(b":"):
267 headers.append((k, v))
268
269 return (status_code, headers)
270
271 async def body_iter(self, timeout: Timeout) -> typing.AsyncIterator[bytes]:
272 while True:
273 event = await self.connection.wait_for_event(self.stream_id, timeout)
274 if isinstance(event, h2.events.DataReceived):
275 amount = event.flow_controlled_length
276 await self.connection.acknowledge_received_data(
277 self.stream_id, amount, timeout
278 )
279 yield event.data
280 elif isinstance(event, (h2.events.StreamEnded, h2.events.StreamReset)):
281 break
282
283 async def close(self) -> None:
284 await self.connection.close_stream(self.stream_id)
285
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/httpx/dispatch/http2.py b/httpx/dispatch/http2.py
--- a/httpx/dispatch/http2.py
+++ b/httpx/dispatch/http2.py
@@ -2,6 +2,7 @@
import h2.connection
import h2.events
+from h2.config import H2Configuration
from h2.settings import SettingCodes, Settings
from ..concurrency.base import (
@@ -21,6 +22,7 @@
class HTTP2Connection(OpenConnection):
READ_NUM_BYTES = 4096
+ CONFIG = H2Configuration(validate_inbound_headers=False)
def __init__(
self,
@@ -31,7 +33,7 @@
self.socket = socket
self.backend = lookup_backend(backend)
self.on_release = on_release
- self.state = h2.connection.H2Connection()
+ self.state = h2.connection.H2Connection(config=self.CONFIG)
self.streams = {} # type: typing.Dict[int, HTTP2Stream]
self.events = {} # type: typing.Dict[int, typing.List[h2.events.Event]]
| {"golden_diff": "diff --git a/httpx/dispatch/http2.py b/httpx/dispatch/http2.py\n--- a/httpx/dispatch/http2.py\n+++ b/httpx/dispatch/http2.py\n@@ -2,6 +2,7 @@\n \n import h2.connection\n import h2.events\n+from h2.config import H2Configuration\n from h2.settings import SettingCodes, Settings\n \n from ..concurrency.base import (\n@@ -21,6 +22,7 @@\n \n class HTTP2Connection(OpenConnection):\n READ_NUM_BYTES = 4096\n+ CONFIG = H2Configuration(validate_inbound_headers=False)\n \n def __init__(\n self,\n@@ -31,7 +33,7 @@\n self.socket = socket\n self.backend = lookup_backend(backend)\n self.on_release = on_release\n- self.state = h2.connection.H2Connection()\n+ self.state = h2.connection.H2Connection(config=self.CONFIG)\n \n self.streams = {} # type: typing.Dict[int, HTTP2Stream]\n self.events = {} # type: typing.Dict[int, typing.List[h2.events.Event]]\n", "issue": "header value surrounded by whitespace\nhttpx [doesn't accept header value surrounded by whitespace](https://github.com/python-hyper/hyper-h2/blob/13005074d14c7d32f8eaf1683b854446a09d09d3/h2/utilities.py#L265), but other http2 implementations seems to accept them (nghttp2, browsers Firefox, Chrome).\r\n\r\nExample:\r\n```\r\nasync with httpx.Client(http2=True) as client:\r\n\tresponse = await client.get('https://a.searx.space/headervaluetest')\r\n```\r\n\r\nResult :\r\n```\r\nProtocolError: Received header value surrounded by whitespace b'A value '\r\n```\r\n\r\nThe nghttp2 implementation seems here:\r\n* https://github.com/nghttp2/nghttp2/blob/bb519154fe62f7ff7e5eb7269e05043dec6d3682/lib/nghttp2_http.c#L332\r\n* https://github.com/nghttp2/nghttp2/blob/bb519154fe62f7ff7e5eb7269e05043dec6d3682/lib/nghttp2_helper.c#L498\r\n\r\nI'm not sure if it should be declared as a bug or not.\n", "before_files": [{"content": "import typing\n\nimport h2.connection\nimport h2.events\nfrom h2.settings import SettingCodes, Settings\n\nfrom ..concurrency.base import (\n BaseEvent,\n BaseSocketStream,\n ConcurrencyBackend,\n lookup_backend,\n)\nfrom ..config import Timeout\nfrom ..exceptions import ProtocolError\nfrom ..models import Request, Response\nfrom ..utils import get_logger\nfrom .base import OpenConnection\n\nlogger = get_logger(__name__)\n\n\nclass HTTP2Connection(OpenConnection):\n READ_NUM_BYTES = 4096\n\n def __init__(\n self,\n socket: BaseSocketStream,\n backend: typing.Union[str, ConcurrencyBackend] = \"auto\",\n on_release: typing.Callable = None,\n ):\n self.socket = socket\n self.backend = lookup_backend(backend)\n self.on_release = on_release\n self.state = h2.connection.H2Connection()\n\n self.streams = {} # type: typing.Dict[int, HTTP2Stream]\n self.events = {} # type: typing.Dict[int, typing.List[h2.events.Event]]\n\n self.init_started = False\n\n @property\n def is_http2(self) -> bool:\n return True\n\n @property\n def init_complete(self) -> BaseEvent:\n # We do this lazily, to make sure backend autodetection always\n # runs within an async context.\n if not hasattr(self, \"_initialization_complete\"):\n self._initialization_complete = self.backend.create_event()\n return self._initialization_complete\n\n async def send(self, request: Request, timeout: Timeout = None) -> Response:\n timeout = Timeout() if timeout is None else timeout\n\n if not self.init_started:\n # The very first stream is responsible for initiating the connection.\n self.init_started = True\n await self.send_connection_init(timeout)\n stream_id = self.state.get_next_available_stream_id()\n self.init_complete.set()\n else:\n # All other streams need to 
wait until the connection is established.\n await self.init_complete.wait()\n stream_id = self.state.get_next_available_stream_id()\n\n stream = HTTP2Stream(stream_id=stream_id, connection=self)\n self.streams[stream_id] = stream\n self.events[stream_id] = []\n return await stream.send(request, timeout)\n\n async def send_connection_init(self, timeout: Timeout) -> None:\n \"\"\"\n The HTTP/2 connection requires some initial setup before we can start\n using individual request/response streams on it.\n \"\"\"\n\n # Need to set these manually here instead of manipulating via\n # __setitem__() otherwise the H2Connection will emit SettingsUpdate\n # frames in addition to sending the undesired defaults.\n self.state.local_settings = Settings(\n client=True,\n initial_values={\n # Disable PUSH_PROMISE frames from the server since we don't do anything\n # with them for now. Maybe when we support caching?\n SettingCodes.ENABLE_PUSH: 0,\n # These two are taken from h2 for safe defaults\n SettingCodes.MAX_CONCURRENT_STREAMS: 100,\n SettingCodes.MAX_HEADER_LIST_SIZE: 65536,\n },\n )\n\n # Some websites (*cough* Yahoo *cough*) balk at this setting being\n # present in the initial handshake since it's not defined in the original\n # RFC despite the RFC mandating ignoring settings you don't know about.\n del self.state.local_settings[h2.settings.SettingCodes.ENABLE_CONNECT_PROTOCOL]\n\n self.state.initiate_connection()\n self.state.increment_flow_control_window(2 ** 24)\n data_to_send = self.state.data_to_send()\n await self.socket.write(data_to_send, timeout)\n\n @property\n def is_closed(self) -> bool:\n return False\n\n def is_connection_dropped(self) -> bool:\n return self.socket.is_connection_dropped()\n\n async def close(self) -> None:\n await self.socket.close()\n\n async def wait_for_outgoing_flow(self, stream_id: int, timeout: Timeout) -> int:\n \"\"\"\n Returns the maximum allowable outgoing flow for a given stream.\n\n If the allowable flow is zero, then waits on the network until\n WindowUpdated frames have increased the flow rate.\n\n https://tools.ietf.org/html/rfc7540#section-6.9\n \"\"\"\n local_flow = self.state.local_flow_control_window(stream_id)\n connection_flow = self.state.max_outbound_frame_size\n flow = min(local_flow, connection_flow)\n while flow == 0:\n await self.receive_events(timeout)\n local_flow = self.state.local_flow_control_window(stream_id)\n connection_flow = self.state.max_outbound_frame_size\n flow = min(local_flow, connection_flow)\n return flow\n\n async def wait_for_event(self, stream_id: int, timeout: Timeout) -> h2.events.Event:\n \"\"\"\n Returns the next event for a given stream.\n\n If no events are available yet, then waits on the network until\n an event is available.\n \"\"\"\n while not self.events[stream_id]:\n await self.receive_events(timeout)\n return self.events[stream_id].pop(0)\n\n async def receive_events(self, timeout: Timeout) -> None:\n \"\"\"\n Read some data from the network, and update the H2 state.\n \"\"\"\n data = await self.socket.read(self.READ_NUM_BYTES, timeout)\n events = self.state.receive_data(data)\n for event in events:\n event_stream_id = getattr(event, \"stream_id\", 0)\n logger.trace(f\"receive_event stream_id={event_stream_id} event={event!r}\")\n\n if hasattr(event, \"error_code\"):\n raise ProtocolError(event)\n\n if event_stream_id in self.events:\n self.events[event_stream_id].append(event)\n\n data_to_send = self.state.data_to_send()\n await self.socket.write(data_to_send, timeout)\n\n async def send_headers(\n 
self,\n stream_id: int,\n headers: typing.List[typing.Tuple[bytes, bytes]],\n timeout: Timeout,\n ) -> None:\n self.state.send_headers(stream_id, headers)\n self.state.increment_flow_control_window(2 ** 24, stream_id=stream_id)\n data_to_send = self.state.data_to_send()\n await self.socket.write(data_to_send, timeout)\n\n async def send_data(self, stream_id: int, chunk: bytes, timeout: Timeout) -> None:\n self.state.send_data(stream_id, chunk)\n data_to_send = self.state.data_to_send()\n await self.socket.write(data_to_send, timeout)\n\n async def end_stream(self, stream_id: int, timeout: Timeout) -> None:\n self.state.end_stream(stream_id)\n data_to_send = self.state.data_to_send()\n await self.socket.write(data_to_send, timeout)\n\n async def acknowledge_received_data(\n self, stream_id: int, amount: int, timeout: Timeout\n ) -> None:\n self.state.acknowledge_received_data(amount, stream_id)\n data_to_send = self.state.data_to_send()\n await self.socket.write(data_to_send, timeout)\n\n async def close_stream(self, stream_id: int) -> None:\n del self.streams[stream_id]\n del self.events[stream_id]\n\n if not self.streams and self.on_release is not None:\n await self.on_release()\n\n\nclass HTTP2Stream:\n def __init__(self, stream_id: int, connection: HTTP2Connection) -> None:\n self.stream_id = stream_id\n self.connection = connection\n\n async def send(self, request: Request, timeout: Timeout) -> Response:\n # Send the request.\n await self.send_headers(request, timeout)\n await self.send_body(request, timeout)\n\n # Receive the response.\n status_code, headers = await self.receive_response(timeout)\n content = self.body_iter(timeout)\n return Response(\n status_code=status_code,\n http_version=\"HTTP/2\",\n headers=headers,\n content=content,\n on_close=self.close,\n request=request,\n )\n\n async def send_headers(self, request: Request, timeout: Timeout) -> None:\n headers = [\n (b\":method\", request.method.encode(\"ascii\")),\n (b\":authority\", request.url.authority.encode(\"ascii\")),\n (b\":scheme\", request.url.scheme.encode(\"ascii\")),\n (b\":path\", request.url.full_path.encode(\"ascii\")),\n ] + [(k, v) for k, v in request.headers.raw if k != b\"host\"]\n\n logger.trace(\n f\"send_headers \"\n f\"stream_id={self.stream_id} \"\n f\"method={request.method!r} \"\n f\"target={request.url.full_path!r} \"\n f\"headers={headers!r}\"\n )\n await self.connection.send_headers(self.stream_id, headers, timeout)\n\n async def send_body(self, request: Request, timeout: Timeout) -> None:\n logger.trace(f\"send_body stream_id={self.stream_id}\")\n async for data in request.stream():\n while data:\n max_flow = await self.connection.wait_for_outgoing_flow(\n self.stream_id, timeout\n )\n chunk_size = min(len(data), max_flow)\n chunk, data = data[:chunk_size], data[chunk_size:]\n await self.connection.send_data(self.stream_id, chunk, timeout)\n\n await self.connection.end_stream(self.stream_id, timeout)\n\n async def receive_response(\n self, timeout: Timeout\n ) -> typing.Tuple[int, typing.List[typing.Tuple[bytes, bytes]]]:\n \"\"\"\n Read the response status and headers from the network.\n \"\"\"\n while True:\n event = await self.connection.wait_for_event(self.stream_id, timeout)\n if isinstance(event, h2.events.ResponseReceived):\n break\n\n status_code = 200\n headers = []\n for k, v in event.headers:\n if k == b\":status\":\n status_code = int(v.decode(\"ascii\", errors=\"ignore\"))\n elif not k.startswith(b\":\"):\n headers.append((k, v))\n\n return (status_code, headers)\n\n async def 
body_iter(self, timeout: Timeout) -> typing.AsyncIterator[bytes]:\n while True:\n event = await self.connection.wait_for_event(self.stream_id, timeout)\n if isinstance(event, h2.events.DataReceived):\n amount = event.flow_controlled_length\n await self.connection.acknowledge_received_data(\n self.stream_id, amount, timeout\n )\n yield event.data\n elif isinstance(event, (h2.events.StreamEnded, h2.events.StreamReset)):\n break\n\n async def close(self) -> None:\n await self.connection.close_stream(self.stream_id)\n", "path": "httpx/dispatch/http2.py"}], "after_files": [{"content": "import typing\n\nimport h2.connection\nimport h2.events\nfrom h2.config import H2Configuration\nfrom h2.settings import SettingCodes, Settings\n\nfrom ..concurrency.base import (\n BaseEvent,\n BaseSocketStream,\n ConcurrencyBackend,\n lookup_backend,\n)\nfrom ..config import Timeout\nfrom ..exceptions import ProtocolError\nfrom ..models import Request, Response\nfrom ..utils import get_logger\nfrom .base import OpenConnection\n\nlogger = get_logger(__name__)\n\n\nclass HTTP2Connection(OpenConnection):\n READ_NUM_BYTES = 4096\n CONFIG = H2Configuration(validate_inbound_headers=False)\n\n def __init__(\n self,\n socket: BaseSocketStream,\n backend: typing.Union[str, ConcurrencyBackend] = \"auto\",\n on_release: typing.Callable = None,\n ):\n self.socket = socket\n self.backend = lookup_backend(backend)\n self.on_release = on_release\n self.state = h2.connection.H2Connection(config=self.CONFIG)\n\n self.streams = {} # type: typing.Dict[int, HTTP2Stream]\n self.events = {} # type: typing.Dict[int, typing.List[h2.events.Event]]\n\n self.init_started = False\n\n @property\n def is_http2(self) -> bool:\n return True\n\n @property\n def init_complete(self) -> BaseEvent:\n # We do this lazily, to make sure backend autodetection always\n # runs within an async context.\n if not hasattr(self, \"_initialization_complete\"):\n self._initialization_complete = self.backend.create_event()\n return self._initialization_complete\n\n async def send(self, request: Request, timeout: Timeout = None) -> Response:\n timeout = Timeout() if timeout is None else timeout\n\n if not self.init_started:\n # The very first stream is responsible for initiating the connection.\n self.init_started = True\n await self.send_connection_init(timeout)\n stream_id = self.state.get_next_available_stream_id()\n self.init_complete.set()\n else:\n # All other streams need to wait until the connection is established.\n await self.init_complete.wait()\n stream_id = self.state.get_next_available_stream_id()\n\n stream = HTTP2Stream(stream_id=stream_id, connection=self)\n self.streams[stream_id] = stream\n self.events[stream_id] = []\n return await stream.send(request, timeout)\n\n async def send_connection_init(self, timeout: Timeout) -> None:\n \"\"\"\n The HTTP/2 connection requires some initial setup before we can start\n using individual request/response streams on it.\n \"\"\"\n\n # Need to set these manually here instead of manipulating via\n # __setitem__() otherwise the H2Connection will emit SettingsUpdate\n # frames in addition to sending the undesired defaults.\n self.state.local_settings = Settings(\n client=True,\n initial_values={\n # Disable PUSH_PROMISE frames from the server since we don't do anything\n # with them for now. 
Maybe when we support caching?\n SettingCodes.ENABLE_PUSH: 0,\n # These two are taken from h2 for safe defaults\n SettingCodes.MAX_CONCURRENT_STREAMS: 100,\n SettingCodes.MAX_HEADER_LIST_SIZE: 65536,\n },\n )\n\n # Some websites (*cough* Yahoo *cough*) balk at this setting being\n # present in the initial handshake since it's not defined in the original\n # RFC despite the RFC mandating ignoring settings you don't know about.\n del self.state.local_settings[h2.settings.SettingCodes.ENABLE_CONNECT_PROTOCOL]\n\n self.state.initiate_connection()\n self.state.increment_flow_control_window(2 ** 24)\n data_to_send = self.state.data_to_send()\n await self.socket.write(data_to_send, timeout)\n\n @property\n def is_closed(self) -> bool:\n return False\n\n def is_connection_dropped(self) -> bool:\n return self.socket.is_connection_dropped()\n\n async def close(self) -> None:\n await self.socket.close()\n\n async def wait_for_outgoing_flow(self, stream_id: int, timeout: Timeout) -> int:\n \"\"\"\n Returns the maximum allowable outgoing flow for a given stream.\n\n If the allowable flow is zero, then waits on the network until\n WindowUpdated frames have increased the flow rate.\n\n https://tools.ietf.org/html/rfc7540#section-6.9\n \"\"\"\n local_flow = self.state.local_flow_control_window(stream_id)\n connection_flow = self.state.max_outbound_frame_size\n flow = min(local_flow, connection_flow)\n while flow == 0:\n await self.receive_events(timeout)\n local_flow = self.state.local_flow_control_window(stream_id)\n connection_flow = self.state.max_outbound_frame_size\n flow = min(local_flow, connection_flow)\n return flow\n\n async def wait_for_event(self, stream_id: int, timeout: Timeout) -> h2.events.Event:\n \"\"\"\n Returns the next event for a given stream.\n\n If no events are available yet, then waits on the network until\n an event is available.\n \"\"\"\n while not self.events[stream_id]:\n await self.receive_events(timeout)\n return self.events[stream_id].pop(0)\n\n async def receive_events(self, timeout: Timeout) -> None:\n \"\"\"\n Read some data from the network, and update the H2 state.\n \"\"\"\n data = await self.socket.read(self.READ_NUM_BYTES, timeout)\n events = self.state.receive_data(data)\n for event in events:\n event_stream_id = getattr(event, \"stream_id\", 0)\n logger.trace(f\"receive_event stream_id={event_stream_id} event={event!r}\")\n\n if hasattr(event, \"error_code\"):\n raise ProtocolError(event)\n\n if event_stream_id in self.events:\n self.events[event_stream_id].append(event)\n\n data_to_send = self.state.data_to_send()\n await self.socket.write(data_to_send, timeout)\n\n async def send_headers(\n self,\n stream_id: int,\n headers: typing.List[typing.Tuple[bytes, bytes]],\n timeout: Timeout,\n ) -> None:\n self.state.send_headers(stream_id, headers)\n self.state.increment_flow_control_window(2 ** 24, stream_id=stream_id)\n data_to_send = self.state.data_to_send()\n await self.socket.write(data_to_send, timeout)\n\n async def send_data(self, stream_id: int, chunk: bytes, timeout: Timeout) -> None:\n self.state.send_data(stream_id, chunk)\n data_to_send = self.state.data_to_send()\n await self.socket.write(data_to_send, timeout)\n\n async def end_stream(self, stream_id: int, timeout: Timeout) -> None:\n self.state.end_stream(stream_id)\n data_to_send = self.state.data_to_send()\n await self.socket.write(data_to_send, timeout)\n\n async def acknowledge_received_data(\n self, stream_id: int, amount: int, timeout: Timeout\n ) -> None:\n 
self.state.acknowledge_received_data(amount, stream_id)\n data_to_send = self.state.data_to_send()\n await self.socket.write(data_to_send, timeout)\n\n async def close_stream(self, stream_id: int) -> None:\n del self.streams[stream_id]\n del self.events[stream_id]\n\n if not self.streams and self.on_release is not None:\n await self.on_release()\n\n\nclass HTTP2Stream:\n def __init__(self, stream_id: int, connection: HTTP2Connection) -> None:\n self.stream_id = stream_id\n self.connection = connection\n\n async def send(self, request: Request, timeout: Timeout) -> Response:\n # Send the request.\n await self.send_headers(request, timeout)\n await self.send_body(request, timeout)\n\n # Receive the response.\n status_code, headers = await self.receive_response(timeout)\n content = self.body_iter(timeout)\n return Response(\n status_code=status_code,\n http_version=\"HTTP/2\",\n headers=headers,\n content=content,\n on_close=self.close,\n request=request,\n )\n\n async def send_headers(self, request: Request, timeout: Timeout) -> None:\n headers = [\n (b\":method\", request.method.encode(\"ascii\")),\n (b\":authority\", request.url.authority.encode(\"ascii\")),\n (b\":scheme\", request.url.scheme.encode(\"ascii\")),\n (b\":path\", request.url.full_path.encode(\"ascii\")),\n ] + [(k, v) for k, v in request.headers.raw if k != b\"host\"]\n\n logger.trace(\n f\"send_headers \"\n f\"stream_id={self.stream_id} \"\n f\"method={request.method!r} \"\n f\"target={request.url.full_path!r} \"\n f\"headers={headers!r}\"\n )\n await self.connection.send_headers(self.stream_id, headers, timeout)\n\n async def send_body(self, request: Request, timeout: Timeout) -> None:\n logger.trace(f\"send_body stream_id={self.stream_id}\")\n async for data in request.stream():\n while data:\n max_flow = await self.connection.wait_for_outgoing_flow(\n self.stream_id, timeout\n )\n chunk_size = min(len(data), max_flow)\n chunk, data = data[:chunk_size], data[chunk_size:]\n await self.connection.send_data(self.stream_id, chunk, timeout)\n\n await self.connection.end_stream(self.stream_id, timeout)\n\n async def receive_response(\n self, timeout: Timeout\n ) -> typing.Tuple[int, typing.List[typing.Tuple[bytes, bytes]]]:\n \"\"\"\n Read the response status and headers from the network.\n \"\"\"\n while True:\n event = await self.connection.wait_for_event(self.stream_id, timeout)\n if isinstance(event, h2.events.ResponseReceived):\n break\n\n status_code = 200\n headers = []\n for k, v in event.headers:\n if k == b\":status\":\n status_code = int(v.decode(\"ascii\", errors=\"ignore\"))\n elif not k.startswith(b\":\"):\n headers.append((k, v))\n\n return (status_code, headers)\n\n async def body_iter(self, timeout: Timeout) -> typing.AsyncIterator[bytes]:\n while True:\n event = await self.connection.wait_for_event(self.stream_id, timeout)\n if isinstance(event, h2.events.DataReceived):\n amount = event.flow_controlled_length\n await self.connection.acknowledge_received_data(\n self.stream_id, amount, timeout\n )\n yield event.data\n elif isinstance(event, (h2.events.StreamEnded, h2.events.StreamReset)):\n break\n\n async def close(self) -> None:\n await self.connection.close_stream(self.stream_id)\n", "path": "httpx/dispatch/http2.py"}]} | 3,612 | 243 |
gh_patches_debug_20492 | rasdani/github-patches | git_diff | frappe__frappe-23132 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add jitter on scheduled jobs
Hourly and daily "long" processes, if all started at once, can cause a sudden increase in workload if you have many sites/benches.
Adding simple jitter to the scheduled time can lessen the impact of such issues. Jitter is a common pattern for situations where the frequency itself becomes a problem: e.g. gunicorn adds jitter to avoid restarting all workers at the same time, profilers add jitter to avoid amplifying some pattern of repeated work, and retry/backoff implementations also use it to avoid creating such patterns.
Possible implementation: when importing scheduled job types, add some random delay to the cron schedule. E.g. daily jobs would start in the range of 12:00-12:15 AM instead of all starting at 12:00 AM.
Cons: some jobs are required to be executed at specific times, e.g. birthday reminders. Adding a negative offset can therefore introduce bugs for them; a positive offset, however, should be fine AFAIK.
--- END ISSUE ---
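For reference, a minimal sketch of the proposed positive-only jitter, assuming `croniter` for schedule computation; the helper name is illustrative, not the actual Frappe change.

```python
# Sketch only: derive the next run from the cron expression, then push it
# forward by a small random amount so many sites don't all fire at once.
from datetime import datetime, timedelta
from random import randint

from croniter import croniter


def next_execution_with_jitter(cron_format: str, last_execution: datetime,
                               max_jitter_seconds: int = 600) -> datetime:
    base = croniter(cron_format, last_execution).get_next(datetime)
    # Positive-only offset: time-sensitive jobs (e.g. birthday reminders)
    # never run earlier than scheduled, only slightly later.
    return base + timedelta(seconds=randint(0, max_jitter_seconds))
```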
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/core/doctype/scheduled_job_type/scheduled_job_type.py`
Content:
```
1 # Copyright (c) 2021, Frappe Technologies and contributors
2 # License: MIT. See LICENSE
3
4 import json
5 from datetime import datetime
6
7 import click
8 from croniter import croniter
9
10 import frappe
11 from frappe.model.document import Document
12 from frappe.utils import get_datetime, now_datetime
13 from frappe.utils.background_jobs import enqueue, is_job_enqueued
14
15
16 class ScheduledJobType(Document):
17 # begin: auto-generated types
18 # This code is auto-generated. Do not modify anything in this block.
19
20 from typing import TYPE_CHECKING
21
22 if TYPE_CHECKING:
23 from frappe.types import DF
24
25 create_log: DF.Check
26 cron_format: DF.Data | None
27 frequency: DF.Literal[
28 "All",
29 "Hourly",
30 "Hourly Long",
31 "Daily",
32 "Daily Long",
33 "Weekly",
34 "Weekly Long",
35 "Monthly",
36 "Monthly Long",
37 "Cron",
38 "Yearly",
39 "Annual",
40 ]
41 last_execution: DF.Datetime | None
42 method: DF.Data
43 next_execution: DF.Datetime | None
44 server_script: DF.Link | None
45 stopped: DF.Check
46 # end: auto-generated types
47 def autoname(self):
48 self.name = ".".join(self.method.split(".")[-2:])
49
50 def validate(self):
51 if self.frequency != "All":
52 # force logging for all events other than continuous ones (ALL)
53 self.create_log = 1
54
55 def enqueue(self, force=False) -> bool:
56 # enqueue event if last execution is done
57 if self.is_event_due() or force:
58 if not self.is_job_in_queue():
59 enqueue(
60 "frappe.core.doctype.scheduled_job_type.scheduled_job_type.run_scheduled_job",
61 queue=self.get_queue_name(),
62 job_type=self.method,
63 job_id=self.rq_job_id,
64 )
65 return True
66 else:
67 frappe.logger("scheduler").error(
68 f"Skipped queueing {self.method} because it was found in queue for {frappe.local.site}"
69 )
70
71 return False
72
73 def is_event_due(self, current_time=None):
74 """Return true if event is due based on time lapsed since last execution"""
75 # if the next scheduled event is before NOW, then its due!
76 return self.get_next_execution() <= (current_time or now_datetime())
77
78 def is_job_in_queue(self) -> bool:
79 return is_job_enqueued(self.rq_job_id)
80
81 @property
82 def rq_job_id(self):
83 """Unique ID created to deduplicate jobs with single RQ call."""
84 return f"scheduled_job::{self.method}"
85
86 @property
87 def next_execution(self):
88 return self.get_next_execution()
89
90 def get_next_execution(self):
91 CRON_MAP = {
92 "Yearly": "0 0 1 1 *",
93 "Annual": "0 0 1 1 *",
94 "Monthly": "0 0 1 * *",
95 "Monthly Long": "0 0 1 * *",
96 "Weekly": "0 0 * * 0",
97 "Weekly Long": "0 0 * * 0",
98 "Daily": "0 0 * * *",
99 "Daily Long": "0 0 * * *",
100 "Hourly": "0 * * * *",
101 "Hourly Long": "0 * * * *",
102 "All": f"*/{(frappe.get_conf().scheduler_interval or 240) // 60} * * * *",
103 }
104
105 if not self.cron_format:
106 self.cron_format = CRON_MAP[self.frequency]
107
108 # If this is a cold start then last_execution will not be set.
109 # Creation is set as fallback because if very old fallback is set job might trigger
110 # immediately, even when it's meant to be daily.
111 # A dynamic fallback like current time might miss the scheduler interval and job will never start.
112 last_execution = get_datetime(self.last_execution or self.creation)
113 return croniter(self.cron_format, last_execution).get_next(datetime)
114
115 def execute(self):
116 self.scheduler_log = None
117 try:
118 self.log_status("Start")
119 if self.server_script:
120 script_name = frappe.db.get_value("Server Script", self.server_script)
121 if script_name:
122 frappe.get_doc("Server Script", script_name).execute_scheduled_method()
123 else:
124 frappe.get_attr(self.method)()
125 frappe.db.commit()
126 self.log_status("Complete")
127 except Exception:
128 frappe.db.rollback()
129 self.log_status("Failed")
130
131 def log_status(self, status):
132 # log file
133 frappe.logger("scheduler").info(f"Scheduled Job {status}: {self.method} for {frappe.local.site}")
134 self.update_scheduler_log(status)
135
136 def update_scheduler_log(self, status):
137 if not self.create_log:
138 # self.get_next_execution will work properly iff self.last_execution is properly set
139 if self.frequency == "All" and status == "Start":
140 self.db_set("last_execution", now_datetime(), update_modified=False)
141 frappe.db.commit()
142 return
143 if not self.scheduler_log:
144 self.scheduler_log = frappe.get_doc(
145 dict(doctype="Scheduled Job Log", scheduled_job_type=self.name)
146 ).insert(ignore_permissions=True)
147 self.scheduler_log.db_set("status", status)
148 if frappe.debug_log:
149 self.scheduler_log.db_set("debug_log", "\n".join(frappe.debug_log))
150 if status == "Failed":
151 self.scheduler_log.db_set("details", frappe.get_traceback())
152 if status == "Start":
153 self.db_set("last_execution", now_datetime(), update_modified=False)
154 frappe.db.commit()
155
156 def get_queue_name(self):
157 return "long" if ("Long" in self.frequency) else "default"
158
159 def on_trash(self):
160 frappe.db.delete("Scheduled Job Log", {"scheduled_job_type": self.name})
161
162
163 @frappe.whitelist()
164 def execute_event(doc: str):
165 frappe.only_for("System Manager")
166 doc = json.loads(doc)
167 frappe.get_doc("Scheduled Job Type", doc.get("name")).enqueue(force=True)
168 return doc
169
170
171 def run_scheduled_job(job_type: str):
172 """This is a wrapper function that runs a hooks.scheduler_events method"""
173 try:
174 frappe.get_doc("Scheduled Job Type", dict(method=job_type)).execute()
175 except Exception:
176 print(frappe.get_traceback())
177
178
179 def sync_jobs(hooks: dict = None):
180 frappe.reload_doc("core", "doctype", "scheduled_job_type")
181 scheduler_events = hooks or frappe.get_hooks("scheduler_events")
182 all_events = insert_events(scheduler_events)
183 clear_events(all_events)
184
185
186 def insert_events(scheduler_events: dict) -> list:
187 cron_jobs, event_jobs = [], []
188 for event_type in scheduler_events:
189 events = scheduler_events.get(event_type)
190 if isinstance(events, dict):
191 cron_jobs += insert_cron_jobs(events)
192 else:
193 # hourly, daily etc
194 event_jobs += insert_event_jobs(events, event_type)
195 return cron_jobs + event_jobs
196
197
198 def insert_cron_jobs(events: dict) -> list:
199 cron_jobs = []
200 for cron_format in events:
201 for event in events.get(cron_format):
202 cron_jobs.append(event)
203 insert_single_event("Cron", event, cron_format)
204 return cron_jobs
205
206
207 def insert_event_jobs(events: list, event_type: str) -> list:
208 event_jobs = []
209 for event in events:
210 event_jobs.append(event)
211 frequency = event_type.replace("_", " ").title()
212 insert_single_event(frequency, event)
213 return event_jobs
214
215
216 def insert_single_event(frequency: str, event: str, cron_format: str = None):
217 cron_expr = {"cron_format": cron_format} if cron_format else {}
218
219 try:
220 frappe.get_attr(event)
221 except Exception as e:
222 click.secho(f"{event} is not a valid method: {e}", fg="yellow")
223
224 doc = frappe.get_doc(
225 {
226 "doctype": "Scheduled Job Type",
227 "method": event,
228 "cron_format": cron_format,
229 "frequency": frequency,
230 }
231 )
232
233 if not frappe.db.exists(
234 "Scheduled Job Type", {"method": event, "frequency": frequency, **cron_expr}
235 ):
236 savepoint = "scheduled_job_type_creation"
237 try:
238 frappe.db.savepoint(savepoint)
239 doc.insert()
240 except frappe.DuplicateEntryError:
241 frappe.db.rollback(save_point=savepoint)
242 doc.delete()
243 doc.insert()
244
245
246 def clear_events(all_events: list):
247 for event in frappe.get_all("Scheduled Job Type", fields=["name", "method", "server_script"]):
248 is_server_script = event.server_script
249 is_defined_in_hooks = event.method in all_events
250
251 if not (is_defined_in_hooks or is_server_script):
252 frappe.delete_doc("Scheduled Job Type", event.name)
253
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py b/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py
--- a/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py
+++ b/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py
@@ -2,7 +2,8 @@
# License: MIT. See LICENSE
import json
-from datetime import datetime
+from datetime import datetime, timedelta
+from random import randint
import click
from croniter import croniter
@@ -110,7 +111,12 @@
# immediately, even when it's meant to be daily.
# A dynamic fallback like current time might miss the scheduler interval and job will never start.
last_execution = get_datetime(self.last_execution or self.creation)
- return croniter(self.cron_format, last_execution).get_next(datetime)
+ next_execution = croniter(self.cron_format, last_execution).get_next(datetime)
+
+ jitter = 0
+ if self.frequency in ("Hourly Long", "Daily Long"):
+ jitter = randint(1, 600)
+ return next_execution + timedelta(seconds=jitter)
def execute(self):
self.scheduler_log = None
| {"golden_diff": "diff --git a/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py b/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py\n--- a/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py\n+++ b/frappe/core/doctype/scheduled_job_type/scheduled_job_type.py\n@@ -2,7 +2,8 @@\n # License: MIT. See LICENSE\n \n import json\n-from datetime import datetime\n+from datetime import datetime, timedelta\n+from random import randint\n \n import click\n from croniter import croniter\n@@ -110,7 +111,12 @@\n \t\t# immediately, even when it's meant to be daily.\n \t\t# A dynamic fallback like current time might miss the scheduler interval and job will never start.\n \t\tlast_execution = get_datetime(self.last_execution or self.creation)\n-\t\treturn croniter(self.cron_format, last_execution).get_next(datetime)\n+\t\tnext_execution = croniter(self.cron_format, last_execution).get_next(datetime)\n+\n+\t\tjitter = 0\n+\t\tif self.frequency in (\"Hourly Long\", \"Daily Long\"):\n+\t\t\tjitter = randint(1, 600)\n+\t\treturn next_execution + timedelta(seconds=jitter)\n \n \tdef execute(self):\n \t\tself.scheduler_log = None\n", "issue": "Add jitter on scheduled jobs\nhourly, daily long processes if all started at once can cause sudden increase in workload if you have many sites/benches. \r\n\r\n\r\nAdding simple jitter to scheduled time can lessen the impact of such issues. Jitter is common pattern used for solving problems with \"frequency\" becomes a problem. E.g. gunicorn adds jitter to avoid restarting all workers at same time, profilers add jitter to avoid amplifying some pattern of repeated work. retry/backoff implementations also use to avoid creating patterns.\r\n\r\n\r\nPossible implementation: When importing scheduled job types add some random delays in cron. E.g. daily jobs will start in the range of 12:00-12:15 AM instead of all starting at 12:00 AM.\r\n\r\n\r\nCons: Some jobs are required to be executed at specific times e.g. birthday reminders. So adding negative offset can introduce bugs for them, positive offset however should be fine AFAIK. \n", "before_files": [{"content": "# Copyright (c) 2021, Frappe Technologies and contributors\n# License: MIT. See LICENSE\n\nimport json\nfrom datetime import datetime\n\nimport click\nfrom croniter import croniter\n\nimport frappe\nfrom frappe.model.document import Document\nfrom frappe.utils import get_datetime, now_datetime\nfrom frappe.utils.background_jobs import enqueue, is_job_enqueued\n\n\nclass ScheduledJobType(Document):\n\t# begin: auto-generated types\n\t# This code is auto-generated. 
Do not modify anything in this block.\n\n\tfrom typing import TYPE_CHECKING\n\n\tif TYPE_CHECKING:\n\t\tfrom frappe.types import DF\n\n\t\tcreate_log: DF.Check\n\t\tcron_format: DF.Data | None\n\t\tfrequency: DF.Literal[\n\t\t\t\"All\",\n\t\t\t\"Hourly\",\n\t\t\t\"Hourly Long\",\n\t\t\t\"Daily\",\n\t\t\t\"Daily Long\",\n\t\t\t\"Weekly\",\n\t\t\t\"Weekly Long\",\n\t\t\t\"Monthly\",\n\t\t\t\"Monthly Long\",\n\t\t\t\"Cron\",\n\t\t\t\"Yearly\",\n\t\t\t\"Annual\",\n\t\t]\n\t\tlast_execution: DF.Datetime | None\n\t\tmethod: DF.Data\n\t\tnext_execution: DF.Datetime | None\n\t\tserver_script: DF.Link | None\n\t\tstopped: DF.Check\n\t# end: auto-generated types\n\tdef autoname(self):\n\t\tself.name = \".\".join(self.method.split(\".\")[-2:])\n\n\tdef validate(self):\n\t\tif self.frequency != \"All\":\n\t\t\t# force logging for all events other than continuous ones (ALL)\n\t\t\tself.create_log = 1\n\n\tdef enqueue(self, force=False) -> bool:\n\t\t# enqueue event if last execution is done\n\t\tif self.is_event_due() or force:\n\t\t\tif not self.is_job_in_queue():\n\t\t\t\tenqueue(\n\t\t\t\t\t\"frappe.core.doctype.scheduled_job_type.scheduled_job_type.run_scheduled_job\",\n\t\t\t\t\tqueue=self.get_queue_name(),\n\t\t\t\t\tjob_type=self.method,\n\t\t\t\t\tjob_id=self.rq_job_id,\n\t\t\t\t)\n\t\t\t\treturn True\n\t\t\telse:\n\t\t\t\tfrappe.logger(\"scheduler\").error(\n\t\t\t\t\tf\"Skipped queueing {self.method} because it was found in queue for {frappe.local.site}\"\n\t\t\t\t)\n\n\t\treturn False\n\n\tdef is_event_due(self, current_time=None):\n\t\t\"\"\"Return true if event is due based on time lapsed since last execution\"\"\"\n\t\t# if the next scheduled event is before NOW, then its due!\n\t\treturn self.get_next_execution() <= (current_time or now_datetime())\n\n\tdef is_job_in_queue(self) -> bool:\n\t\treturn is_job_enqueued(self.rq_job_id)\n\n\t@property\n\tdef rq_job_id(self):\n\t\t\"\"\"Unique ID created to deduplicate jobs with single RQ call.\"\"\"\n\t\treturn f\"scheduled_job::{self.method}\"\n\n\t@property\n\tdef next_execution(self):\n\t\treturn self.get_next_execution()\n\n\tdef get_next_execution(self):\n\t\tCRON_MAP = {\n\t\t\t\"Yearly\": \"0 0 1 1 *\",\n\t\t\t\"Annual\": \"0 0 1 1 *\",\n\t\t\t\"Monthly\": \"0 0 1 * *\",\n\t\t\t\"Monthly Long\": \"0 0 1 * *\",\n\t\t\t\"Weekly\": \"0 0 * * 0\",\n\t\t\t\"Weekly Long\": \"0 0 * * 0\",\n\t\t\t\"Daily\": \"0 0 * * *\",\n\t\t\t\"Daily Long\": \"0 0 * * *\",\n\t\t\t\"Hourly\": \"0 * * * *\",\n\t\t\t\"Hourly Long\": \"0 * * * *\",\n\t\t\t\"All\": f\"*/{(frappe.get_conf().scheduler_interval or 240) // 60} * * * *\",\n\t\t}\n\n\t\tif not self.cron_format:\n\t\t\tself.cron_format = CRON_MAP[self.frequency]\n\n\t\t# If this is a cold start then last_execution will not be set.\n\t\t# Creation is set as fallback because if very old fallback is set job might trigger\n\t\t# immediately, even when it's meant to be daily.\n\t\t# A dynamic fallback like current time might miss the scheduler interval and job will never start.\n\t\tlast_execution = get_datetime(self.last_execution or self.creation)\n\t\treturn croniter(self.cron_format, last_execution).get_next(datetime)\n\n\tdef execute(self):\n\t\tself.scheduler_log = None\n\t\ttry:\n\t\t\tself.log_status(\"Start\")\n\t\t\tif self.server_script:\n\t\t\t\tscript_name = frappe.db.get_value(\"Server Script\", self.server_script)\n\t\t\t\tif script_name:\n\t\t\t\t\tfrappe.get_doc(\"Server Script\", 
script_name).execute_scheduled_method()\n\t\t\telse:\n\t\t\t\tfrappe.get_attr(self.method)()\n\t\t\tfrappe.db.commit()\n\t\t\tself.log_status(\"Complete\")\n\t\texcept Exception:\n\t\t\tfrappe.db.rollback()\n\t\t\tself.log_status(\"Failed\")\n\n\tdef log_status(self, status):\n\t\t# log file\n\t\tfrappe.logger(\"scheduler\").info(f\"Scheduled Job {status}: {self.method} for {frappe.local.site}\")\n\t\tself.update_scheduler_log(status)\n\n\tdef update_scheduler_log(self, status):\n\t\tif not self.create_log:\n\t\t\t# self.get_next_execution will work properly iff self.last_execution is properly set\n\t\t\tif self.frequency == \"All\" and status == \"Start\":\n\t\t\t\tself.db_set(\"last_execution\", now_datetime(), update_modified=False)\n\t\t\t\tfrappe.db.commit()\n\t\t\treturn\n\t\tif not self.scheduler_log:\n\t\t\tself.scheduler_log = frappe.get_doc(\n\t\t\t\tdict(doctype=\"Scheduled Job Log\", scheduled_job_type=self.name)\n\t\t\t).insert(ignore_permissions=True)\n\t\tself.scheduler_log.db_set(\"status\", status)\n\t\tif frappe.debug_log:\n\t\t\tself.scheduler_log.db_set(\"debug_log\", \"\\n\".join(frappe.debug_log))\n\t\tif status == \"Failed\":\n\t\t\tself.scheduler_log.db_set(\"details\", frappe.get_traceback())\n\t\tif status == \"Start\":\n\t\t\tself.db_set(\"last_execution\", now_datetime(), update_modified=False)\n\t\tfrappe.db.commit()\n\n\tdef get_queue_name(self):\n\t\treturn \"long\" if (\"Long\" in self.frequency) else \"default\"\n\n\tdef on_trash(self):\n\t\tfrappe.db.delete(\"Scheduled Job Log\", {\"scheduled_job_type\": self.name})\n\n\[email protected]()\ndef execute_event(doc: str):\n\tfrappe.only_for(\"System Manager\")\n\tdoc = json.loads(doc)\n\tfrappe.get_doc(\"Scheduled Job Type\", doc.get(\"name\")).enqueue(force=True)\n\treturn doc\n\n\ndef run_scheduled_job(job_type: str):\n\t\"\"\"This is a wrapper function that runs a hooks.scheduler_events method\"\"\"\n\ttry:\n\t\tfrappe.get_doc(\"Scheduled Job Type\", dict(method=job_type)).execute()\n\texcept Exception:\n\t\tprint(frappe.get_traceback())\n\n\ndef sync_jobs(hooks: dict = None):\n\tfrappe.reload_doc(\"core\", \"doctype\", \"scheduled_job_type\")\n\tscheduler_events = hooks or frappe.get_hooks(\"scheduler_events\")\n\tall_events = insert_events(scheduler_events)\n\tclear_events(all_events)\n\n\ndef insert_events(scheduler_events: dict) -> list:\n\tcron_jobs, event_jobs = [], []\n\tfor event_type in scheduler_events:\n\t\tevents = scheduler_events.get(event_type)\n\t\tif isinstance(events, dict):\n\t\t\tcron_jobs += insert_cron_jobs(events)\n\t\telse:\n\t\t\t# hourly, daily etc\n\t\t\tevent_jobs += insert_event_jobs(events, event_type)\n\treturn cron_jobs + event_jobs\n\n\ndef insert_cron_jobs(events: dict) -> list:\n\tcron_jobs = []\n\tfor cron_format in events:\n\t\tfor event in events.get(cron_format):\n\t\t\tcron_jobs.append(event)\n\t\t\tinsert_single_event(\"Cron\", event, cron_format)\n\treturn cron_jobs\n\n\ndef insert_event_jobs(events: list, event_type: str) -> list:\n\tevent_jobs = []\n\tfor event in events:\n\t\tevent_jobs.append(event)\n\t\tfrequency = event_type.replace(\"_\", \" \").title()\n\t\tinsert_single_event(frequency, event)\n\treturn event_jobs\n\n\ndef insert_single_event(frequency: str, event: str, cron_format: str = None):\n\tcron_expr = {\"cron_format\": cron_format} if cron_format else {}\n\n\ttry:\n\t\tfrappe.get_attr(event)\n\texcept Exception as e:\n\t\tclick.secho(f\"{event} is not a valid method: {e}\", fg=\"yellow\")\n\n\tdoc = frappe.get_doc(\n\t\t{\n\t\t\t\"doctype\": 
\"Scheduled Job Type\",\n\t\t\t\"method\": event,\n\t\t\t\"cron_format\": cron_format,\n\t\t\t\"frequency\": frequency,\n\t\t}\n\t)\n\n\tif not frappe.db.exists(\n\t\t\"Scheduled Job Type\", {\"method\": event, \"frequency\": frequency, **cron_expr}\n\t):\n\t\tsavepoint = \"scheduled_job_type_creation\"\n\t\ttry:\n\t\t\tfrappe.db.savepoint(savepoint)\n\t\t\tdoc.insert()\n\t\texcept frappe.DuplicateEntryError:\n\t\t\tfrappe.db.rollback(save_point=savepoint)\n\t\t\tdoc.delete()\n\t\t\tdoc.insert()\n\n\ndef clear_events(all_events: list):\n\tfor event in frappe.get_all(\"Scheduled Job Type\", fields=[\"name\", \"method\", \"server_script\"]):\n\t\tis_server_script = event.server_script\n\t\tis_defined_in_hooks = event.method in all_events\n\n\t\tif not (is_defined_in_hooks or is_server_script):\n\t\t\tfrappe.delete_doc(\"Scheduled Job Type\", event.name)\n", "path": "frappe/core/doctype/scheduled_job_type/scheduled_job_type.py"}], "after_files": [{"content": "# Copyright (c) 2021, Frappe Technologies and contributors\n# License: MIT. See LICENSE\n\nimport json\nfrom datetime import datetime, timedelta\nfrom random import randint\n\nimport click\nfrom croniter import croniter\n\nimport frappe\nfrom frappe.model.document import Document\nfrom frappe.utils import get_datetime, now_datetime\nfrom frappe.utils.background_jobs import enqueue, is_job_enqueued\n\n\nclass ScheduledJobType(Document):\n\t# begin: auto-generated types\n\t# This code is auto-generated. Do not modify anything in this block.\n\n\tfrom typing import TYPE_CHECKING\n\n\tif TYPE_CHECKING:\n\t\tfrom frappe.types import DF\n\n\t\tcreate_log: DF.Check\n\t\tcron_format: DF.Data | None\n\t\tfrequency: DF.Literal[\n\t\t\t\"All\",\n\t\t\t\"Hourly\",\n\t\t\t\"Hourly Long\",\n\t\t\t\"Daily\",\n\t\t\t\"Daily Long\",\n\t\t\t\"Weekly\",\n\t\t\t\"Weekly Long\",\n\t\t\t\"Monthly\",\n\t\t\t\"Monthly Long\",\n\t\t\t\"Cron\",\n\t\t\t\"Yearly\",\n\t\t\t\"Annual\",\n\t\t]\n\t\tlast_execution: DF.Datetime | None\n\t\tmethod: DF.Data\n\t\tnext_execution: DF.Datetime | None\n\t\tserver_script: DF.Link | None\n\t\tstopped: DF.Check\n\t# end: auto-generated types\n\tdef autoname(self):\n\t\tself.name = \".\".join(self.method.split(\".\")[-2:])\n\n\tdef validate(self):\n\t\tif self.frequency != \"All\":\n\t\t\t# force logging for all events other than continuous ones (ALL)\n\t\t\tself.create_log = 1\n\n\tdef enqueue(self, force=False) -> bool:\n\t\t# enqueue event if last execution is done\n\t\tif self.is_event_due() or force:\n\t\t\tif not self.is_job_in_queue():\n\t\t\t\tenqueue(\n\t\t\t\t\t\"frappe.core.doctype.scheduled_job_type.scheduled_job_type.run_scheduled_job\",\n\t\t\t\t\tqueue=self.get_queue_name(),\n\t\t\t\t\tjob_type=self.method,\n\t\t\t\t\tjob_id=self.rq_job_id,\n\t\t\t\t)\n\t\t\t\treturn True\n\t\t\telse:\n\t\t\t\tfrappe.logger(\"scheduler\").error(\n\t\t\t\t\tf\"Skipped queueing {self.method} because it was found in queue for {frappe.local.site}\"\n\t\t\t\t)\n\n\t\treturn False\n\n\tdef is_event_due(self, current_time=None):\n\t\t\"\"\"Return true if event is due based on time lapsed since last execution\"\"\"\n\t\t# if the next scheduled event is before NOW, then its due!\n\t\treturn self.get_next_execution() <= (current_time or now_datetime())\n\n\tdef is_job_in_queue(self) -> bool:\n\t\treturn is_job_enqueued(self.rq_job_id)\n\n\t@property\n\tdef rq_job_id(self):\n\t\t\"\"\"Unique ID created to deduplicate jobs with single RQ call.\"\"\"\n\t\treturn f\"scheduled_job::{self.method}\"\n\n\t@property\n\tdef 
next_execution(self):\n\t\treturn self.get_next_execution()\n\n\tdef get_next_execution(self):\n\t\tCRON_MAP = {\n\t\t\t\"Yearly\": \"0 0 1 1 *\",\n\t\t\t\"Annual\": \"0 0 1 1 *\",\n\t\t\t\"Monthly\": \"0 0 1 * *\",\n\t\t\t\"Monthly Long\": \"0 0 1 * *\",\n\t\t\t\"Weekly\": \"0 0 * * 0\",\n\t\t\t\"Weekly Long\": \"0 0 * * 0\",\n\t\t\t\"Daily\": \"0 0 * * *\",\n\t\t\t\"Daily Long\": \"0 0 * * *\",\n\t\t\t\"Hourly\": \"0 * * * *\",\n\t\t\t\"Hourly Long\": \"0 * * * *\",\n\t\t\t\"All\": f\"*/{(frappe.get_conf().scheduler_interval or 240) // 60} * * * *\",\n\t\t}\n\n\t\tif not self.cron_format:\n\t\t\tself.cron_format = CRON_MAP[self.frequency]\n\n\t\t# If this is a cold start then last_execution will not be set.\n\t\t# Creation is set as fallback because if very old fallback is set job might trigger\n\t\t# immediately, even when it's meant to be daily.\n\t\t# A dynamic fallback like current time might miss the scheduler interval and job will never start.\n\t\tlast_execution = get_datetime(self.last_execution or self.creation)\n\t\tnext_execution = croniter(self.cron_format, last_execution).get_next(datetime)\n\n\t\tjitter = 0\n\t\tif self.frequency in (\"Hourly Long\", \"Daily Long\"):\n\t\t\tjitter = randint(1, 600)\n\t\treturn next_execution + timedelta(seconds=jitter)\n\n\tdef execute(self):\n\t\tself.scheduler_log = None\n\t\ttry:\n\t\t\tself.log_status(\"Start\")\n\t\t\tif self.server_script:\n\t\t\t\tscript_name = frappe.db.get_value(\"Server Script\", self.server_script)\n\t\t\t\tif script_name:\n\t\t\t\t\tfrappe.get_doc(\"Server Script\", script_name).execute_scheduled_method()\n\t\t\telse:\n\t\t\t\tfrappe.get_attr(self.method)()\n\t\t\tfrappe.db.commit()\n\t\t\tself.log_status(\"Complete\")\n\t\texcept Exception:\n\t\t\tfrappe.db.rollback()\n\t\t\tself.log_status(\"Failed\")\n\n\tdef log_status(self, status):\n\t\t# log file\n\t\tfrappe.logger(\"scheduler\").info(f\"Scheduled Job {status}: {self.method} for {frappe.local.site}\")\n\t\tself.update_scheduler_log(status)\n\n\tdef update_scheduler_log(self, status):\n\t\tif not self.create_log:\n\t\t\t# self.get_next_execution will work properly iff self.last_execution is properly set\n\t\t\tif self.frequency == \"All\" and status == \"Start\":\n\t\t\t\tself.db_set(\"last_execution\", now_datetime(), update_modified=False)\n\t\t\t\tfrappe.db.commit()\n\t\t\treturn\n\t\tif not self.scheduler_log:\n\t\t\tself.scheduler_log = frappe.get_doc(\n\t\t\t\tdict(doctype=\"Scheduled Job Log\", scheduled_job_type=self.name)\n\t\t\t).insert(ignore_permissions=True)\n\t\tself.scheduler_log.db_set(\"status\", status)\n\t\tif status == \"Failed\":\n\t\t\tself.scheduler_log.db_set(\"details\", frappe.get_traceback())\n\t\tif status == \"Start\":\n\t\t\tself.db_set(\"last_execution\", now_datetime(), update_modified=False)\n\t\tfrappe.db.commit()\n\n\tdef get_queue_name(self):\n\t\treturn \"long\" if (\"Long\" in self.frequency) else \"default\"\n\n\tdef on_trash(self):\n\t\tfrappe.db.delete(\"Scheduled Job Log\", {\"scheduled_job_type\": self.name})\n\n\[email protected]()\ndef execute_event(doc: str):\n\tfrappe.only_for(\"System Manager\")\n\tdoc = json.loads(doc)\n\tfrappe.get_doc(\"Scheduled Job Type\", doc.get(\"name\")).enqueue(force=True)\n\treturn doc\n\n\ndef run_scheduled_job(job_type: str):\n\t\"\"\"This is a wrapper function that runs a hooks.scheduler_events method\"\"\"\n\ttry:\n\t\tfrappe.get_doc(\"Scheduled Job Type\", dict(method=job_type)).execute()\n\texcept Exception:\n\t\tprint(frappe.get_traceback())\n\n\ndef sync_jobs(hooks: dict = 
None):\n\tfrappe.reload_doc(\"core\", \"doctype\", \"scheduled_job_type\")\n\tscheduler_events = hooks or frappe.get_hooks(\"scheduler_events\")\n\tall_events = insert_events(scheduler_events)\n\tclear_events(all_events)\n\n\ndef insert_events(scheduler_events: dict) -> list:\n\tcron_jobs, event_jobs = [], []\n\tfor event_type in scheduler_events:\n\t\tevents = scheduler_events.get(event_type)\n\t\tif isinstance(events, dict):\n\t\t\tcron_jobs += insert_cron_jobs(events)\n\t\telse:\n\t\t\t# hourly, daily etc\n\t\t\tevent_jobs += insert_event_jobs(events, event_type)\n\treturn cron_jobs + event_jobs\n\n\ndef insert_cron_jobs(events: dict) -> list:\n\tcron_jobs = []\n\tfor cron_format in events:\n\t\tfor event in events.get(cron_format):\n\t\t\tcron_jobs.append(event)\n\t\t\tinsert_single_event(\"Cron\", event, cron_format)\n\treturn cron_jobs\n\n\ndef insert_event_jobs(events: list, event_type: str) -> list:\n\tevent_jobs = []\n\tfor event in events:\n\t\tevent_jobs.append(event)\n\t\tfrequency = event_type.replace(\"_\", \" \").title()\n\t\tinsert_single_event(frequency, event)\n\treturn event_jobs\n\n\ndef insert_single_event(frequency: str, event: str, cron_format: str = None):\n\tcron_expr = {\"cron_format\": cron_format} if cron_format else {}\n\n\ttry:\n\t\tfrappe.get_attr(event)\n\texcept Exception as e:\n\t\tclick.secho(f\"{event} is not a valid method: {e}\", fg=\"yellow\")\n\n\tdoc = frappe.get_doc(\n\t\t{\n\t\t\t\"doctype\": \"Scheduled Job Type\",\n\t\t\t\"method\": event,\n\t\t\t\"cron_format\": cron_format,\n\t\t\t\"frequency\": frequency,\n\t\t}\n\t)\n\n\tif not frappe.db.exists(\n\t\t\"Scheduled Job Type\", {\"method\": event, \"frequency\": frequency, **cron_expr}\n\t):\n\t\tsavepoint = \"scheduled_job_type_creation\"\n\t\ttry:\n\t\t\tfrappe.db.savepoint(savepoint)\n\t\t\tdoc.insert()\n\t\texcept frappe.DuplicateEntryError:\n\t\t\tfrappe.db.rollback(save_point=savepoint)\n\t\t\tdoc.delete()\n\t\t\tdoc.insert()\n\n\ndef clear_events(all_events: list):\n\tfor event in frappe.get_all(\"Scheduled Job Type\", fields=[\"name\", \"method\", \"server_script\"]):\n\t\tis_server_script = event.server_script\n\t\tis_defined_in_hooks = event.method in all_events\n\n\t\tif not (is_defined_in_hooks or is_server_script):\n\t\t\tfrappe.delete_doc(\"Scheduled Job Type\", event.name)\n", "path": "frappe/core/doctype/scheduled_job_type/scheduled_job_type.py"}]} | 3,176 | 278 |
gh_patches_debug_33430 | rasdani/github-patches | git_diff | Mailu__Mailu-1268 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make roundcube log to the console
Roundcube is currently hard to debug because it writes its logs to a special folder (/var/www/html/logs). It should log to stdout/stderr instead.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `webmails/roundcube/start.py`
Content:
```
1 #!/usr/bin/python3
2
3 import os
4 import logging as log
5 import sys
6 from socrate import conf
7
8 log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "WARNING"))
9
10 os.environ["MAX_FILESIZE"] = str(int(int(os.environ.get("MESSAGE_SIZE_LIMIT"))*0.66/1048576))
11
12 conf.jinja("/php.ini", os.environ, "/usr/local/etc/php/conf.d/roundcube.ini")
13
14 # Fix some permissions
15 os.system("mkdir -p /data/gpg")
16 os.system("chown -R www-data:www-data /data")
17
18 # Run apache
19 os.execv("/usr/local/bin/apache2-foreground", ["apache2-foreground"])
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/webmails/roundcube/start.py b/webmails/roundcube/start.py
--- a/webmails/roundcube/start.py
+++ b/webmails/roundcube/start.py
@@ -4,16 +4,61 @@
import logging as log
import sys
from socrate import conf
+import subprocess
log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "WARNING"))
os.environ["MAX_FILESIZE"] = str(int(int(os.environ.get("MESSAGE_SIZE_LIMIT"))*0.66/1048576))
+db_flavor=os.environ.get("ROUNDCUBE_DB_FLAVOR",os.environ.get("DB_FLAVOR","sqlite"))
+if db_flavor=="sqlite":
+ os.environ["DB_DSNW"]="sqlite:////data/roundcube.db"
+elif db_flavor=="mysql":
+ os.environ["DB_DSNW"]="mysql://%s:%s@%s/%s" % (
+ os.environ.get("ROUNDCUBE_DB_USER","roundcube"),
+ os.environ.get("ROUNDCUBE_DB_PW"),
+ os.environ.get("ROUNDCUBE_DB_HOST",os.environ.get("DB_HOST","database")),
+ os.environ.get("ROUNDCUBE_DB_NAME","roundcube")
+ )
+elif db_flavor=="postgresql":
+ os.environ["DB_DSNW"]="pgsql://%s:%s@%s/%s" % (
+ os.environ.get("ROUNDCUBE_DB_USER","roundcube"),
+ os.environ.get("ROUNDCUBE_DB_PW"),
+ os.environ.get("ROUNDCUBE_DB_HOST",os.environ.get("DB_HOST","database")),
+ os.environ.get("ROUNDCUBE_DB_NAME","roundcube")
+ )
+else:
+ print("Unknown ROUNDCUBE_DB_FLAVOR: %s",db_flavor)
+ exit(1)
+
+
+
conf.jinja("/php.ini", os.environ, "/usr/local/etc/php/conf.d/roundcube.ini")
# Fix some permissions
-os.system("mkdir -p /data/gpg")
-os.system("chown -R www-data:www-data /data")
+os.system("mkdir -p /data/gpg /var/www/html/logs")
+os.system("touch /var/www/html/logs/errors")
+os.system("chown -R www-data:www-data /data /var/www/html/logs")
+
+try:
+ print("Initializing database")
+ result=subprocess.check_output(["/var/www/html/bin/initdb.sh","--dir","/var/www/html/SQL"],stderr=subprocess.STDOUT)
+ print(result.decode())
+except subprocess.CalledProcessError as e:
+ if "already exists" in e.stdout.decode():
+ print("Already initialzed")
+ else:
+ print(e.stdout.decode())
+ quit(1)
+
+try:
+ print("Upgrading database")
+ subprocess.check_call(["/var/www/html/bin/update.sh","--version=?","-y"],stderr=subprocess.STDOUT)
+except subprocess.CalledProcessError as e:
+ quit(1)
+
+# Tail roundcube logs
+subprocess.Popen(["tail","-f","-n","0","/var/www/html/logs/errors"])
# Run apache
os.execv("/usr/local/bin/apache2-foreground", ["apache2-foreground"])
| {"golden_diff": "diff --git a/webmails/roundcube/start.py b/webmails/roundcube/start.py\n--- a/webmails/roundcube/start.py\n+++ b/webmails/roundcube/start.py\n@@ -4,16 +4,61 @@\n import logging as log\n import sys\n from socrate import conf\n+import subprocess\n \n log.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n \n os.environ[\"MAX_FILESIZE\"] = str(int(int(os.environ.get(\"MESSAGE_SIZE_LIMIT\"))*0.66/1048576))\n \n+db_flavor=os.environ.get(\"ROUNDCUBE_DB_FLAVOR\",os.environ.get(\"DB_FLAVOR\",\"sqlite\"))\n+if db_flavor==\"sqlite\":\n+ os.environ[\"DB_DSNW\"]=\"sqlite:////data/roundcube.db\"\n+elif db_flavor==\"mysql\":\n+ os.environ[\"DB_DSNW\"]=\"mysql://%s:%s@%s/%s\" % (\n+ os.environ.get(\"ROUNDCUBE_DB_USER\",\"roundcube\"),\n+ os.environ.get(\"ROUNDCUBE_DB_PW\"),\n+ os.environ.get(\"ROUNDCUBE_DB_HOST\",os.environ.get(\"DB_HOST\",\"database\")),\n+ os.environ.get(\"ROUNDCUBE_DB_NAME\",\"roundcube\")\n+ )\n+elif db_flavor==\"postgresql\":\n+ os.environ[\"DB_DSNW\"]=\"pgsql://%s:%s@%s/%s\" % (\n+ os.environ.get(\"ROUNDCUBE_DB_USER\",\"roundcube\"),\n+ os.environ.get(\"ROUNDCUBE_DB_PW\"),\n+ os.environ.get(\"ROUNDCUBE_DB_HOST\",os.environ.get(\"DB_HOST\",\"database\")),\n+ os.environ.get(\"ROUNDCUBE_DB_NAME\",\"roundcube\")\n+ )\n+else:\n+ print(\"Unknown ROUNDCUBE_DB_FLAVOR: %s\",db_flavor)\n+ exit(1)\n+\n+\n+\n conf.jinja(\"/php.ini\", os.environ, \"/usr/local/etc/php/conf.d/roundcube.ini\")\n \n # Fix some permissions\n-os.system(\"mkdir -p /data/gpg\")\n-os.system(\"chown -R www-data:www-data /data\")\n+os.system(\"mkdir -p /data/gpg /var/www/html/logs\")\n+os.system(\"touch /var/www/html/logs/errors\")\n+os.system(\"chown -R www-data:www-data /data /var/www/html/logs\")\n+\n+try:\n+ print(\"Initializing database\")\n+ result=subprocess.check_output([\"/var/www/html/bin/initdb.sh\",\"--dir\",\"/var/www/html/SQL\"],stderr=subprocess.STDOUT)\n+ print(result.decode())\n+except subprocess.CalledProcessError as e:\n+ if \"already exists\" in e.stdout.decode():\n+ print(\"Already initialzed\")\n+ else:\n+ print(e.stdout.decode())\n+ quit(1)\n+\n+try:\n+ print(\"Upgrading database\")\n+ subprocess.check_call([\"/var/www/html/bin/update.sh\",\"--version=?\",\"-y\"],stderr=subprocess.STDOUT)\n+except subprocess.CalledProcessError as e:\n+ quit(1)\n+\n+# Tail roundcube logs\n+subprocess.Popen([\"tail\",\"-f\",\"-n\",\"0\",\"/var/www/html/logs/errors\"])\n \n # Run apache\n os.execv(\"/usr/local/bin/apache2-foreground\", [\"apache2-foreground\"])\n", "issue": "Make roundcube log to the console\nRoundcube is currently hard to debug because it logs into a special folder (/var/www/html/logs). 
It should log to stdout/stderr instead.\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport logging as log\nimport sys\nfrom socrate import conf\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\nos.environ[\"MAX_FILESIZE\"] = str(int(int(os.environ.get(\"MESSAGE_SIZE_LIMIT\"))*0.66/1048576))\n\nconf.jinja(\"/php.ini\", os.environ, \"/usr/local/etc/php/conf.d/roundcube.ini\")\n\n# Fix some permissions\nos.system(\"mkdir -p /data/gpg\")\nos.system(\"chown -R www-data:www-data /data\")\n\n# Run apache\nos.execv(\"/usr/local/bin/apache2-foreground\", [\"apache2-foreground\"])\n", "path": "webmails/roundcube/start.py"}], "after_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport logging as log\nimport sys\nfrom socrate import conf\nimport subprocess\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\nos.environ[\"MAX_FILESIZE\"] = str(int(int(os.environ.get(\"MESSAGE_SIZE_LIMIT\"))*0.66/1048576))\n\ndb_flavor=os.environ.get(\"ROUNDCUBE_DB_FLAVOR\",os.environ.get(\"DB_FLAVOR\",\"sqlite\"))\nif db_flavor==\"sqlite\":\n os.environ[\"DB_DSNW\"]=\"sqlite:////data/roundcube.db\"\nelif db_flavor==\"mysql\":\n os.environ[\"DB_DSNW\"]=\"mysql://%s:%s@%s/%s\" % (\n os.environ.get(\"ROUNDCUBE_DB_USER\",\"roundcube\"),\n os.environ.get(\"ROUNDCUBE_DB_PW\"),\n os.environ.get(\"ROUNDCUBE_DB_HOST\",os.environ.get(\"DB_HOST\",\"database\")),\n os.environ.get(\"ROUNDCUBE_DB_NAME\",\"roundcube\")\n )\nelif db_flavor==\"postgresql\":\n os.environ[\"DB_DSNW\"]=\"pgsql://%s:%s@%s/%s\" % (\n os.environ.get(\"ROUNDCUBE_DB_USER\",\"roundcube\"),\n os.environ.get(\"ROUNDCUBE_DB_PW\"),\n os.environ.get(\"ROUNDCUBE_DB_HOST\",os.environ.get(\"DB_HOST\",\"database\")),\n os.environ.get(\"ROUNDCUBE_DB_NAME\",\"roundcube\")\n )\nelse:\n print(\"Unknown ROUNDCUBE_DB_FLAVOR: %s\",db_flavor)\n exit(1)\n\n\n\nconf.jinja(\"/php.ini\", os.environ, \"/usr/local/etc/php/conf.d/roundcube.ini\")\n\n# Fix some permissions\nos.system(\"mkdir -p /data/gpg /var/www/html/logs\")\nos.system(\"touch /var/www/html/logs/errors\")\nos.system(\"chown -R www-data:www-data /data /var/www/html/logs\")\n\ntry:\n print(\"Initializing database\")\n result=subprocess.check_output([\"/var/www/html/bin/initdb.sh\",\"--dir\",\"/var/www/html/SQL\"],stderr=subprocess.STDOUT)\n print(result.decode())\nexcept subprocess.CalledProcessError as e:\n if \"already exists\" in e.stdout.decode():\n print(\"Already initialzed\")\n else:\n print(e.stdout.decode())\n quit(1)\n\ntry:\n print(\"Upgrading database\")\n subprocess.check_call([\"/var/www/html/bin/update.sh\",\"--version=?\",\"-y\"],stderr=subprocess.STDOUT)\nexcept subprocess.CalledProcessError as e:\n quit(1)\n\n# Tail roundcube logs\nsubprocess.Popen([\"tail\",\"-f\",\"-n\",\"0\",\"/var/www/html/logs/errors\"])\n\n# Run apache\nos.execv(\"/usr/local/bin/apache2-foreground\", [\"apache2-foreground\"])\n", "path": "webmails/roundcube/start.py"}]} | 486 | 721 |
gh_patches_debug_36050 | rasdani/github-patches | git_diff | carpentries__amy-228 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bulk-upload: Not checking whether it is a valid csv file
When a file's first line is in valid .csv format but the following lines are not in a valid format,
for example if the file uploaded is:
"""
personal,middle,family,email
This is a test file
"""
There is no error being displayed for the wrong file uploaded; instead a ticket is being displayed. I think there should be a check to validate the csv file before trying to upload the data.
Am I right?

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `workshops/util.py`
Content:
```
1 # coding: utf-8
2 from math import pi, sin, cos, acos
3 import csv
4
5 from django.core.exceptions import ObjectDoesNotExist
6 from django.db import IntegrityError, transaction
7
8 from .models import Event, Role, Person, Task
9
10
11 class InternalError(Exception):
12 pass
13
14
15 def earth_distance(pos1, pos2):
16 '''Taken from http://www.johndcook.com/python_longitude_latitude.html.'''
17
18 # Extract fields.
19 lat1, long1 = pos1
20 lat2, long2 = pos2
21
22 # Convert latitude and longitude to spherical coordinates in radians.
23 degrees_to_radians = pi/180.0
24
25 # phi = 90 - latitude
26 phi1 = (90.0 - lat1) * degrees_to_radians
27 phi2 = (90.0 - lat2) * degrees_to_radians
28
29 # theta = longitude
30 theta1 = long1 * degrees_to_radians
31 theta2 = long2 * degrees_to_radians
32
33 # Compute spherical distance from spherical coordinates.
34 # For two locations in spherical coordinates
35 # (1, theta, phi) and (1, theta, phi)
36 # cosine( arc length ) = sin phi sin phi' cos(theta-theta') + cos phi cos phi'
37 # distance = rho * arc length
38 c = sin(phi1) * sin(phi2) * cos(theta1 - theta2) + cos(phi1) * cos(phi2)
39 arc = acos(c)
40
41 # Multiply by 6373 to get distance in km.
42 return arc * 6373
43
44
45 def upload_person_task_csv(stream):
46 """Read people from CSV and return a JSON-serializable list of dicts.
47
48 The input `stream` should be a file-like object that returns
49 Unicode data.
50
51 "Serializability" is required because we put this data into session. See
52 https://docs.djangoproject.com/en/1.7/topics/http/sessions/ for details.
53
54 Also return a list of fields from Person.PERSON_UPLOAD_FIELDS for which
55 no data was given.
56 """
57
58 result = []
59 reader = csv.DictReader(stream)
60 empty_fields = set()
61
62 for row in reader:
63 entry = {}
64 for col in Person.PERSON_UPLOAD_FIELDS:
65 if col in row:
66 entry[col] = row[col].strip()
67 else:
68 entry[col] = None
69 empty_fields.add(col)
70
71 for col in Person.PERSON_TASK_EXTRA_FIELDS:
72 entry[col] = row.get(col, None)
73 entry['errors'] = None
74
75 result.append(entry)
76
77 return result, list(empty_fields)
78
79
80 def verify_upload_person_task(data):
81 """
82 Verify that uploaded data is correct. Show errors by populating ``errors``
83 dictionary item. This function changes ``data`` in place.
84 """
85
86 errors_occur = False
87 for item in data:
88 errors = []
89
90 event = item.get('event', None)
91 if event:
92 try:
93 Event.objects.get(slug=event)
94 except Event.DoesNotExist:
95 errors.append(u'Event with slug {0} does not exist.'
96 .format(event))
97
98 role = item.get('role', None)
99 if role:
100 try:
101 Role.objects.get(name=role)
102 except Role.DoesNotExist:
103 errors.append(u'Role with name {0} does not exist.'
104 .format(role))
105 except Role.MultipleObjectsReturned:
106 errors.append(u'More than one role named {0} exists.'
107 .format(role))
108
109 # check if the user exists, and if so: check if existing user's
110 # personal and family names are the same as uploaded
111 email = item.get('email', None)
112 personal = item.get('personal', None)
113 middle = item.get('middle', None)
114 family = item.get('family', None)
115 person = None
116 if email:
117 # we don't have to check if the user exists in the database
118 # but we should check if, in case the email matches, family and
119 # personal names match, too
120
121 try:
122 person = Person.objects.get(email__iexact=email)
123
124 assert person.personal == personal
125 assert person.middle == middle
126 assert person.family == family
127
128 except Person.DoesNotExist:
129 # in this case we need to add the user
130 pass
131
132 except AssertionError:
133 errors.append(
134 "Personal, middle or family name of existing user don't"
135 " match: {0} vs {1}, {2} vs {3}, {4} vs {5}"
136 .format(personal, person.personal, middle, person.middle,
137 family, person.family)
138 )
139
140 if person:
141 if not any([event, role]):
142 errors.append("User exists but no event and role to assign"
143 " the user to was provided")
144
145 else:
146 # check for duplicate Task
147 try:
148 Task.objects.get(event__slug=event, role__name=role,
149 person=person)
150 except Task.DoesNotExist:
151 pass
152 else:
153 errors.append("Existing person {2} already has role {0}"
154 " in event {1}".format(role, event, person))
155
156 if (event and not role) or (role and not event):
157 errors.append("Must have both or either of event ({0}) and role"
158 " ({1})".format(event, role))
159
160 if errors:
161 errors_occur = True
162 item['errors'] = errors
163
164 return errors_occur
165
166
167 def create_uploaded_persons_tasks(data):
168 """
169 Create persons and tasks from upload data.
170 """
171
172 # Quick sanity check.
173 if any([row.get('errors') for row in data]):
174 raise InternalError('Uploaded data contains errors, cancelling upload')
175
176 persons_created = []
177 tasks_created = []
178 with transaction.atomic():
179 for row in data:
180 try:
181 fields = {key: row[key] for key in Person.PERSON_UPLOAD_FIELDS}
182 fields['username'] = create_username(row['personal'],
183 row['family'])
184 if fields['email']:
185 # we should use existing Person or create one
186 p, created = Person.objects.get_or_create(
187 email=fields['email'], defaults=fields
188 )
189
190 if created:
191 persons_created.append(p)
192
193 else:
194 # we should create a new Person without any email provided
195 p = Person(**fields)
196 p.save()
197 persons_created.append(p)
198
199 if row['event'] and row['role']:
200 e = Event.objects.get(slug=row['event'])
201 r = Role.objects.get(name=row['role'])
202 t = Task(person=p, event=e, role=r)
203 t.save()
204 tasks_created.append(t)
205
206 except IntegrityError as e:
207 raise IntegrityError('{0} (for {1})'.format(str(e), row))
208
209 except ObjectDoesNotExist as e:
210 raise ObjectDoesNotExist('{0} (for {1})'.format(str(e), row))
211
212 return persons_created, tasks_created
213
214
215 def create_username(personal, family):
216 '''Generate unique username.'''
217 stem = normalize_name(family) + '.' + normalize_name(personal)
218 counter = None
219 while True:
220 try:
221 if counter is None:
222 username = stem
223 counter = 1
224 else:
225 counter += 1
226 username = '{0}.{1}'.format(stem, counter)
227 Person.objects.get(username=username)
228 except ObjectDoesNotExist:
229 break
230
231 if any([ord(c) >= 128 for c in username]):
232 raise InternalError('Normalized username still contains non-normal '
233 'characters "{0}"'.format(username))
234
235 return username
236
237
238 def normalize_name(name):
239 '''Get rid of spaces, funky characters, etc.'''
240 name = name.strip()
241 for (accented, flat) in [(' ', '-')]:
242 name = name.replace(accented, flat)
243
244 # We should use lower-cased username, because it directly corresponds to
245 # some files Software Carpentry stores about some people - and, as we know,
246 # some filesystems are not case-sensitive.
247 return name.lower()
248
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/workshops/util.py b/workshops/util.py
--- a/workshops/util.py
+++ b/workshops/util.py
@@ -62,9 +62,12 @@
for row in reader:
entry = {}
for col in Person.PERSON_UPLOAD_FIELDS:
- if col in row:
+ try:
entry[col] = row[col].strip()
- else:
+ except (KeyError, IndexError, AttributeError):
+ # either `col` is not in `entry`, or not in `row`, or
+ # `.strip()` doesn't work (e.g. `row[col]` gives `None` instead
+ # of string)
entry[col] = None
empty_fields.add(col)
@@ -139,20 +142,9 @@
if person:
if not any([event, role]):
- errors.append("User exists but no event and role to assign"
+ errors.append("User exists but no event and role to assign to"
" the user to was provided")
- else:
- # check for duplicate Task
- try:
- Task.objects.get(event__slug=event, role__name=role,
- person=person)
- except Task.DoesNotExist:
- pass
- else:
- errors.append("Existing person {2} already has role {0}"
- " in event {1}".format(role, event, person))
-
if (event and not role) or (role and not event):
errors.append("Must have both or either of event ({0}) and role"
" ({1})".format(event, role))
@@ -199,9 +191,10 @@
if row['event'] and row['role']:
e = Event.objects.get(slug=row['event'])
r = Role.objects.get(name=row['role'])
- t = Task(person=p, event=e, role=r)
- t.save()
- tasks_created.append(t)
+ t, created = Task.objects.get_or_create(person=p, event=e,
+ role=r)
+ if created:
+ tasks_created.append(t)
except IntegrityError as e:
raise IntegrityError('{0} (for {1})'.format(str(e), row))
| {"golden_diff": "diff --git a/workshops/util.py b/workshops/util.py\n--- a/workshops/util.py\n+++ b/workshops/util.py\n@@ -62,9 +62,12 @@\n for row in reader:\n entry = {}\n for col in Person.PERSON_UPLOAD_FIELDS:\n- if col in row:\n+ try:\n entry[col] = row[col].strip()\n- else:\n+ except (KeyError, IndexError, AttributeError):\n+ # either `col` is not in `entry`, or not in `row`, or\n+ # `.strip()` doesn't work (e.g. `row[col]` gives `None` instead\n+ # of string)\n entry[col] = None\n empty_fields.add(col)\n \n@@ -139,20 +142,9 @@\n \n if person:\n if not any([event, role]):\n- errors.append(\"User exists but no event and role to assign\"\n+ errors.append(\"User exists but no event and role to assign to\"\n \" the user to was provided\")\n \n- else:\n- # check for duplicate Task\n- try:\n- Task.objects.get(event__slug=event, role__name=role,\n- person=person)\n- except Task.DoesNotExist:\n- pass\n- else:\n- errors.append(\"Existing person {2} already has role {0}\"\n- \" in event {1}\".format(role, event, person))\n-\n if (event and not role) or (role and not event):\n errors.append(\"Must have both or either of event ({0}) and role\"\n \" ({1})\".format(event, role))\n@@ -199,9 +191,10 @@\n if row['event'] and row['role']:\n e = Event.objects.get(slug=row['event'])\n r = Role.objects.get(name=row['role'])\n- t = Task(person=p, event=e, role=r)\n- t.save()\n- tasks_created.append(t)\n+ t, created = Task.objects.get_or_create(person=p, event=e,\n+ role=r)\n+ if created:\n+ tasks_created.append(t)\n \n except IntegrityError as e:\n raise IntegrityError('{0} (for {1})'.format(str(e), row))\n", "issue": "Bulk-upload: Not checking whether it is a valid csv file\nWhen a file with first line in valid .csv format and if the following lines are not in a valid format, \n\nfor example if the file uploaded is :\n\"\"\"\npersonal,middle,family,email\nThis is a test file\n\"\"\"\n\nThere is no error being displayed for the wrong file uploaded, instead a ticket is being displayed. 
I think there should be a check to validate csv file before trying to upload the data.\nAm I right?\n\n\n", "before_files": [{"content": "# coding: utf-8\nfrom math import pi, sin, cos, acos\nimport csv\n\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.db import IntegrityError, transaction\n\nfrom .models import Event, Role, Person, Task\n\n\nclass InternalError(Exception):\n pass\n\n\ndef earth_distance(pos1, pos2):\n '''Taken from http://www.johndcook.com/python_longitude_latitude.html.'''\n\n # Extract fields.\n lat1, long1 = pos1\n lat2, long2 = pos2\n\n # Convert latitude and longitude to spherical coordinates in radians.\n degrees_to_radians = pi/180.0\n\n # phi = 90 - latitude\n phi1 = (90.0 - lat1) * degrees_to_radians\n phi2 = (90.0 - lat2) * degrees_to_radians\n\n # theta = longitude\n theta1 = long1 * degrees_to_radians\n theta2 = long2 * degrees_to_radians\n\n # Compute spherical distance from spherical coordinates.\n # For two locations in spherical coordinates\n # (1, theta, phi) and (1, theta, phi)\n # cosine( arc length ) = sin phi sin phi' cos(theta-theta') + cos phi cos phi'\n # distance = rho * arc length\n c = sin(phi1) * sin(phi2) * cos(theta1 - theta2) + cos(phi1) * cos(phi2)\n arc = acos(c)\n\n # Multiply by 6373 to get distance in km.\n return arc * 6373\n\n\ndef upload_person_task_csv(stream):\n \"\"\"Read people from CSV and return a JSON-serializable list of dicts.\n\n The input `stream` should be a file-like object that returns\n Unicode data.\n\n \"Serializability\" is required because we put this data into session. See\n https://docs.djangoproject.com/en/1.7/topics/http/sessions/ for details.\n\n Also return a list of fields from Person.PERSON_UPLOAD_FIELDS for which\n no data was given.\n \"\"\"\n\n result = []\n reader = csv.DictReader(stream)\n empty_fields = set()\n\n for row in reader:\n entry = {}\n for col in Person.PERSON_UPLOAD_FIELDS:\n if col in row:\n entry[col] = row[col].strip()\n else:\n entry[col] = None\n empty_fields.add(col)\n\n for col in Person.PERSON_TASK_EXTRA_FIELDS:\n entry[col] = row.get(col, None)\n entry['errors'] = None\n\n result.append(entry)\n\n return result, list(empty_fields)\n\n\ndef verify_upload_person_task(data):\n \"\"\"\n Verify that uploaded data is correct. Show errors by populating ``errors``\n dictionary item. 
This function changes ``data`` in place.\n \"\"\"\n\n errors_occur = False\n for item in data:\n errors = []\n\n event = item.get('event', None)\n if event:\n try:\n Event.objects.get(slug=event)\n except Event.DoesNotExist:\n errors.append(u'Event with slug {0} does not exist.'\n .format(event))\n\n role = item.get('role', None)\n if role:\n try:\n Role.objects.get(name=role)\n except Role.DoesNotExist:\n errors.append(u'Role with name {0} does not exist.'\n .format(role))\n except Role.MultipleObjectsReturned:\n errors.append(u'More than one role named {0} exists.'\n .format(role))\n\n # check if the user exists, and if so: check if existing user's\n # personal and family names are the same as uploaded\n email = item.get('email', None)\n personal = item.get('personal', None)\n middle = item.get('middle', None)\n family = item.get('family', None)\n person = None\n if email:\n # we don't have to check if the user exists in the database\n # but we should check if, in case the email matches, family and\n # personal names match, too\n\n try:\n person = Person.objects.get(email__iexact=email)\n\n assert person.personal == personal\n assert person.middle == middle\n assert person.family == family\n\n except Person.DoesNotExist:\n # in this case we need to add the user\n pass\n\n except AssertionError:\n errors.append(\n \"Personal, middle or family name of existing user don't\"\n \" match: {0} vs {1}, {2} vs {3}, {4} vs {5}\"\n .format(personal, person.personal, middle, person.middle,\n family, person.family)\n )\n\n if person:\n if not any([event, role]):\n errors.append(\"User exists but no event and role to assign\"\n \" the user to was provided\")\n\n else:\n # check for duplicate Task\n try:\n Task.objects.get(event__slug=event, role__name=role,\n person=person)\n except Task.DoesNotExist:\n pass\n else:\n errors.append(\"Existing person {2} already has role {0}\"\n \" in event {1}\".format(role, event, person))\n\n if (event and not role) or (role and not event):\n errors.append(\"Must have both or either of event ({0}) and role\"\n \" ({1})\".format(event, role))\n\n if errors:\n errors_occur = True\n item['errors'] = errors\n\n return errors_occur\n\n\ndef create_uploaded_persons_tasks(data):\n \"\"\"\n Create persons and tasks from upload data.\n \"\"\"\n\n # Quick sanity check.\n if any([row.get('errors') for row in data]):\n raise InternalError('Uploaded data contains errors, cancelling upload')\n\n persons_created = []\n tasks_created = []\n with transaction.atomic():\n for row in data:\n try:\n fields = {key: row[key] for key in Person.PERSON_UPLOAD_FIELDS}\n fields['username'] = create_username(row['personal'],\n row['family'])\n if fields['email']:\n # we should use existing Person or create one\n p, created = Person.objects.get_or_create(\n email=fields['email'], defaults=fields\n )\n\n if created:\n persons_created.append(p)\n\n else:\n # we should create a new Person without any email provided\n p = Person(**fields)\n p.save()\n persons_created.append(p)\n\n if row['event'] and row['role']:\n e = Event.objects.get(slug=row['event'])\n r = Role.objects.get(name=row['role'])\n t = Task(person=p, event=e, role=r)\n t.save()\n tasks_created.append(t)\n\n except IntegrityError as e:\n raise IntegrityError('{0} (for {1})'.format(str(e), row))\n\n except ObjectDoesNotExist as e:\n raise ObjectDoesNotExist('{0} (for {1})'.format(str(e), row))\n\n return persons_created, tasks_created\n\n\ndef create_username(personal, family):\n '''Generate unique username.'''\n stem = 
normalize_name(family) + '.' + normalize_name(personal)\n counter = None\n while True:\n try:\n if counter is None:\n username = stem\n counter = 1\n else:\n counter += 1\n username = '{0}.{1}'.format(stem, counter)\n Person.objects.get(username=username)\n except ObjectDoesNotExist:\n break\n\n if any([ord(c) >= 128 for c in username]):\n raise InternalError('Normalized username still contains non-normal '\n 'characters \"{0}\"'.format(username))\n\n return username\n\n\ndef normalize_name(name):\n '''Get rid of spaces, funky characters, etc.'''\n name = name.strip()\n for (accented, flat) in [(' ', '-')]:\n name = name.replace(accented, flat)\n\n # We should use lower-cased username, because it directly corresponds to\n # some files Software Carpentry stores about some people - and, as we know,\n # some filesystems are not case-sensitive.\n return name.lower()\n", "path": "workshops/util.py"}], "after_files": [{"content": "# coding: utf-8\nfrom math import pi, sin, cos, acos\nimport csv\n\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.db import IntegrityError, transaction\n\nfrom .models import Event, Role, Person, Task\n\n\nclass InternalError(Exception):\n pass\n\n\ndef earth_distance(pos1, pos2):\n '''Taken from http://www.johndcook.com/python_longitude_latitude.html.'''\n\n # Extract fields.\n lat1, long1 = pos1\n lat2, long2 = pos2\n\n # Convert latitude and longitude to spherical coordinates in radians.\n degrees_to_radians = pi/180.0\n\n # phi = 90 - latitude\n phi1 = (90.0 - lat1) * degrees_to_radians\n phi2 = (90.0 - lat2) * degrees_to_radians\n\n # theta = longitude\n theta1 = long1 * degrees_to_radians\n theta2 = long2 * degrees_to_radians\n\n # Compute spherical distance from spherical coordinates.\n # For two locations in spherical coordinates\n # (1, theta, phi) and (1, theta, phi)\n # cosine( arc length ) = sin phi sin phi' cos(theta-theta') + cos phi cos phi'\n # distance = rho * arc length\n c = sin(phi1) * sin(phi2) * cos(theta1 - theta2) + cos(phi1) * cos(phi2)\n arc = acos(c)\n\n # Multiply by 6373 to get distance in km.\n return arc * 6373\n\n\ndef upload_person_task_csv(stream):\n \"\"\"Read people from CSV and return a JSON-serializable list of dicts.\n\n The input `stream` should be a file-like object that returns\n Unicode data.\n\n \"Serializability\" is required because we put this data into session. See\n https://docs.djangoproject.com/en/1.7/topics/http/sessions/ for details.\n\n Also return a list of fields from Person.PERSON_UPLOAD_FIELDS for which\n no data was given.\n \"\"\"\n\n result = []\n reader = csv.DictReader(stream)\n empty_fields = set()\n\n for row in reader:\n entry = {}\n for col in Person.PERSON_UPLOAD_FIELDS:\n try:\n entry[col] = row[col].strip()\n except (KeyError, IndexError, AttributeError):\n # either `col` is not in `entry`, or not in `row`, or\n # `.strip()` doesn't work (e.g. `row[col]` gives `None` instead\n # of string)\n entry[col] = None\n empty_fields.add(col)\n\n for col in Person.PERSON_TASK_EXTRA_FIELDS:\n entry[col] = row.get(col, None)\n entry['errors'] = None\n\n result.append(entry)\n\n return result, list(empty_fields)\n\n\ndef verify_upload_person_task(data):\n \"\"\"\n Verify that uploaded data is correct. Show errors by populating ``errors``\n dictionary item. 
This function changes ``data`` in place.\n \"\"\"\n\n errors_occur = False\n for item in data:\n errors = []\n\n event = item.get('event', None)\n if event:\n try:\n Event.objects.get(slug=event)\n except Event.DoesNotExist:\n errors.append(u'Event with slug {0} does not exist.'\n .format(event))\n\n role = item.get('role', None)\n if role:\n try:\n Role.objects.get(name=role)\n except Role.DoesNotExist:\n errors.append(u'Role with name {0} does not exist.'\n .format(role))\n except Role.MultipleObjectsReturned:\n errors.append(u'More than one role named {0} exists.'\n .format(role))\n\n # check if the user exists, and if so: check if existing user's\n # personal and family names are the same as uploaded\n email = item.get('email', None)\n personal = item.get('personal', None)\n middle = item.get('middle', None)\n family = item.get('family', None)\n person = None\n if email:\n # we don't have to check if the user exists in the database\n # but we should check if, in case the email matches, family and\n # personal names match, too\n\n try:\n person = Person.objects.get(email__iexact=email)\n\n assert person.personal == personal\n assert person.middle == middle\n assert person.family == family\n\n except Person.DoesNotExist:\n # in this case we need to add the user\n pass\n\n except AssertionError:\n errors.append(\n \"Personal, middle or family name of existing user don't\"\n \" match: {0} vs {1}, {2} vs {3}, {4} vs {5}\"\n .format(personal, person.personal, middle, person.middle,\n family, person.family)\n )\n\n if person:\n if not any([event, role]):\n errors.append(\"User exists but no event and role to assign to\"\n \" the user to was provided\")\n\n if (event and not role) or (role and not event):\n errors.append(\"Must have both or either of event ({0}) and role\"\n \" ({1})\".format(event, role))\n\n if errors:\n errors_occur = True\n item['errors'] = errors\n\n return errors_occur\n\n\ndef create_uploaded_persons_tasks(data):\n \"\"\"\n Create persons and tasks from upload data.\n \"\"\"\n\n # Quick sanity check.\n if any([row.get('errors') for row in data]):\n raise InternalError('Uploaded data contains errors, cancelling upload')\n\n persons_created = []\n tasks_created = []\n with transaction.atomic():\n for row in data:\n try:\n fields = {key: row[key] for key in Person.PERSON_UPLOAD_FIELDS}\n fields['username'] = create_username(row['personal'],\n row['family'])\n if fields['email']:\n # we should use existing Person or create one\n p, created = Person.objects.get_or_create(\n email=fields['email'], defaults=fields\n )\n\n if created:\n persons_created.append(p)\n\n else:\n # we should create a new Person without any email provided\n p = Person(**fields)\n p.save()\n persons_created.append(p)\n\n if row['event'] and row['role']:\n e = Event.objects.get(slug=row['event'])\n r = Role.objects.get(name=row['role'])\n t, created = Task.objects.get_or_create(person=p, event=e,\n role=r)\n if created:\n tasks_created.append(t)\n\n except IntegrityError as e:\n raise IntegrityError('{0} (for {1})'.format(str(e), row))\n\n except ObjectDoesNotExist as e:\n raise ObjectDoesNotExist('{0} (for {1})'.format(str(e), row))\n\n return persons_created, tasks_created\n\n\ndef create_username(personal, family):\n '''Generate unique username.'''\n stem = normalize_name(family) + '.' 
+ normalize_name(personal)\n counter = None\n while True:\n try:\n if counter is None:\n username = stem\n counter = 1\n else:\n counter += 1\n username = '{0}.{1}'.format(stem, counter)\n Person.objects.get(username=username)\n except ObjectDoesNotExist:\n break\n\n if any([ord(c) >= 128 for c in username]):\n raise InternalError('Normalized username still contains non-normal '\n 'characters \"{0}\"'.format(username))\n\n return username\n\n\ndef normalize_name(name):\n '''Get rid of spaces, funky characters, etc.'''\n name = name.strip()\n for (accented, flat) in [(' ', '-')]:\n name = name.replace(accented, flat)\n\n # We should use lower-cased username, because it directly corresponds to\n # some files Software Carpentry stores about some people - and, as we know,\n # some filesystems are not case-sensitive.\n return name.lower()\n", "path": "workshops/util.py"}]} | 2,852 | 493 |
gh_patches_debug_56039 | rasdani/github-patches | git_diff | python__mypy-3593 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typing for @contextmanager doesn't play well with generic functions
```
from contextlib import contextmanager
from typing import TypeVar, Iterator
_T = TypeVar('_T')
@contextmanager
def yield_id(item):
# type: (_T) -> Iterator[_T]
yield item
with yield_id(1):
pass
```
... results in...
`example.py:11: error: Argument 1 to "yield_id" has incompatible type "int"; expected "_T"`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mypy/plugin.py`
Content:
```
1 """Plugin system for extending mypy."""
2
3 from abc import abstractmethod
4 from typing import Callable, List, Tuple, Optional, NamedTuple, TypeVar
5
6 from mypy.nodes import Expression, StrExpr, IntExpr, UnaryExpr, Context
7 from mypy.types import (
8 Type, Instance, CallableType, TypedDictType, UnionType, NoneTyp, FunctionLike, TypeVarType,
9 AnyType, TypeList, UnboundType
10 )
11 from mypy.messages import MessageBuilder
12 from mypy.options import Options
13
14
15 class AnalyzerPluginInterface:
16 """Interface for accessing semantic analyzer functionality in plugins."""
17
18 @abstractmethod
19 def fail(self, msg: str, ctx: Context) -> None:
20 raise NotImplementedError
21
22 @abstractmethod
23 def named_type(self, name: str, args: List[Type]) -> Instance:
24 raise NotImplementedError
25
26 @abstractmethod
27 def analyze_type(self, typ: Type) -> Type:
28 raise NotImplementedError
29
30 @abstractmethod
31 def analyze_callable_args(self, arglist: TypeList) -> Optional[Tuple[List[Type],
32 List[int],
33 List[Optional[str]]]]:
34 raise NotImplementedError
35
36
37 # A context for a hook that semantically analyzes an unbound type.
38 AnalyzeTypeContext = NamedTuple(
39 'AnalyzeTypeContext', [
40 ('type', UnboundType), # Type to analyze
41 ('context', Context),
42 ('api', AnalyzerPluginInterface)])
43
44
45 class CheckerPluginInterface:
46 """Interface for accessing type checker functionality in plugins."""
47
48 msg = None # type: MessageBuilder
49
50 @abstractmethod
51 def named_generic_type(self, name: str, args: List[Type]) -> Instance:
52 raise NotImplementedError
53
54
55 # A context for a function hook that infers the return type of a function with
56 # a special signature.
57 #
58 # A no-op callback would just return the inferred return type, but a useful
59 # callback at least sometimes can infer a more precise type.
60 FunctionContext = NamedTuple(
61 'FunctionContext', [
62 ('arg_types', List[List[Type]]), # List of actual caller types for each formal argument
63 ('default_return_type', Type), # Return type inferred from signature
64 ('args', List[List[Expression]]), # Actual expressions for each formal argument
65 ('context', Context),
66 ('api', CheckerPluginInterface)])
67
68 # A context for a method signature hook that infers a better signature for a
69 # method. Note that argument types aren't available yet. If you need them,
70 # you have to use a method hook instead.
71 MethodSigContext = NamedTuple(
72 'MethodSigContext', [
73 ('type', Type), # Base object type for method call
74 ('args', List[List[Expression]]), # Actual expressions for each formal argument
75 ('default_signature', CallableType), # Original signature of the method
76 ('context', Context),
77 ('api', CheckerPluginInterface)])
78
79 # A context for a method hook that infers the return type of a method with a
80 # special signature.
81 #
82 # This is very similar to FunctionContext (only differences are documented).
83 MethodContext = NamedTuple(
84 'MethodContext', [
85 ('type', Type), # Base object type for method call
86 ('arg_types', List[List[Type]]),
87 ('default_return_type', Type),
88 ('args', List[List[Expression]]),
89 ('context', Context),
90 ('api', CheckerPluginInterface)])
91
92 # A context for an attribute type hook that infers the type of an attribute.
93 AttributeContext = NamedTuple(
94 'AttributeContext', [
95 ('type', Type), # Type of object with attribute
96 ('default_attr_type', Type), # Original attribute type
97 ('context', Context),
98 ('api', CheckerPluginInterface)])
99
100
101 class Plugin:
102 """Base class of all type checker plugins.
103
104 This defines a no-op plugin. Subclasses can override some methods to
105 provide some actual functionality.
106
107 All get_ methods are treated as pure functions (you should assume that
108 results might be cached).
109
110 Look at the comments of various *Context objects for descriptions of
111 various hooks.
112 """
113
114 def __init__(self, options: Options) -> None:
115 self.options = options
116 self.python_version = options.python_version
117
118 def get_type_analyze_hook(self, fullname: str
119 ) -> Optional[Callable[[AnalyzeTypeContext], Type]]:
120 return None
121
122 def get_function_hook(self, fullname: str
123 ) -> Optional[Callable[[FunctionContext], Type]]:
124 return None
125
126 def get_method_signature_hook(self, fullname: str
127 ) -> Optional[Callable[[MethodSigContext], CallableType]]:
128 return None
129
130 def get_method_hook(self, fullname: str
131 ) -> Optional[Callable[[MethodContext], Type]]:
132 return None
133
134 def get_attribute_hook(self, fullname: str
135 ) -> Optional[Callable[[AttributeContext], Type]]:
136 return None
137
138 # TODO: metaclass / class decorator hook
139
140
141 T = TypeVar('T')
142
143
144 class ChainedPlugin(Plugin):
145 """A plugin that represents a sequence of chained plugins.
146
147 Each lookup method returns the hook for the first plugin that
148 reports a match.
149
150 This class should not be subclassed -- use Plugin as the base class
151 for all plugins.
152 """
153
154 # TODO: Support caching of lookup results (through a LRU cache, for example).
155
156 def __init__(self, options: Options, plugins: List[Plugin]) -> None:
157 """Initialize chained plugin.
158
159 Assume that the child plugins aren't mutated (results may be cached).
160 """
161 super().__init__(options)
162 self._plugins = plugins
163
164 def get_type_analyze_hook(self, fullname: str
165 ) -> Optional[Callable[[AnalyzeTypeContext], Type]]:
166 return self._find_hook(lambda plugin: plugin.get_type_analyze_hook(fullname))
167
168 def get_function_hook(self, fullname: str
169 ) -> Optional[Callable[[FunctionContext], Type]]:
170 return self._find_hook(lambda plugin: plugin.get_function_hook(fullname))
171
172 def get_method_signature_hook(self, fullname: str
173 ) -> Optional[Callable[[MethodSigContext], CallableType]]:
174 return self._find_hook(lambda plugin: plugin.get_method_signature_hook(fullname))
175
176 def get_method_hook(self, fullname: str
177 ) -> Optional[Callable[[MethodContext], Type]]:
178 return self._find_hook(lambda plugin: plugin.get_method_hook(fullname))
179
180 def get_attribute_hook(self, fullname: str
181 ) -> Optional[Callable[[AttributeContext], Type]]:
182 return self._find_hook(lambda plugin: plugin.get_attribute_hook(fullname))
183
184 def _find_hook(self, lookup: Callable[[Plugin], T]) -> Optional[T]:
185 for plugin in self._plugins:
186 hook = lookup(plugin)
187 if hook:
188 return hook
189 return None
190
191
192 class DefaultPlugin(Plugin):
193 """Type checker plugin that is enabled by default."""
194
195 def get_function_hook(self, fullname: str
196 ) -> Optional[Callable[[FunctionContext], Type]]:
197 if fullname == 'contextlib.contextmanager':
198 return contextmanager_callback
199 elif fullname == 'builtins.open' and self.python_version[0] == 3:
200 return open_callback
201 return None
202
203 def get_method_signature_hook(self, fullname: str
204 ) -> Optional[Callable[[MethodSigContext], CallableType]]:
205 if fullname == 'typing.Mapping.get':
206 return typed_dict_get_signature_callback
207 return None
208
209 def get_method_hook(self, fullname: str
210 ) -> Optional[Callable[[MethodContext], Type]]:
211 if fullname == 'typing.Mapping.get':
212 return typed_dict_get_callback
213 elif fullname == 'builtins.int.__pow__':
214 return int_pow_callback
215 return None
216
217
218 def open_callback(ctx: FunctionContext) -> Type:
219 """Infer a better return type for 'open'.
220
221 Infer TextIO or BinaryIO as the return value if the mode argument is not
222 given or is a literal.
223 """
224 mode = None
225 if not ctx.arg_types or len(ctx.arg_types[1]) != 1:
226 mode = 'r'
227 elif isinstance(ctx.args[1][0], StrExpr):
228 mode = ctx.args[1][0].value
229 if mode is not None:
230 assert isinstance(ctx.default_return_type, Instance)
231 if 'b' in mode:
232 return ctx.api.named_generic_type('typing.BinaryIO', [])
233 else:
234 return ctx.api.named_generic_type('typing.TextIO', [])
235 return ctx.default_return_type
236
237
238 def contextmanager_callback(ctx: FunctionContext) -> Type:
239 """Infer a better return type for 'contextlib.contextmanager'."""
240 # Be defensive, just in case.
241 if ctx.arg_types and len(ctx.arg_types[0]) == 1:
242 arg_type = ctx.arg_types[0][0]
243 if (isinstance(arg_type, CallableType)
244 and isinstance(ctx.default_return_type, CallableType)):
245 # The stub signature doesn't preserve information about arguments so
246 # add them back here.
247 return ctx.default_return_type.copy_modified(
248 arg_types=arg_type.arg_types,
249 arg_kinds=arg_type.arg_kinds,
250 arg_names=arg_type.arg_names)
251 return ctx.default_return_type
252
253
254 def typed_dict_get_signature_callback(ctx: MethodSigContext) -> CallableType:
255 """Try to infer a better signature type for TypedDict.get.
256
257 This is used to get better type context for the second argument that
258 depends on a TypedDict value type.
259 """
260 signature = ctx.default_signature
261 if (isinstance(ctx.type, TypedDictType)
262 and len(ctx.args) == 2
263 and len(ctx.args[0]) == 1
264 and isinstance(ctx.args[0][0], StrExpr)
265 and len(signature.arg_types) == 2
266 and len(signature.variables) == 1):
267 key = ctx.args[0][0].value
268 value_type = ctx.type.items.get(key)
269 if value_type:
270 # Tweak the signature to include the value type as context. It's
271 # only needed for type inference since there's a union with a type
272 # variable that accepts everything.
273 tv = TypeVarType(signature.variables[0])
274 return signature.copy_modified(
275 arg_types=[signature.arg_types[0],
276 UnionType.make_simplified_union([value_type, tv])])
277 return signature
278
279
280 def typed_dict_get_callback(ctx: MethodContext) -> Type:
281 """Infer a precise return type for TypedDict.get with literal first argument."""
282 if (isinstance(ctx.type, TypedDictType)
283 and len(ctx.arg_types) >= 1
284 and len(ctx.arg_types[0]) == 1):
285 if isinstance(ctx.args[0][0], StrExpr):
286 key = ctx.args[0][0].value
287 value_type = ctx.type.items.get(key)
288 if value_type:
289 if len(ctx.arg_types) == 1:
290 return UnionType.make_simplified_union([value_type, NoneTyp()])
291 elif len(ctx.arg_types) == 2 and len(ctx.arg_types[1]) == 1:
292 return UnionType.make_simplified_union([value_type, ctx.arg_types[1][0]])
293 else:
294 ctx.api.msg.typeddict_item_name_not_found(ctx.type, key, ctx.context)
295 return AnyType()
296 return ctx.default_return_type
297
298
299 def int_pow_callback(ctx: MethodContext) -> Type:
300 """Infer a more precise return type for int.__pow__."""
301 if (len(ctx.arg_types) == 1
302 and len(ctx.arg_types[0]) == 1):
303 arg = ctx.args[0][0]
304 if isinstance(arg, IntExpr):
305 exponent = arg.value
306 elif isinstance(arg, UnaryExpr) and arg.op == '-' and isinstance(arg.expr, IntExpr):
307 exponent = -arg.expr.value
308 else:
309 # Right operand not an int literal or a negated literal -- give up.
310 return ctx.default_return_type
311 if exponent >= 0:
312 return ctx.api.named_generic_type('builtins.int', [])
313 else:
314 return ctx.api.named_generic_type('builtins.float', [])
315 return ctx.default_return_type
316
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mypy/plugin.py b/mypy/plugin.py
--- a/mypy/plugin.py
+++ b/mypy/plugin.py
@@ -247,7 +247,9 @@
return ctx.default_return_type.copy_modified(
arg_types=arg_type.arg_types,
arg_kinds=arg_type.arg_kinds,
- arg_names=arg_type.arg_names)
+ arg_names=arg_type.arg_names,
+ variables=arg_type.variables,
+ is_ellipsis_args=arg_type.is_ellipsis_args)
return ctx.default_return_type
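For context, here is the repro from the issue restated as a runnable sketch (nothing new is added to the record itself). The patch above carries `variables` and `is_ellipsis_args` over onto the returned callable, so the decorated function stays generic; the intent is that mypy can then bind `_T` to `int` at the call site instead of reporting the error shown in the comment:
```python
from contextlib import contextmanager
from typing import Iterator, TypeVar

_T = TypeVar('_T')

@contextmanager
def yield_id(item):
    # type: (_T) -> Iterator[_T]
    yield item

# mypy previously reported here:
#   error: Argument 1 to "yield_id" has incompatible type "int"; expected "_T"
with yield_id(1) as value:
    print(value)  # runtime behaviour was always fine; this prints 1
```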
| {"golden_diff": "diff --git a/mypy/plugin.py b/mypy/plugin.py\n--- a/mypy/plugin.py\n+++ b/mypy/plugin.py\n@@ -247,7 +247,9 @@\n return ctx.default_return_type.copy_modified(\n arg_types=arg_type.arg_types,\n arg_kinds=arg_type.arg_kinds,\n- arg_names=arg_type.arg_names)\n+ arg_names=arg_type.arg_names,\n+ variables=arg_type.variables,\n+ is_ellipsis_args=arg_type.is_ellipsis_args)\n return ctx.default_return_type\n", "issue": "Typing for @contextmanager doesn't play well with generic functions\n```\r\nfrom contextlib import contextmanager\r\nfrom typing import TypeVar, Iterator\r\n\r\n_T = TypeVar('_T')\r\n\r\n@contextmanager\r\ndef yield_id(item):\r\n # type: (_T) -> Iterator[_T]\r\n yield item\r\n\r\nwith yield_id(1):\r\n pass\r\n```\r\n\r\n... results in...\r\n\r\n`example.py:11: error: Argument 1 to \"yield_id\" has incompatible type \"int\"; expected \"_T\"`\r\n\n", "before_files": [{"content": "\"\"\"Plugin system for extending mypy.\"\"\"\n\nfrom abc import abstractmethod\nfrom typing import Callable, List, Tuple, Optional, NamedTuple, TypeVar\n\nfrom mypy.nodes import Expression, StrExpr, IntExpr, UnaryExpr, Context\nfrom mypy.types import (\n Type, Instance, CallableType, TypedDictType, UnionType, NoneTyp, FunctionLike, TypeVarType,\n AnyType, TypeList, UnboundType\n)\nfrom mypy.messages import MessageBuilder\nfrom mypy.options import Options\n\n\nclass AnalyzerPluginInterface:\n \"\"\"Interface for accessing semantic analyzer functionality in plugins.\"\"\"\n\n @abstractmethod\n def fail(self, msg: str, ctx: Context) -> None:\n raise NotImplementedError\n\n @abstractmethod\n def named_type(self, name: str, args: List[Type]) -> Instance:\n raise NotImplementedError\n\n @abstractmethod\n def analyze_type(self, typ: Type) -> Type:\n raise NotImplementedError\n\n @abstractmethod\n def analyze_callable_args(self, arglist: TypeList) -> Optional[Tuple[List[Type],\n List[int],\n List[Optional[str]]]]:\n raise NotImplementedError\n\n\n# A context for a hook that semantically analyzes an unbound type.\nAnalyzeTypeContext = NamedTuple(\n 'AnalyzeTypeContext', [\n ('type', UnboundType), # Type to analyze\n ('context', Context),\n ('api', AnalyzerPluginInterface)])\n\n\nclass CheckerPluginInterface:\n \"\"\"Interface for accessing type checker functionality in plugins.\"\"\"\n\n msg = None # type: MessageBuilder\n\n @abstractmethod\n def named_generic_type(self, name: str, args: List[Type]) -> Instance:\n raise NotImplementedError\n\n\n# A context for a function hook that infers the return type of a function with\n# a special signature.\n#\n# A no-op callback would just return the inferred return type, but a useful\n# callback at least sometimes can infer a more precise type.\nFunctionContext = NamedTuple(\n 'FunctionContext', [\n ('arg_types', List[List[Type]]), # List of actual caller types for each formal argument\n ('default_return_type', Type), # Return type inferred from signature\n ('args', List[List[Expression]]), # Actual expressions for each formal argument\n ('context', Context),\n ('api', CheckerPluginInterface)])\n\n# A context for a method signature hook that infers a better signature for a\n# method. Note that argument types aren't available yet. 
If you need them,\n# you have to use a method hook instead.\nMethodSigContext = NamedTuple(\n 'MethodSigContext', [\n ('type', Type), # Base object type for method call\n ('args', List[List[Expression]]), # Actual expressions for each formal argument\n ('default_signature', CallableType), # Original signature of the method\n ('context', Context),\n ('api', CheckerPluginInterface)])\n\n# A context for a method hook that infers the return type of a method with a\n# special signature.\n#\n# This is very similar to FunctionContext (only differences are documented).\nMethodContext = NamedTuple(\n 'MethodContext', [\n ('type', Type), # Base object type for method call\n ('arg_types', List[List[Type]]),\n ('default_return_type', Type),\n ('args', List[List[Expression]]),\n ('context', Context),\n ('api', CheckerPluginInterface)])\n\n# A context for an attribute type hook that infers the type of an attribute.\nAttributeContext = NamedTuple(\n 'AttributeContext', [\n ('type', Type), # Type of object with attribute\n ('default_attr_type', Type), # Original attribute type\n ('context', Context),\n ('api', CheckerPluginInterface)])\n\n\nclass Plugin:\n \"\"\"Base class of all type checker plugins.\n\n This defines a no-op plugin. Subclasses can override some methods to\n provide some actual functionality.\n\n All get_ methods are treated as pure functions (you should assume that\n results might be cached).\n\n Look at the comments of various *Context objects for descriptions of\n various hooks.\n \"\"\"\n\n def __init__(self, options: Options) -> None:\n self.options = options\n self.python_version = options.python_version\n\n def get_type_analyze_hook(self, fullname: str\n ) -> Optional[Callable[[AnalyzeTypeContext], Type]]:\n return None\n\n def get_function_hook(self, fullname: str\n ) -> Optional[Callable[[FunctionContext], Type]]:\n return None\n\n def get_method_signature_hook(self, fullname: str\n ) -> Optional[Callable[[MethodSigContext], CallableType]]:\n return None\n\n def get_method_hook(self, fullname: str\n ) -> Optional[Callable[[MethodContext], Type]]:\n return None\n\n def get_attribute_hook(self, fullname: str\n ) -> Optional[Callable[[AttributeContext], Type]]:\n return None\n\n # TODO: metaclass / class decorator hook\n\n\nT = TypeVar('T')\n\n\nclass ChainedPlugin(Plugin):\n \"\"\"A plugin that represents a sequence of chained plugins.\n\n Each lookup method returns the hook for the first plugin that\n reports a match.\n\n This class should not be subclassed -- use Plugin as the base class\n for all plugins.\n \"\"\"\n\n # TODO: Support caching of lookup results (through a LRU cache, for example).\n\n def __init__(self, options: Options, plugins: List[Plugin]) -> None:\n \"\"\"Initialize chained plugin.\n\n Assume that the child plugins aren't mutated (results may be cached).\n \"\"\"\n super().__init__(options)\n self._plugins = plugins\n\n def get_type_analyze_hook(self, fullname: str\n ) -> Optional[Callable[[AnalyzeTypeContext], Type]]:\n return self._find_hook(lambda plugin: plugin.get_type_analyze_hook(fullname))\n\n def get_function_hook(self, fullname: str\n ) -> Optional[Callable[[FunctionContext], Type]]:\n return self._find_hook(lambda plugin: plugin.get_function_hook(fullname))\n\n def get_method_signature_hook(self, fullname: str\n ) -> Optional[Callable[[MethodSigContext], CallableType]]:\n return self._find_hook(lambda plugin: plugin.get_method_signature_hook(fullname))\n\n def get_method_hook(self, fullname: str\n ) -> Optional[Callable[[MethodContext], Type]]:\n 
return self._find_hook(lambda plugin: plugin.get_method_hook(fullname))\n\n def get_attribute_hook(self, fullname: str\n ) -> Optional[Callable[[AttributeContext], Type]]:\n return self._find_hook(lambda plugin: plugin.get_attribute_hook(fullname))\n\n def _find_hook(self, lookup: Callable[[Plugin], T]) -> Optional[T]:\n for plugin in self._plugins:\n hook = lookup(plugin)\n if hook:\n return hook\n return None\n\n\nclass DefaultPlugin(Plugin):\n \"\"\"Type checker plugin that is enabled by default.\"\"\"\n\n def get_function_hook(self, fullname: str\n ) -> Optional[Callable[[FunctionContext], Type]]:\n if fullname == 'contextlib.contextmanager':\n return contextmanager_callback\n elif fullname == 'builtins.open' and self.python_version[0] == 3:\n return open_callback\n return None\n\n def get_method_signature_hook(self, fullname: str\n ) -> Optional[Callable[[MethodSigContext], CallableType]]:\n if fullname == 'typing.Mapping.get':\n return typed_dict_get_signature_callback\n return None\n\n def get_method_hook(self, fullname: str\n ) -> Optional[Callable[[MethodContext], Type]]:\n if fullname == 'typing.Mapping.get':\n return typed_dict_get_callback\n elif fullname == 'builtins.int.__pow__':\n return int_pow_callback\n return None\n\n\ndef open_callback(ctx: FunctionContext) -> Type:\n \"\"\"Infer a better return type for 'open'.\n\n Infer TextIO or BinaryIO as the return value if the mode argument is not\n given or is a literal.\n \"\"\"\n mode = None\n if not ctx.arg_types or len(ctx.arg_types[1]) != 1:\n mode = 'r'\n elif isinstance(ctx.args[1][0], StrExpr):\n mode = ctx.args[1][0].value\n if mode is not None:\n assert isinstance(ctx.default_return_type, Instance)\n if 'b' in mode:\n return ctx.api.named_generic_type('typing.BinaryIO', [])\n else:\n return ctx.api.named_generic_type('typing.TextIO', [])\n return ctx.default_return_type\n\n\ndef contextmanager_callback(ctx: FunctionContext) -> Type:\n \"\"\"Infer a better return type for 'contextlib.contextmanager'.\"\"\"\n # Be defensive, just in case.\n if ctx.arg_types and len(ctx.arg_types[0]) == 1:\n arg_type = ctx.arg_types[0][0]\n if (isinstance(arg_type, CallableType)\n and isinstance(ctx.default_return_type, CallableType)):\n # The stub signature doesn't preserve information about arguments so\n # add them back here.\n return ctx.default_return_type.copy_modified(\n arg_types=arg_type.arg_types,\n arg_kinds=arg_type.arg_kinds,\n arg_names=arg_type.arg_names)\n return ctx.default_return_type\n\n\ndef typed_dict_get_signature_callback(ctx: MethodSigContext) -> CallableType:\n \"\"\"Try to infer a better signature type for TypedDict.get.\n\n This is used to get better type context for the second argument that\n depends on a TypedDict value type.\n \"\"\"\n signature = ctx.default_signature\n if (isinstance(ctx.type, TypedDictType)\n and len(ctx.args) == 2\n and len(ctx.args[0]) == 1\n and isinstance(ctx.args[0][0], StrExpr)\n and len(signature.arg_types) == 2\n and len(signature.variables) == 1):\n key = ctx.args[0][0].value\n value_type = ctx.type.items.get(key)\n if value_type:\n # Tweak the signature to include the value type as context. 
It's\n # only needed for type inference since there's a union with a type\n # variable that accepts everything.\n tv = TypeVarType(signature.variables[0])\n return signature.copy_modified(\n arg_types=[signature.arg_types[0],\n UnionType.make_simplified_union([value_type, tv])])\n return signature\n\n\ndef typed_dict_get_callback(ctx: MethodContext) -> Type:\n \"\"\"Infer a precise return type for TypedDict.get with literal first argument.\"\"\"\n if (isinstance(ctx.type, TypedDictType)\n and len(ctx.arg_types) >= 1\n and len(ctx.arg_types[0]) == 1):\n if isinstance(ctx.args[0][0], StrExpr):\n key = ctx.args[0][0].value\n value_type = ctx.type.items.get(key)\n if value_type:\n if len(ctx.arg_types) == 1:\n return UnionType.make_simplified_union([value_type, NoneTyp()])\n elif len(ctx.arg_types) == 2 and len(ctx.arg_types[1]) == 1:\n return UnionType.make_simplified_union([value_type, ctx.arg_types[1][0]])\n else:\n ctx.api.msg.typeddict_item_name_not_found(ctx.type, key, ctx.context)\n return AnyType()\n return ctx.default_return_type\n\n\ndef int_pow_callback(ctx: MethodContext) -> Type:\n \"\"\"Infer a more precise return type for int.__pow__.\"\"\"\n if (len(ctx.arg_types) == 1\n and len(ctx.arg_types[0]) == 1):\n arg = ctx.args[0][0]\n if isinstance(arg, IntExpr):\n exponent = arg.value\n elif isinstance(arg, UnaryExpr) and arg.op == '-' and isinstance(arg.expr, IntExpr):\n exponent = -arg.expr.value\n else:\n # Right operand not an int literal or a negated literal -- give up.\n return ctx.default_return_type\n if exponent >= 0:\n return ctx.api.named_generic_type('builtins.int', [])\n else:\n return ctx.api.named_generic_type('builtins.float', [])\n return ctx.default_return_type\n", "path": "mypy/plugin.py"}], "after_files": [{"content": "\"\"\"Plugin system for extending mypy.\"\"\"\n\nfrom abc import abstractmethod\nfrom typing import Callable, List, Tuple, Optional, NamedTuple, TypeVar\n\nfrom mypy.nodes import Expression, StrExpr, IntExpr, UnaryExpr, Context\nfrom mypy.types import (\n Type, Instance, CallableType, TypedDictType, UnionType, NoneTyp, FunctionLike, TypeVarType,\n AnyType, TypeList, UnboundType\n)\nfrom mypy.messages import MessageBuilder\nfrom mypy.options import Options\n\n\nclass AnalyzerPluginInterface:\n \"\"\"Interface for accessing semantic analyzer functionality in plugins.\"\"\"\n\n @abstractmethod\n def fail(self, msg: str, ctx: Context) -> None:\n raise NotImplementedError\n\n @abstractmethod\n def named_type(self, name: str, args: List[Type]) -> Instance:\n raise NotImplementedError\n\n @abstractmethod\n def analyze_type(self, typ: Type) -> Type:\n raise NotImplementedError\n\n @abstractmethod\n def analyze_callable_args(self, arglist: TypeList) -> Optional[Tuple[List[Type],\n List[int],\n List[Optional[str]]]]:\n raise NotImplementedError\n\n\n# A context for a hook that semantically analyzes an unbound type.\nAnalyzeTypeContext = NamedTuple(\n 'AnalyzeTypeContext', [\n ('type', UnboundType), # Type to analyze\n ('context', Context),\n ('api', AnalyzerPluginInterface)])\n\n\nclass CheckerPluginInterface:\n \"\"\"Interface for accessing type checker functionality in plugins.\"\"\"\n\n msg = None # type: MessageBuilder\n\n @abstractmethod\n def named_generic_type(self, name: str, args: List[Type]) -> Instance:\n raise NotImplementedError\n\n\n# A context for a function hook that infers the return type of a function with\n# a special signature.\n#\n# A no-op callback would just return the inferred return type, but a useful\n# callback at least sometimes 
can infer a more precise type.\nFunctionContext = NamedTuple(\n 'FunctionContext', [\n ('arg_types', List[List[Type]]), # List of actual caller types for each formal argument\n ('default_return_type', Type), # Return type inferred from signature\n ('args', List[List[Expression]]), # Actual expressions for each formal argument\n ('context', Context),\n ('api', CheckerPluginInterface)])\n\n# A context for a method signature hook that infers a better signature for a\n# method. Note that argument types aren't available yet. If you need them,\n# you have to use a method hook instead.\nMethodSigContext = NamedTuple(\n 'MethodSigContext', [\n ('type', Type), # Base object type for method call\n ('args', List[List[Expression]]), # Actual expressions for each formal argument\n ('default_signature', CallableType), # Original signature of the method\n ('context', Context),\n ('api', CheckerPluginInterface)])\n\n# A context for a method hook that infers the return type of a method with a\n# special signature.\n#\n# This is very similar to FunctionContext (only differences are documented).\nMethodContext = NamedTuple(\n 'MethodContext', [\n ('type', Type), # Base object type for method call\n ('arg_types', List[List[Type]]),\n ('default_return_type', Type),\n ('args', List[List[Expression]]),\n ('context', Context),\n ('api', CheckerPluginInterface)])\n\n# A context for an attribute type hook that infers the type of an attribute.\nAttributeContext = NamedTuple(\n 'AttributeContext', [\n ('type', Type), # Type of object with attribute\n ('default_attr_type', Type), # Original attribute type\n ('context', Context),\n ('api', CheckerPluginInterface)])\n\n\nclass Plugin:\n \"\"\"Base class of all type checker plugins.\n\n This defines a no-op plugin. Subclasses can override some methods to\n provide some actual functionality.\n\n All get_ methods are treated as pure functions (you should assume that\n results might be cached).\n\n Look at the comments of various *Context objects for descriptions of\n various hooks.\n \"\"\"\n\n def __init__(self, options: Options) -> None:\n self.options = options\n self.python_version = options.python_version\n\n def get_type_analyze_hook(self, fullname: str\n ) -> Optional[Callable[[AnalyzeTypeContext], Type]]:\n return None\n\n def get_function_hook(self, fullname: str\n ) -> Optional[Callable[[FunctionContext], Type]]:\n return None\n\n def get_method_signature_hook(self, fullname: str\n ) -> Optional[Callable[[MethodSigContext], CallableType]]:\n return None\n\n def get_method_hook(self, fullname: str\n ) -> Optional[Callable[[MethodContext], Type]]:\n return None\n\n def get_attribute_hook(self, fullname: str\n ) -> Optional[Callable[[AttributeContext], Type]]:\n return None\n\n # TODO: metaclass / class decorator hook\n\n\nT = TypeVar('T')\n\n\nclass ChainedPlugin(Plugin):\n \"\"\"A plugin that represents a sequence of chained plugins.\n\n Each lookup method returns the hook for the first plugin that\n reports a match.\n\n This class should not be subclassed -- use Plugin as the base class\n for all plugins.\n \"\"\"\n\n # TODO: Support caching of lookup results (through a LRU cache, for example).\n\n def __init__(self, options: Options, plugins: List[Plugin]) -> None:\n \"\"\"Initialize chained plugin.\n\n Assume that the child plugins aren't mutated (results may be cached).\n \"\"\"\n super().__init__(options)\n self._plugins = plugins\n\n def get_type_analyze_hook(self, fullname: str\n ) -> Optional[Callable[[AnalyzeTypeContext], Type]]:\n return 
self._find_hook(lambda plugin: plugin.get_type_analyze_hook(fullname))\n\n def get_function_hook(self, fullname: str\n ) -> Optional[Callable[[FunctionContext], Type]]:\n return self._find_hook(lambda plugin: plugin.get_function_hook(fullname))\n\n def get_method_signature_hook(self, fullname: str\n ) -> Optional[Callable[[MethodSigContext], CallableType]]:\n return self._find_hook(lambda plugin: plugin.get_method_signature_hook(fullname))\n\n def get_method_hook(self, fullname: str\n ) -> Optional[Callable[[MethodContext], Type]]:\n return self._find_hook(lambda plugin: plugin.get_method_hook(fullname))\n\n def get_attribute_hook(self, fullname: str\n ) -> Optional[Callable[[AttributeContext], Type]]:\n return self._find_hook(lambda plugin: plugin.get_attribute_hook(fullname))\n\n def _find_hook(self, lookup: Callable[[Plugin], T]) -> Optional[T]:\n for plugin in self._plugins:\n hook = lookup(plugin)\n if hook:\n return hook\n return None\n\n\nclass DefaultPlugin(Plugin):\n \"\"\"Type checker plugin that is enabled by default.\"\"\"\n\n def get_function_hook(self, fullname: str\n ) -> Optional[Callable[[FunctionContext], Type]]:\n if fullname == 'contextlib.contextmanager':\n return contextmanager_callback\n elif fullname == 'builtins.open' and self.python_version[0] == 3:\n return open_callback\n return None\n\n def get_method_signature_hook(self, fullname: str\n ) -> Optional[Callable[[MethodSigContext], CallableType]]:\n if fullname == 'typing.Mapping.get':\n return typed_dict_get_signature_callback\n return None\n\n def get_method_hook(self, fullname: str\n ) -> Optional[Callable[[MethodContext], Type]]:\n if fullname == 'typing.Mapping.get':\n return typed_dict_get_callback\n elif fullname == 'builtins.int.__pow__':\n return int_pow_callback\n return None\n\n\ndef open_callback(ctx: FunctionContext) -> Type:\n \"\"\"Infer a better return type for 'open'.\n\n Infer TextIO or BinaryIO as the return value if the mode argument is not\n given or is a literal.\n \"\"\"\n mode = None\n if not ctx.arg_types or len(ctx.arg_types[1]) != 1:\n mode = 'r'\n elif isinstance(ctx.args[1][0], StrExpr):\n mode = ctx.args[1][0].value\n if mode is not None:\n assert isinstance(ctx.default_return_type, Instance)\n if 'b' in mode:\n return ctx.api.named_generic_type('typing.BinaryIO', [])\n else:\n return ctx.api.named_generic_type('typing.TextIO', [])\n return ctx.default_return_type\n\n\ndef contextmanager_callback(ctx: FunctionContext) -> Type:\n \"\"\"Infer a better return type for 'contextlib.contextmanager'.\"\"\"\n # Be defensive, just in case.\n if ctx.arg_types and len(ctx.arg_types[0]) == 1:\n arg_type = ctx.arg_types[0][0]\n if (isinstance(arg_type, CallableType)\n and isinstance(ctx.default_return_type, CallableType)):\n # The stub signature doesn't preserve information about arguments so\n # add them back here.\n return ctx.default_return_type.copy_modified(\n arg_types=arg_type.arg_types,\n arg_kinds=arg_type.arg_kinds,\n arg_names=arg_type.arg_names,\n variables=arg_type.variables,\n is_ellipsis_args=arg_type.is_ellipsis_args)\n return ctx.default_return_type\n\n\ndef typed_dict_get_signature_callback(ctx: MethodSigContext) -> CallableType:\n \"\"\"Try to infer a better signature type for TypedDict.get.\n\n This is used to get better type context for the second argument that\n depends on a TypedDict value type.\n \"\"\"\n signature = ctx.default_signature\n if (isinstance(ctx.type, TypedDictType)\n and len(ctx.args) == 2\n and len(ctx.args[0]) == 1\n and isinstance(ctx.args[0][0], 
StrExpr)\n and len(signature.arg_types) == 2\n and len(signature.variables) == 1):\n key = ctx.args[0][0].value\n value_type = ctx.type.items.get(key)\n if value_type:\n # Tweak the signature to include the value type as context. It's\n # only needed for type inference since there's a union with a type\n # variable that accepts everything.\n tv = TypeVarType(signature.variables[0])\n return signature.copy_modified(\n arg_types=[signature.arg_types[0],\n UnionType.make_simplified_union([value_type, tv])])\n return signature\n\n\ndef typed_dict_get_callback(ctx: MethodContext) -> Type:\n \"\"\"Infer a precise return type for TypedDict.get with literal first argument.\"\"\"\n if (isinstance(ctx.type, TypedDictType)\n and len(ctx.arg_types) >= 1\n and len(ctx.arg_types[0]) == 1):\n if isinstance(ctx.args[0][0], StrExpr):\n key = ctx.args[0][0].value\n value_type = ctx.type.items.get(key)\n if value_type:\n if len(ctx.arg_types) == 1:\n return UnionType.make_simplified_union([value_type, NoneTyp()])\n elif len(ctx.arg_types) == 2 and len(ctx.arg_types[1]) == 1:\n return UnionType.make_simplified_union([value_type, ctx.arg_types[1][0]])\n else:\n ctx.api.msg.typeddict_item_name_not_found(ctx.type, key, ctx.context)\n return AnyType()\n return ctx.default_return_type\n\n\ndef int_pow_callback(ctx: MethodContext) -> Type:\n \"\"\"Infer a more precise return type for int.__pow__.\"\"\"\n if (len(ctx.arg_types) == 1\n and len(ctx.arg_types[0]) == 1):\n arg = ctx.args[0][0]\n if isinstance(arg, IntExpr):\n exponent = arg.value\n elif isinstance(arg, UnaryExpr) and arg.op == '-' and isinstance(arg.expr, IntExpr):\n exponent = -arg.expr.value\n else:\n # Right operand not an int literal or a negated literal -- give up.\n return ctx.default_return_type\n if exponent >= 0:\n return ctx.api.named_generic_type('builtins.int', [])\n else:\n return ctx.api.named_generic_type('builtins.float', [])\n return ctx.default_return_type\n", "path": "mypy/plugin.py"}]} | 3,847 | 120 |
gh_patches_debug_35767 | rasdani/github-patches | git_diff | microsoft__ptvsd-895 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Race condition in JsonMessageChannel
```py
def send_request(self, command, arguments=None):
d = {'command': command}
if arguments is not None:
d['arguments'] = arguments
seq = self._send_message('request', d)
request = Request(self, seq)
with self._lock:
self._requests[seq] = request
return request
```
Note that the message is sent before the requests dict is updated. If it goes fast enough, the response handler will receive a response to an "unknown" request.
--- END ISSUE ---
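For illustration only, a minimal, hypothetical sketch of the window the issue describes (all names here are invented; none come from ptvsd): the reader thread can handle the response before the sender has recorded the request.
```python
import threading
import time

# Hypothetical, self-contained sketch of the ordering hazard; not ptvsd code.
requests = {}

def handle_response(seq, sent):
    sent.wait()                    # the "response" arrives as soon as the request is on the wire
    if seq not in requests:
        print('Received response to unknown request %d' % seq)

def send_request(seq):
    sent = threading.Event()
    reader = threading.Thread(target=handle_response, args=(seq, sent))
    reader.start()
    sent.set()                     # 1. the message is written to the stream
    time.sleep(0.1)                # give the reader a turn, as a real network round-trip would
    requests[seq] = object()       # 2. the request is registered -- too late
    reader.join()

send_request(1)                    # prints: Received response to unknown request 1
```
Registering the request under the lock before the message is written removes this window.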
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ptvsd/messaging.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 from __future__ import print_function, with_statement, absolute_import
6
7 import collections
8 import itertools
9 import json
10 import sys
11 import threading
12
13
14 class JsonIOStream(object):
15 """Implements a JSON value stream over two byte streams (input and output).
16
17 Each value is encoded as a packet consisting of a header and a body, as defined by the
18 Debug Adapter Protocol (https://microsoft.github.io/debug-adapter-protocol/overview).
19 """
20
21 MAX_BODY_SIZE = 0xFFFFFF
22
23 @classmethod
24 def from_stdio(cls):
25 if sys.version_info >= (3,):
26 stdin = sys.stdin.buffer
27 stdout = sys.stdout.buffer
28 else:
29 stdin = sys.stdin
30 stdout = sys.stdout
31 if sys.platform == 'win32':
32 import os, msvcrt
33 msvcrt.setmode(stdin.fileno(), os.O_BINARY)
34 msvcrt.setmode(stdout.fileno(), os.O_BINARY)
35 return cls(stdin, stdout)
36
37 @classmethod
38 def from_socket(cls, socket):
39 if socket.gettimeout() is not None:
40 raise ValueError('Socket must be in blocking mode')
41 socket_io = socket.makefile('rwb', 0)
42 return cls(socket_io, socket_io)
43
44 def __init__(self, reader, writer):
45 """Creates a new JsonIOStream.
46
47 reader is a BytesIO-like object from which incoming messages are read;
48 reader.readline() must treat '\n' as the line terminator, and must leave
49 '\r' as is (i.e. it must not translate '\r\n' to just plain '\n'!).
50
51 writer is a BytesIO-like object to which outgoing messages are written.
52 """
53 self._reader = reader
54 self._writer = writer
55 self._is_closing = False
56
57 def close(self):
58 self._is_closing = True
59 self._reader.close()
60 self._writer.close()
61
62 def _read_line(self):
63 line = b''
64 while True:
65 line += self._reader.readline()
66 if not line:
67 raise EOFError
68 if line.endswith(b'\r\n'):
69 line = line[0:-2]
70 return line
71
72 def read_json(self):
73 """Read a single JSON value from reader.
74
75 Returns JSON value as parsed by json.loads(), or raises EOFError
76 if there are no more objects to be read.
77 """
78
79 headers = {}
80 while True:
81 try:
82 line = self._read_line()
83 except Exception:
84 if self._is_closing:
85 raise EOFError
86 else:
87 raise
88
89 if line == b'':
90 break
91 key, _, value = line.partition(b':')
92 headers[key] = value
93
94 try:
95 length = int(headers[b'Content-Length'])
96 if not (0 <= length <= self.MAX_BODY_SIZE):
97 raise ValueError
98 except (KeyError, ValueError):
99 raise IOError('Content-Length is missing or invalid')
100
101 try:
102 body = self._reader.read(length)
103 except Exception:
104 if self._is_closing:
105 raise EOFError
106 else:
107 raise
108
109 if isinstance(body, bytes):
110 body = body.decode('utf-8')
111 return json.loads(body)
112
113 def write_json(self, value):
114 """Write a single JSON object to writer.
115
116 object must be in the format suitable for json.dump().
117 """
118
119 body = json.dumps(value, sort_keys=True)
120 if not isinstance(body, bytes):
121 body = body.encode('utf-8')
122
123 header = 'Content-Length: %d\r\n\r\n' % len(body)
124 if not isinstance(header, bytes):
125 header = header.encode('ascii')
126
127 self._writer.write(header)
128 self._writer.write(body)
129
130
131 Response = collections.namedtuple('Response', ('success', 'command', 'error_message', 'body'))
132 Response.__new__.__defaults__ = (None, None)
133 class Response(Response):
134 """Represents a received response to a Request."""
135
136
137 class RequestFailure(Exception):
138 def __init__(self, message):
139 self.message = message
140
141
142 class Request(object):
143 """Represents a request that was sent to the other party, and is awaiting or has
144 already received a response.
145 """
146
147 def __init__(self, channel, seq):
148 self.channel = channel
149 self.seq = seq
150 self.response = None
151 self._lock = threading.Lock()
152 self._got_response = threading.Event()
153 self._callback = lambda _: None
154
155 def _handle_response(self, success, command, error_message=None, body=None):
156 assert self.response is None
157 with self._lock:
158 response = Response(success, command, error_message, body)
159 self.response = response
160 callback = self._callback
161 callback(response)
162 self._got_response.set()
163
164 def wait_for_response(self, raise_if_failed=True):
165 """Waits until a response is received for this request, records that
166 response as a new Response object accessible via self.response,
167 and returns self.response.body.
168
169 If raise_if_failed is True, and the received response does not indicate
170 success, raises RequestFailure. Otherwise, self.response.success has to
171 be inspected to determine whether the request failed or succeeded, since
172 self.response.body can be None in either case.
173 """
174
175 self._got_response.wait()
176 if raise_if_failed and not self.response.success:
177 raise RequestFailure(self.response.error_message)
178 return self.response
179
180 def on_response(self, callback):
181 """Registers a callback to invoke when a response is received for this
182 request. If response was already received, invokes callback immediately.
183 Callback is invoked with Response object as the sole argument.
184
185 The callback is invoked on an unspecified background thread that performs
186 processing of incoming messages; therefore, no further message processing
187 occurs until the callback returns.
188 """
189
190 with self._lock:
191 response = self.response
192 if response is None:
193 self._callback = callback
194 return
195 callback(response)
196
197
198 class JsonMessageChannel(object):
199 """Implements a JSON message channel on top of a JSON stream, with
200 support for generic Request, Response and Event messages as defined by the
201 Debug Adapter Protocol (https://microsoft.github.io/debug-adapter-protocol/overview).
202 """
203
204 def __init__(self, stream, handlers=None):
205 self.stream = stream
206 self.send_callback = lambda channel, message: None
207 self.receive_callback = lambda channel, message: None
208 self._lock = threading.Lock()
209 self._stop = threading.Event()
210 self._seq_iter = itertools.count(1)
211 self._requests = {}
212 self._handlers = handlers
213 self._worker = threading.Thread(target=self._process_incoming_messages)
214 self._worker.daemon = True
215
216 def close(self):
217 self.stream.close()
218
219 def start(self):
220 self._worker.start()
221
222 def wait(self):
223 self._worker.join()
224
225 def _send_message(self, type, rest={}):
226 with self._lock:
227 seq = next(self._seq_iter)
228 message = {
229 'seq': seq,
230 'type': type,
231 }
232 message.update(rest)
233 with self._lock:
234 self.stream.write_json(message)
235 self.send_callback(self, message)
236 return seq
237
238 def send_request(self, command, arguments=None):
239 d = {'command': command}
240 if arguments is not None:
241 d['arguments'] = arguments
242 seq = self._send_message('request', d)
243 request = Request(self, seq)
244 with self._lock:
245 self._requests[seq] = request
246 return request
247
248 def send_event(self, event, body=None):
249 d = {'event': event}
250 if body is not None:
251 d['body'] = body
252 self._send_message('event', d)
253
254 def send_response(self, request_seq, success, command, error_message=None, body=None):
255 d = {
256 'request_seq': request_seq,
257 'success': success,
258 'command': command,
259 }
260 if success:
261 if body is not None:
262 d['body'] = body
263 else:
264 if error_message is not None:
265 d['message'] = error_message
266 self._send_message('response', d)
267
268 def on_message(self, message):
269 self.receive_callback(self, message)
270 seq = message['seq']
271 typ = message['type']
272 if typ == 'request':
273 command = message['command']
274 arguments = message.get('arguments', None)
275 self.on_request(seq, command, arguments)
276 elif typ == 'event':
277 event = message['event']
278 body = message.get('body', None)
279 self.on_event(seq, event, body)
280 elif typ == 'response':
281 request_seq = message['request_seq']
282 success = message['success']
283 command = message['command']
284 error_message = message.get('message', None)
285 body = message.get('body', None)
286 self.on_response(seq, request_seq, success, command, error_message, body)
287 else:
288 raise IOError('Incoming message has invalid "type":\n%r' % message)
289
290 def on_request(self, seq, command, arguments):
291 handler_name = '%s_request' % command
292 specific_handler = getattr(self._handlers, handler_name, None)
293 if specific_handler is not None:
294 handler = lambda: specific_handler(self, arguments)
295 else:
296 generic_handler = getattr(self._handlers, 'request')
297 handler = lambda: generic_handler(self, command, arguments)
298 try:
299 response_body = handler()
300 except Exception as ex:
301 self.send_response(seq, False, command, str(ex))
302 else:
303 self.send_response(seq, True, command, None, response_body)
304
305 def on_event(self, seq, event, body):
306 handler_name = '%s_event' % event
307 specific_handler = getattr(self._handlers, handler_name, None)
308 if specific_handler is not None:
309 handler = lambda: specific_handler(self, body)
310 else:
311 generic_handler = getattr(self._handlers, 'event')
312 handler = lambda: generic_handler(self, event, body)
313 handler()
314
315 def on_response(self, seq, request_seq, success, command, error_message, body):
316 try:
317 with self._lock:
318 request = self._requests.pop(request_seq)
319 except KeyError:
320 raise KeyError('Received response to unknown request %d', request_seq)
321 return request._handle_response(success, command, error_message, body)
322
323 def _process_incoming_messages(self):
324 while True:
325 try:
326 message = self.stream.read_json()
327 except EOFError:
328 break
329 try:
330 self.on_message(message)
331 except Exception:
332 print('Error while processing message:\n%r\n\n' % message, file=sys.__stderr__)
333 raise
334
335
336 class MessageHandlers(object):
337 """A simple delegating message handlers object for use with JsonMessageChannel.
338 For every argument provided, the object has an attribute with the corresponding
339 name and value. Example:
340 """
341
342 def __init__(self, **kwargs):
343 for name, func in kwargs.items():
344 setattr(self, name, func)
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ptvsd/messaging.py b/ptvsd/messaging.py
--- a/ptvsd/messaging.py
+++ b/ptvsd/messaging.py
@@ -5,6 +5,7 @@
from __future__ import print_function, with_statement, absolute_import
import collections
+import contextlib
import itertools
import json
import sys
@@ -222,6 +223,7 @@
def wait(self):
self._worker.join()
+ @contextlib.contextmanager
def _send_message(self, type, rest={}):
with self._lock:
seq = next(self._seq_iter)
@@ -231,17 +233,16 @@
}
message.update(rest)
with self._lock:
+ yield seq
self.stream.write_json(message)
self.send_callback(self, message)
- return seq
def send_request(self, command, arguments=None):
d = {'command': command}
if arguments is not None:
d['arguments'] = arguments
- seq = self._send_message('request', d)
- request = Request(self, seq)
- with self._lock:
+ with self._send_message('request', d) as seq:
+ request = Request(self, seq)
self._requests[seq] = request
return request
@@ -249,7 +250,8 @@
d = {'event': event}
if body is not None:
d['body'] = body
- self._send_message('event', d)
+ with self._send_message('event', d):
+ pass
def send_response(self, request_seq, success, command, error_message=None, body=None):
d = {
@@ -263,7 +265,8 @@
else:
if error_message is not None:
d['message'] = error_message
- self._send_message('response', d)
+ with self._send_message('response', d):
+ pass
def on_message(self, message):
self.receive_callback(self, message)
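A brief note on the shape of the fix above, with a hypothetical, self-contained sketch (names invented here, not taken from ptvsd): making `_send_message` a context manager that yields the sequence number first lets the caller finish its bookkeeping before the write happens, which is what closes the race.
```python
import contextlib
import itertools

_seq_iter = itertools.count(1)
_requests = {}
_sent = []

@contextlib.contextmanager
def sending(payload):
    seq = next(_seq_iter)
    yield seq                        # the caller does its bookkeeping here...
    _sent.append((seq, payload))     # ...and only then does the message go out

with sending({'command': 'ping'}) as seq:
    _requests[seq] = 'pending'       # registered before the "send" happens

assert list(_requests) == [1]
assert _sent == [(1, {'command': 'ping'})]
```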
| {"golden_diff": "diff --git a/ptvsd/messaging.py b/ptvsd/messaging.py\n--- a/ptvsd/messaging.py\n+++ b/ptvsd/messaging.py\n@@ -5,6 +5,7 @@\n from __future__ import print_function, with_statement, absolute_import\n \n import collections\n+import contextlib\n import itertools\n import json\n import sys\n@@ -222,6 +223,7 @@\n def wait(self):\n self._worker.join()\n \n+ @contextlib.contextmanager\n def _send_message(self, type, rest={}):\n with self._lock:\n seq = next(self._seq_iter)\n@@ -231,17 +233,16 @@\n }\n message.update(rest)\n with self._lock:\n+ yield seq\n self.stream.write_json(message)\n self.send_callback(self, message)\n- return seq\n \n def send_request(self, command, arguments=None):\n d = {'command': command}\n if arguments is not None:\n d['arguments'] = arguments\n- seq = self._send_message('request', d)\n- request = Request(self, seq)\n- with self._lock:\n+ with self._send_message('request', d) as seq:\n+ request = Request(self, seq)\n self._requests[seq] = request\n return request\n \n@@ -249,7 +250,8 @@\n d = {'event': event}\n if body is not None:\n d['body'] = body\n- self._send_message('event', d)\n+ with self._send_message('event', d):\n+ pass\n \n def send_response(self, request_seq, success, command, error_message=None, body=None):\n d = {\n@@ -263,7 +265,8 @@\n else:\n if error_message is not None:\n d['message'] = error_message\n- self._send_message('response', d)\n+ with self._send_message('response', d):\n+ pass\n \n def on_message(self, message):\n self.receive_callback(self, message)\n", "issue": "Race condition in JsonMessageChannel\n```py\r\n def send_request(self, command, arguments=None):\r\n d = {'command': command}\r\n if arguments is not None:\r\n d['arguments'] = arguments\r\n seq = self._send_message('request', d)\r\n request = Request(self, seq)\r\n with self._lock:\r\n self._requests[seq] = request\r\n return request\r\n```\r\nNote that the message is sent before the requests dict is updated. If it goes fast enough, the response handler will receive a response to an \"unknown\" request.\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nfrom __future__ import print_function, with_statement, absolute_import\n\nimport collections\nimport itertools\nimport json\nimport sys\nimport threading\n\n\nclass JsonIOStream(object):\n \"\"\"Implements a JSON value stream over two byte streams (input and output).\n\n Each value is encoded as a packet consisting of a header and a body, as defined by the\n Debug Adapter Protocol (https://microsoft.github.io/debug-adapter-protocol/overview).\n \"\"\"\n\n MAX_BODY_SIZE = 0xFFFFFF\n\n @classmethod\n def from_stdio(cls):\n if sys.version_info >= (3,):\n stdin = sys.stdin.buffer\n stdout = sys.stdout.buffer\n else:\n stdin = sys.stdin\n stdout = sys.stdout\n if sys.platform == 'win32':\n import os, msvcrt\n msvcrt.setmode(stdin.fileno(), os.O_BINARY)\n msvcrt.setmode(stdout.fileno(), os.O_BINARY)\n return cls(stdin, stdout)\n\n @classmethod\n def from_socket(cls, socket):\n if socket.gettimeout() is not None:\n raise ValueError('Socket must be in blocking mode')\n socket_io = socket.makefile('rwb', 0)\n return cls(socket_io, socket_io)\n\n def __init__(self, reader, writer):\n \"\"\"Creates a new JsonIOStream.\n\n reader is a BytesIO-like object from which incoming messages are read;\n reader.readline() must treat '\\n' as the line terminator, and must leave\n '\\r' as is (i.e. 
it must not translate '\\r\\n' to just plain '\\n'!).\n\n writer is a BytesIO-like object to which outgoing messages are written.\n \"\"\"\n self._reader = reader\n self._writer = writer\n self._is_closing = False\n\n def close(self):\n self._is_closing = True\n self._reader.close()\n self._writer.close()\n\n def _read_line(self):\n line = b''\n while True:\n line += self._reader.readline()\n if not line:\n raise EOFError\n if line.endswith(b'\\r\\n'):\n line = line[0:-2]\n return line\n\n def read_json(self):\n \"\"\"Read a single JSON value from reader.\n\n Returns JSON value as parsed by json.loads(), or raises EOFError\n if there are no more objects to be read.\n \"\"\"\n\n headers = {}\n while True:\n try:\n line = self._read_line()\n except Exception:\n if self._is_closing:\n raise EOFError\n else:\n raise\n\n if line == b'':\n break\n key, _, value = line.partition(b':')\n headers[key] = value\n\n try:\n length = int(headers[b'Content-Length'])\n if not (0 <= length <= self.MAX_BODY_SIZE):\n raise ValueError\n except (KeyError, ValueError):\n raise IOError('Content-Length is missing or invalid')\n\n try:\n body = self._reader.read(length)\n except Exception:\n if self._is_closing:\n raise EOFError\n else:\n raise\n\n if isinstance(body, bytes):\n body = body.decode('utf-8')\n return json.loads(body)\n\n def write_json(self, value):\n \"\"\"Write a single JSON object to writer.\n\n object must be in the format suitable for json.dump().\n \"\"\"\n\n body = json.dumps(value, sort_keys=True)\n if not isinstance(body, bytes):\n body = body.encode('utf-8')\n\n header = 'Content-Length: %d\\r\\n\\r\\n' % len(body)\n if not isinstance(header, bytes):\n header = header.encode('ascii')\n\n self._writer.write(header)\n self._writer.write(body)\n\n\nResponse = collections.namedtuple('Response', ('success', 'command', 'error_message', 'body'))\nResponse.__new__.__defaults__ = (None, None)\nclass Response(Response):\n \"\"\"Represents a received response to a Request.\"\"\"\n\n\nclass RequestFailure(Exception):\n def __init__(self, message):\n self.message = message\n\n\nclass Request(object):\n \"\"\"Represents a request that was sent to the other party, and is awaiting or has\n already received a response.\n \"\"\"\n\n def __init__(self, channel, seq):\n self.channel = channel\n self.seq = seq\n self.response = None\n self._lock = threading.Lock()\n self._got_response = threading.Event()\n self._callback = lambda _: None\n\n def _handle_response(self, success, command, error_message=None, body=None):\n assert self.response is None\n with self._lock:\n response = Response(success, command, error_message, body)\n self.response = response\n callback = self._callback\n callback(response)\n self._got_response.set()\n\n def wait_for_response(self, raise_if_failed=True):\n \"\"\"Waits until a response is received for this request, records that\n response as a new Response object accessible via self.response,\n and returns self.response.body.\n\n If raise_if_failed is True, and the received response does not indicate\n success, raises RequestFailure. Otherwise, self.response.success has to\n be inspected to determine whether the request failed or succeeded, since\n self.response.body can be None in either case.\n \"\"\"\n\n self._got_response.wait()\n if raise_if_failed and not self.response.success:\n raise RequestFailure(self.response.error_message)\n return self.response\n\n def on_response(self, callback):\n \"\"\"Registers a callback to invoke when a response is received for this\n request. 
If response was already received, invokes callback immediately.\n Callback is invoked with Response object as the sole argument.\n\n The callback is invoked on an unspecified background thread that performs\n processing of incoming messages; therefore, no further message processing\n occurs until the callback returns.\n \"\"\"\n\n with self._lock:\n response = self.response\n if response is None:\n self._callback = callback\n return\n callback(response)\n\n\nclass JsonMessageChannel(object):\n \"\"\"Implements a JSON message channel on top of a JSON stream, with\n support for generic Request, Response and Event messages as defined by the\n Debug Adapter Protocol (https://microsoft.github.io/debug-adapter-protocol/overview).\n \"\"\"\n\n def __init__(self, stream, handlers=None):\n self.stream = stream\n self.send_callback = lambda channel, message: None\n self.receive_callback = lambda channel, message: None\n self._lock = threading.Lock()\n self._stop = threading.Event()\n self._seq_iter = itertools.count(1)\n self._requests = {}\n self._handlers = handlers\n self._worker = threading.Thread(target=self._process_incoming_messages)\n self._worker.daemon = True\n\n def close(self):\n self.stream.close()\n\n def start(self):\n self._worker.start()\n\n def wait(self):\n self._worker.join()\n\n def _send_message(self, type, rest={}):\n with self._lock:\n seq = next(self._seq_iter)\n message = {\n 'seq': seq,\n 'type': type,\n }\n message.update(rest)\n with self._lock:\n self.stream.write_json(message)\n self.send_callback(self, message)\n return seq\n\n def send_request(self, command, arguments=None):\n d = {'command': command}\n if arguments is not None:\n d['arguments'] = arguments\n seq = self._send_message('request', d)\n request = Request(self, seq)\n with self._lock:\n self._requests[seq] = request\n return request\n\n def send_event(self, event, body=None):\n d = {'event': event}\n if body is not None:\n d['body'] = body\n self._send_message('event', d)\n\n def send_response(self, request_seq, success, command, error_message=None, body=None):\n d = {\n 'request_seq': request_seq,\n 'success': success,\n 'command': command,\n }\n if success:\n if body is not None:\n d['body'] = body\n else:\n if error_message is not None:\n d['message'] = error_message\n self._send_message('response', d)\n\n def on_message(self, message):\n self.receive_callback(self, message)\n seq = message['seq']\n typ = message['type']\n if typ == 'request':\n command = message['command']\n arguments = message.get('arguments', None)\n self.on_request(seq, command, arguments)\n elif typ == 'event':\n event = message['event']\n body = message.get('body', None)\n self.on_event(seq, event, body)\n elif typ == 'response':\n request_seq = message['request_seq']\n success = message['success']\n command = message['command']\n error_message = message.get('message', None)\n body = message.get('body', None)\n self.on_response(seq, request_seq, success, command, error_message, body)\n else:\n raise IOError('Incoming message has invalid \"type\":\\n%r' % message)\n\n def on_request(self, seq, command, arguments):\n handler_name = '%s_request' % command\n specific_handler = getattr(self._handlers, handler_name, None)\n if specific_handler is not None:\n handler = lambda: specific_handler(self, arguments)\n else:\n generic_handler = getattr(self._handlers, 'request')\n handler = lambda: generic_handler(self, command, arguments)\n try:\n response_body = handler()\n except Exception as ex:\n self.send_response(seq, False, command, 
str(ex))\n else:\n self.send_response(seq, True, command, None, response_body)\n\n def on_event(self, seq, event, body):\n handler_name = '%s_event' % event\n specific_handler = getattr(self._handlers, handler_name, None)\n if specific_handler is not None:\n handler = lambda: specific_handler(self, body)\n else:\n generic_handler = getattr(self._handlers, 'event')\n handler = lambda: generic_handler(self, event, body)\n handler()\n\n def on_response(self, seq, request_seq, success, command, error_message, body):\n try:\n with self._lock:\n request = self._requests.pop(request_seq)\n except KeyError:\n raise KeyError('Received response to unknown request %d', request_seq)\n return request._handle_response(success, command, error_message, body)\n\n def _process_incoming_messages(self):\n while True:\n try:\n message = self.stream.read_json()\n except EOFError:\n break\n try:\n self.on_message(message)\n except Exception:\n print('Error while processing message:\\n%r\\n\\n' % message, file=sys.__stderr__)\n raise\n\n\nclass MessageHandlers(object):\n \"\"\"A simple delegating message handlers object for use with JsonMessageChannel.\n For every argument provided, the object has an attribute with the corresponding\n name and value. Example:\n \"\"\"\n\n def __init__(self, **kwargs):\n for name, func in kwargs.items():\n setattr(self, name, func)", "path": "ptvsd/messaging.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nfrom __future__ import print_function, with_statement, absolute_import\n\nimport collections\nimport contextlib\nimport itertools\nimport json\nimport sys\nimport threading\n\n\nclass JsonIOStream(object):\n \"\"\"Implements a JSON value stream over two byte streams (input and output).\n\n Each value is encoded as a packet consisting of a header and a body, as defined by the\n Debug Adapter Protocol (https://microsoft.github.io/debug-adapter-protocol/overview).\n \"\"\"\n\n MAX_BODY_SIZE = 0xFFFFFF\n\n @classmethod\n def from_stdio(cls):\n if sys.version_info >= (3,):\n stdin = sys.stdin.buffer\n stdout = sys.stdout.buffer\n else:\n stdin = sys.stdin\n stdout = sys.stdout\n if sys.platform == 'win32':\n import os, msvcrt\n msvcrt.setmode(stdin.fileno(), os.O_BINARY)\n msvcrt.setmode(stdout.fileno(), os.O_BINARY)\n return cls(stdin, stdout)\n\n @classmethod\n def from_socket(cls, socket):\n if socket.gettimeout() is not None:\n raise ValueError('Socket must be in blocking mode')\n socket_io = socket.makefile('rwb', 0)\n return cls(socket_io, socket_io)\n\n def __init__(self, reader, writer):\n \"\"\"Creates a new JsonIOStream.\n\n reader is a BytesIO-like object from which incoming messages are read;\n reader.readline() must treat '\\n' as the line terminator, and must leave\n '\\r' as is (i.e. 
it must not translate '\\r\\n' to just plain '\\n'!).\n\n writer is a BytesIO-like object to which outgoing messages are written.\n \"\"\"\n self._reader = reader\n self._writer = writer\n self._is_closing = False\n\n def close(self):\n self._is_closing = True\n self._reader.close()\n self._writer.close()\n\n def _read_line(self):\n line = b''\n while True:\n line += self._reader.readline()\n if not line:\n raise EOFError\n if line.endswith(b'\\r\\n'):\n line = line[0:-2]\n return line\n\n def read_json(self):\n \"\"\"Read a single JSON value from reader.\n\n Returns JSON value as parsed by json.loads(), or raises EOFError\n if there are no more objects to be read.\n \"\"\"\n\n headers = {}\n while True:\n try:\n line = self._read_line()\n except Exception:\n if self._is_closing:\n raise EOFError\n else:\n raise\n\n if line == b'':\n break\n key, _, value = line.partition(b':')\n headers[key] = value\n\n try:\n length = int(headers[b'Content-Length'])\n if not (0 <= length <= self.MAX_BODY_SIZE):\n raise ValueError\n except (KeyError, ValueError):\n raise IOError('Content-Length is missing or invalid')\n\n try:\n body = self._reader.read(length)\n except Exception:\n if self._is_closing:\n raise EOFError\n else:\n raise\n\n if isinstance(body, bytes):\n body = body.decode('utf-8')\n return json.loads(body)\n\n def write_json(self, value):\n \"\"\"Write a single JSON object to writer.\n\n object must be in the format suitable for json.dump().\n \"\"\"\n\n body = json.dumps(value, sort_keys=True)\n if not isinstance(body, bytes):\n body = body.encode('utf-8')\n\n header = 'Content-Length: %d\\r\\n\\r\\n' % len(body)\n if not isinstance(header, bytes):\n header = header.encode('ascii')\n\n self._writer.write(header)\n self._writer.write(body)\n\n\nResponse = collections.namedtuple('Response', ('success', 'command', 'error_message', 'body'))\nResponse.__new__.__defaults__ = (None, None)\nclass Response(Response):\n \"\"\"Represents a received response to a Request.\"\"\"\n\n\nclass RequestFailure(Exception):\n def __init__(self, message):\n self.message = message\n\n\nclass Request(object):\n \"\"\"Represents a request that was sent to the other party, and is awaiting or has\n already received a response.\n \"\"\"\n\n def __init__(self, channel, seq):\n self.channel = channel\n self.seq = seq\n self.response = None\n self._lock = threading.Lock()\n self._got_response = threading.Event()\n self._callback = lambda _: None\n\n def _handle_response(self, success, command, error_message=None, body=None):\n assert self.response is None\n with self._lock:\n response = Response(success, command, error_message, body)\n self.response = response\n callback = self._callback\n callback(response)\n self._got_response.set()\n\n def wait_for_response(self, raise_if_failed=True):\n \"\"\"Waits until a response is received for this request, records that\n response as a new Response object accessible via self.response,\n and returns self.response.body.\n\n If raise_if_failed is True, and the received response does not indicate\n success, raises RequestFailure. Otherwise, self.response.success has to\n be inspected to determine whether the request failed or succeeded, since\n self.response.body can be None in either case.\n \"\"\"\n\n self._got_response.wait()\n if raise_if_failed and not self.response.success:\n raise RequestFailure(self.response.error_message)\n return self.response\n\n def on_response(self, callback):\n \"\"\"Registers a callback to invoke when a response is received for this\n request. 
If response was already received, invokes callback immediately.\n Callback is invoked with Response object as the sole argument.\n\n The callback is invoked on an unspecified background thread that performs\n processing of incoming messages; therefore, no further message processing\n occurs until the callback returns.\n \"\"\"\n\n with self._lock:\n response = self.response\n if response is None:\n self._callback = callback\n return\n callback(response)\n\n\nclass JsonMessageChannel(object):\n \"\"\"Implements a JSON message channel on top of a JSON stream, with\n support for generic Request, Response and Event messages as defined by the\n Debug Adapter Protocol (https://microsoft.github.io/debug-adapter-protocol/overview).\n \"\"\"\n\n def __init__(self, stream, handlers=None):\n self.stream = stream\n self.send_callback = lambda channel, message: None\n self.receive_callback = lambda channel, message: None\n self._lock = threading.Lock()\n self._stop = threading.Event()\n self._seq_iter = itertools.count(1)\n self._requests = {}\n self._handlers = handlers\n self._worker = threading.Thread(target=self._process_incoming_messages)\n self._worker.daemon = True\n\n def close(self):\n self.stream.close()\n\n def start(self):\n self._worker.start()\n\n def wait(self):\n self._worker.join()\n\n @contextlib.contextmanager\n def _send_message(self, type, rest={}):\n with self._lock:\n seq = next(self._seq_iter)\n message = {\n 'seq': seq,\n 'type': type,\n }\n message.update(rest)\n with self._lock:\n yield seq\n self.stream.write_json(message)\n self.send_callback(self, message)\n\n def send_request(self, command, arguments=None):\n d = {'command': command}\n if arguments is not None:\n d['arguments'] = arguments\n with self._send_message('request', d) as seq:\n request = Request(self, seq)\n self._requests[seq] = request\n return request\n\n def send_event(self, event, body=None):\n d = {'event': event}\n if body is not None:\n d['body'] = body\n with self._send_message('event', d):\n pass\n\n def send_response(self, request_seq, success, command, error_message=None, body=None):\n d = {\n 'request_seq': request_seq,\n 'success': success,\n 'command': command,\n }\n if success:\n if body is not None:\n d['body'] = body\n else:\n if error_message is not None:\n d['message'] = error_message\n with self._send_message('response', d):\n pass\n\n def on_message(self, message):\n self.receive_callback(self, message)\n seq = message['seq']\n typ = message['type']\n if typ == 'request':\n command = message['command']\n arguments = message.get('arguments', None)\n self.on_request(seq, command, arguments)\n elif typ == 'event':\n event = message['event']\n body = message.get('body', None)\n self.on_event(seq, event, body)\n elif typ == 'response':\n request_seq = message['request_seq']\n success = message['success']\n command = message['command']\n error_message = message.get('message', None)\n body = message.get('body', None)\n self.on_response(seq, request_seq, success, command, error_message, body)\n else:\n raise IOError('Incoming message has invalid \"type\":\\n%r' % message)\n\n def on_request(self, seq, command, arguments):\n handler_name = '%s_request' % command\n specific_handler = getattr(self._handlers, handler_name, None)\n if specific_handler is not None:\n handler = lambda: specific_handler(self, arguments)\n else:\n generic_handler = getattr(self._handlers, 'request')\n handler = lambda: generic_handler(self, command, arguments)\n try:\n response_body = handler()\n except Exception as ex:\n 
self.send_response(seq, False, command, str(ex))\n else:\n self.send_response(seq, True, command, None, response_body)\n\n def on_event(self, seq, event, body):\n handler_name = '%s_event' % event\n specific_handler = getattr(self._handlers, handler_name, None)\n if specific_handler is not None:\n handler = lambda: specific_handler(self, body)\n else:\n generic_handler = getattr(self._handlers, 'event')\n handler = lambda: generic_handler(self, event, body)\n handler()\n\n def on_response(self, seq, request_seq, success, command, error_message, body):\n try:\n with self._lock:\n request = self._requests.pop(request_seq)\n except KeyError:\n raise KeyError('Received response to unknown request %d', request_seq)\n return request._handle_response(success, command, error_message, body)\n\n def _process_incoming_messages(self):\n while True:\n try:\n message = self.stream.read_json()\n except EOFError:\n break\n try:\n self.on_message(message)\n except Exception:\n print('Error while processing message:\\n%r\\n\\n' % message, file=sys.__stderr__)\n raise\n\n\nclass MessageHandlers(object):\n \"\"\"A simple delegating message handlers object for use with JsonMessageChannel.\n For every argument provided, the object has an attribute with the corresponding\n name and value. Example:\n \"\"\"\n\n def __init__(self, **kwargs):\n for name, func in kwargs.items():\n setattr(self, name, func)", "path": "ptvsd/messaging.py"}]} | 3,763 | 462 |
gh_patches_debug_13238 | rasdani/github-patches | git_diff | mindsdb__mindsdb-2007 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Response contains 'nan' instead of `null`
If you run:
```
select null, null, null from information_schema.tables limit 1;
```
then the response will be:
```
+------+--------+--------+
| None | None_2 | None_3 |
+------+--------+--------+
| nan | nan | nan |
+------+--------+--------+
```
The row values should be `null`.
--- END ISSUE ---
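As a quick illustration of what the reporter is seeing, here is a standalone pandas sketch (the DataFrame and column names are invented; only the NaN-to-None idiom comes from the fix shown further down in this record):
```python
import numpy as np
import pandas as pd

# SQL NULLs typically come back from duckdb's .df() as NaN in numeric columns.
df = pd.DataFrame({"a": [1.5, np.nan], "b": [np.nan, np.nan]})

cleaned = df.replace({np.nan: None})  # NaN -> real Python None (columns become object dtype)
print(cleaned.to_dict(orient="records"))
# [{'a': 1.5, 'b': None}, {'a': None, 'b': None}]
```
`replace({np.nan: None})` forces the affected columns to object dtype, which is what lets a real `None` (and hence an SQL `NULL`) survive serialization instead of showing up as the string `nan`.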
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mindsdb/api/mysql/mysql_proxy/utilities/sql.py`
Content:
```
1 import duckdb
2 import pandas as pd
3 from mindsdb_sql import parse_sql
4 from mindsdb_sql.parser.ast import Select, Identifier, BinaryOperation, OrderBy
5 from mindsdb_sql.render.sqlalchemy_render import SqlalchemyRender
6
7 from mindsdb.utilities.log import log
8
9
10 def _remove_table_name(root):
11 if isinstance(root, BinaryOperation):
12 _remove_table_name(root.args[0])
13 _remove_table_name(root.args[1])
14 elif isinstance(root, Identifier):
15 root.parts = [root.parts[-1]]
16
17
18 def query_df(df, query):
19 """ Perform simple query ('select' from one table, without subqueries and joins) on DataFrame.
20
21 Args:
22 df (pandas.DataFrame): data
23 query (mindsdb_sql.parser.ast.Select | str): select query
24
25 Returns:
26 pandas.DataFrame
27 """
28
29 if isinstance(query, str):
30 query_ast = parse_sql(query, dialect='mysql')
31 else:
32 query_ast = query
33
34 if isinstance(query_ast, Select) is False or isinstance(query_ast.from_table, Identifier) is False:
35 raise Exception("Only 'SELECT from TABLE' statements supported for internal query")
36
37 query_ast.from_table.parts = ['df_table']
38 for identifier in query_ast.targets:
39 if isinstance(identifier, Identifier):
40 identifier.parts = [identifier.parts[-1]]
41 if isinstance(query_ast.order_by, list):
42 for orderby in query_ast.order_by:
43 if isinstance(orderby, OrderBy) and isinstance(orderby.field, Identifier):
44 orderby.field.parts = [orderby.field.parts[-1]]
45 _remove_table_name(query_ast.where)
46
47 render = SqlalchemyRender('postgres')
48 try:
49 query_str = render.get_string(query_ast, with_failback=False)
50 except Exception as e:
51 log.error(f"Exception during query casting to 'postgres' dialect. Query: {str(query)}. Error: {e}")
52 query_str = render.get_string(query_ast, with_failback=True)
53
54 res = duckdb.query_df(df, 'df_table', query_str)
55 result_df = res.df()
56 result_df = result_df.where(pd.notnull(result_df), None)
57 return result_df
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mindsdb/api/mysql/mysql_proxy/utilities/sql.py b/mindsdb/api/mysql/mysql_proxy/utilities/sql.py
--- a/mindsdb/api/mysql/mysql_proxy/utilities/sql.py
+++ b/mindsdb/api/mysql/mysql_proxy/utilities/sql.py
@@ -1,5 +1,5 @@
import duckdb
-import pandas as pd
+import numpy as np
from mindsdb_sql import parse_sql
from mindsdb_sql.parser.ast import Select, Identifier, BinaryOperation, OrderBy
from mindsdb_sql.render.sqlalchemy_render import SqlalchemyRender
@@ -53,5 +53,5 @@
res = duckdb.query_df(df, 'df_table', query_str)
result_df = res.df()
- result_df = result_df.where(pd.notnull(result_df), None)
+ result_df = result_df.replace({np.nan: None})
return result_df
| {"golden_diff": "diff --git a/mindsdb/api/mysql/mysql_proxy/utilities/sql.py b/mindsdb/api/mysql/mysql_proxy/utilities/sql.py\n--- a/mindsdb/api/mysql/mysql_proxy/utilities/sql.py\n+++ b/mindsdb/api/mysql/mysql_proxy/utilities/sql.py\n@@ -1,5 +1,5 @@\n import duckdb\n-import pandas as pd\n+import numpy as np\n from mindsdb_sql import parse_sql\n from mindsdb_sql.parser.ast import Select, Identifier, BinaryOperation, OrderBy\n from mindsdb_sql.render.sqlalchemy_render import SqlalchemyRender\n@@ -53,5 +53,5 @@\n \n res = duckdb.query_df(df, 'df_table', query_str)\n result_df = res.df()\n- result_df = result_df.where(pd.notnull(result_df), None)\n+ result_df = result_df.replace({np.nan: None})\n return result_df\n", "issue": "Response contains 'nan' instead of `null`\nif do \r\n```\r\nselect null, null, null from information_schema.tables limit 1;\r\n```\r\nthen response will be:\r\n```\r\n+------+--------+--------+\r\n| None | None_2 | None_3 |\r\n+------+--------+--------+\r\n| nan | nan | nan |\r\n+------+--------+--------+\r\n```\r\nrow values must be `null`\r\n\n", "before_files": [{"content": "import duckdb\nimport pandas as pd\nfrom mindsdb_sql import parse_sql\nfrom mindsdb_sql.parser.ast import Select, Identifier, BinaryOperation, OrderBy\nfrom mindsdb_sql.render.sqlalchemy_render import SqlalchemyRender\n\nfrom mindsdb.utilities.log import log\n\n\ndef _remove_table_name(root):\n if isinstance(root, BinaryOperation):\n _remove_table_name(root.args[0])\n _remove_table_name(root.args[1])\n elif isinstance(root, Identifier):\n root.parts = [root.parts[-1]]\n\n\ndef query_df(df, query):\n \"\"\" Perform simple query ('select' from one table, without subqueries and joins) on DataFrame.\n\n Args:\n df (pandas.DataFrame): data\n query (mindsdb_sql.parser.ast.Select | str): select query\n\n Returns:\n pandas.DataFrame\n \"\"\"\n\n if isinstance(query, str):\n query_ast = parse_sql(query, dialect='mysql')\n else:\n query_ast = query\n\n if isinstance(query_ast, Select) is False or isinstance(query_ast.from_table, Identifier) is False:\n raise Exception(\"Only 'SELECT from TABLE' statements supported for internal query\")\n\n query_ast.from_table.parts = ['df_table']\n for identifier in query_ast.targets:\n if isinstance(identifier, Identifier):\n identifier.parts = [identifier.parts[-1]]\n if isinstance(query_ast.order_by, list):\n for orderby in query_ast.order_by:\n if isinstance(orderby, OrderBy) and isinstance(orderby.field, Identifier):\n orderby.field.parts = [orderby.field.parts[-1]]\n _remove_table_name(query_ast.where)\n\n render = SqlalchemyRender('postgres')\n try:\n query_str = render.get_string(query_ast, with_failback=False)\n except Exception as e:\n log.error(f\"Exception during query casting to 'postgres' dialect. Query: {str(query)}. 
Error: {e}\")\n query_str = render.get_string(query_ast, with_failback=True)\n\n res = duckdb.query_df(df, 'df_table', query_str)\n result_df = res.df()\n result_df = result_df.where(pd.notnull(result_df), None)\n return result_df\n", "path": "mindsdb/api/mysql/mysql_proxy/utilities/sql.py"}], "after_files": [{"content": "import duckdb\nimport numpy as np\nfrom mindsdb_sql import parse_sql\nfrom mindsdb_sql.parser.ast import Select, Identifier, BinaryOperation, OrderBy\nfrom mindsdb_sql.render.sqlalchemy_render import SqlalchemyRender\n\nfrom mindsdb.utilities.log import log\n\n\ndef _remove_table_name(root):\n if isinstance(root, BinaryOperation):\n _remove_table_name(root.args[0])\n _remove_table_name(root.args[1])\n elif isinstance(root, Identifier):\n root.parts = [root.parts[-1]]\n\n\ndef query_df(df, query):\n \"\"\" Perform simple query ('select' from one table, without subqueries and joins) on DataFrame.\n\n Args:\n df (pandas.DataFrame): data\n query (mindsdb_sql.parser.ast.Select | str): select query\n\n Returns:\n pandas.DataFrame\n \"\"\"\n\n if isinstance(query, str):\n query_ast = parse_sql(query, dialect='mysql')\n else:\n query_ast = query\n\n if isinstance(query_ast, Select) is False or isinstance(query_ast.from_table, Identifier) is False:\n raise Exception(\"Only 'SELECT from TABLE' statements supported for internal query\")\n\n query_ast.from_table.parts = ['df_table']\n for identifier in query_ast.targets:\n if isinstance(identifier, Identifier):\n identifier.parts = [identifier.parts[-1]]\n if isinstance(query_ast.order_by, list):\n for orderby in query_ast.order_by:\n if isinstance(orderby, OrderBy) and isinstance(orderby.field, Identifier):\n orderby.field.parts = [orderby.field.parts[-1]]\n _remove_table_name(query_ast.where)\n\n render = SqlalchemyRender('postgres')\n try:\n query_str = render.get_string(query_ast, with_failback=False)\n except Exception as e:\n log.error(f\"Exception during query casting to 'postgres' dialect. Query: {str(query)}. Error: {e}\")\n query_str = render.get_string(query_ast, with_failback=True)\n\n res = duckdb.query_df(df, 'df_table', query_str)\n result_df = res.df()\n result_df = result_df.replace({np.nan: None})\n return result_df\n", "path": "mindsdb/api/mysql/mysql_proxy/utilities/sql.py"}]} | 923 | 190 |
gh_patches_debug_28834 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1837 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: ART Trier Germany collecting no more Data
### I Have A Problem With:
A specific source
### What's Your Problem
The ART Trier (Germany) source is no longer collecting data. It worked until yesterday; I think they have a new homepage.
The calendar is now empty, with only one entry on February 26th: "A.R.T. Wichtiger Hinweis!" (German for "important notice").
The link (https://www.art-trier.de/cms/abfuhrtermine-1002.html) in the description for ART Trier doesn't work anymore; it returns a 404 error page.
Ver. 1.45.1
### Source (if relevant)
art_trier_de
### Logs
```Shell
no relevant logs
```
### Relevant Configuration
```YAML
- name: art_trier_de
args:
district: "Fellerich"
zip_code: "54456"
```
### Checklist Source Error
- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [ ] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
--- END ISSUE ---
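A quick, illustrative way to confirm that the provider stopped serving the old feed. The URL below is assembled by hand from the historic pattern used in the source file further down, with the zip code and district taken from the configuration above; it is only a sketch, not part of the integration:
```python
import requests

# Historic URL pattern: {API_URL}/{zip}_{district}_{day}-{time}.ics
url = "https://www.art-trier.de/ics-feed/54456_fellerich_0-0600.ics"

resp = requests.get(url, timeout=10)
print(resp.status_code)                   # a 404 here matches the failure described above
print(resp.headers.get("Content-Type"))   # a working ICS feed should report text/calendar
```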
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/art_trier_de.py`
Content:
```
1 import contextlib
2 from datetime import datetime
3 from typing import Optional
4 from urllib.parse import quote
5
6 import requests
7 from waste_collection_schedule import Collection # type: ignore[attr-defined]
8 from waste_collection_schedule.service.ICS import ICS
9
10 TITLE = "ART Trier"
11 DESCRIPTION = "Source for waste collection of ART Trier."
12 URL = "https://www.art-trier.de"
13 TEST_CASES = {
14 "Trier": {
15 "zip_code": "54296",
16 "district": "Stadt Trier, Universitätsring",
17 }, # # https://www.art-trier.de/ics-feed/54296_trier_universitaetsring_1-1800.ics
18 "Schweich": {
19 "zip_code": "54338",
20 "district": "Schweich (inkl. Issel)",
21 }, # https://www.art-trier.de/ics-feed/54338_schweich_inkl_issel_1-1800.ics
22 "Dreis": {
23 "zip_code": "54518",
24 "district": "Dreis",
25 }, # https://www.art-trier.de/ics-feed/54518_dreis_1-1800.ics
26 "Wittlich Marktplatz": {
27 "zip_code": "54516",
28 "district": "Wittlich, Marktplatz",
29 }, # https://www.art-trier.de/ics-feed/54516_wittlich_marktplatz_1-1800.ics
30 "Wittlich Wengerohr": {
31 "zip_code": "54516",
32 "district": "Wittlich-Wengerohr",
33 }, # https://www.art-trier.de/ics-feed/54516_wittlich%2Dwengerohr_1-1800.ics
34 }
35
36 API_URL = "https://www.art-trier.de/ics-feed"
37 REMINDER_DAY = (
38 "0" # The calendar event should be on the same day as the waste collection
39 )
40 REMINDER_TIME = "0600" # The calendar event should start on any hour of the correct day, so this does not matter much
41 ICON_MAP = {
42 "Altpapier": "mdi:package-variant",
43 "Restmüll": "mdi:trash-can",
44 "Gelber Sack": "mdi:recycle",
45 }
46 SPECIAL_CHARS = str.maketrans(
47 {
48 " ": "_",
49 "ä": "ae",
50 "ü": "ue",
51 "ö": "oe",
52 "ß": "ss",
53 "(": None,
54 ")": None,
55 ",": None,
56 ".": None,
57 }
58 )
59
60
61 class Source:
62 def __init__(self, district: str, zip_code: str):
63 self._district = quote(
64 district.lower().removeprefix("stadt ").translate(SPECIAL_CHARS).strip()
65 )
66 self._zip_code = zip_code
67 self._ics = ICS(regex=r"^A.R.T. Abfuhrtermin: (.*)", split_at=r" & ")
68
69 def fetch(self):
70 url = f"{API_URL}/{self._zip_code}_{self._district}_{REMINDER_DAY}-{REMINDER_TIME}.ics"
71
72 res = requests.get(url)
73 res.raise_for_status()
74
75 schedule = self._ics.convert(res.text)
76
77 return [
78 Collection(date=entry[0], t=entry[1], icon=ICON_MAP.get(entry[1]))
79 for entry in schedule
80 ]
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/art_trier_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/art_trier_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/art_trier_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/art_trier_de.py
@@ -1,13 +1,11 @@
-import contextlib
-from datetime import datetime
-from typing import Optional
+import logging
from urllib.parse import quote
import requests
from waste_collection_schedule import Collection # type: ignore[attr-defined]
from waste_collection_schedule.service.ICS import ICS
-TITLE = "ART Trier"
+TITLE = "ART Trier (Depreciated)"
DESCRIPTION = "Source for waste collection of ART Trier."
URL = "https://www.art-trier.de"
TEST_CASES = {
@@ -56,6 +54,7 @@
".": None,
}
)
+LOGGER = logging.getLogger(__name__)
class Source:
@@ -67,7 +66,11 @@
self._ics = ICS(regex=r"^A.R.T. Abfuhrtermin: (.*)", split_at=r" & ")
def fetch(self):
- url = f"{API_URL}/{self._zip_code}_{self._district}_{REMINDER_DAY}-{REMINDER_TIME}.ics"
+ LOGGER.warning(
+ "The ART Trier source is deprecated and might not work with all addresses anymore."
+ " Please use the ICS instead: https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/ics/art_trier_de.md"
+ )
+ url = f"{API_URL}/{self._zip_code}:{self._district}::@{REMINDER_DAY}-{REMINDER_TIME}.ics"
res = requests.get(url)
res.raise_for_status()
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/art_trier_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/art_trier_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/art_trier_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/art_trier_de.py\n@@ -1,13 +1,11 @@\n-import contextlib\n-from datetime import datetime\n-from typing import Optional\n+import logging\n from urllib.parse import quote\n \n import requests\n from waste_collection_schedule import Collection # type: ignore[attr-defined]\n from waste_collection_schedule.service.ICS import ICS\n \n-TITLE = \"ART Trier\"\n+TITLE = \"ART Trier (Depreciated)\"\n DESCRIPTION = \"Source for waste collection of ART Trier.\"\n URL = \"https://www.art-trier.de\"\n TEST_CASES = {\n@@ -56,6 +54,7 @@\n \".\": None,\n }\n )\n+LOGGER = logging.getLogger(__name__)\n \n \n class Source:\n@@ -67,7 +66,11 @@\n self._ics = ICS(regex=r\"^A.R.T. Abfuhrtermin: (.*)\", split_at=r\" & \")\n \n def fetch(self):\n- url = f\"{API_URL}/{self._zip_code}_{self._district}_{REMINDER_DAY}-{REMINDER_TIME}.ics\"\n+ LOGGER.warning(\n+ \"The ART Trier source is deprecated and might not work with all addresses anymore.\"\n+ \" Please use the ICS instead: https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/ics/art_trier_de.md\"\n+ )\n+ url = f\"{API_URL}/{self._zip_code}:{self._district}::@{REMINDER_DAY}-{REMINDER_TIME}.ics\"\n \n res = requests.get(url)\n res.raise_for_status()\n", "issue": "[Bug]: ART Trier Germany collecting no more Data\n### I Have A Problem With:\n\nA specific source\n\n### What's Your Problem\n\nART Trier Germany collecting no more Data. It worked till yesterday. I think they have a new homepage.\r\nThe Calender is now empty, only one Entry on February 26th: A.R.T. Wichtiger Hinweis!\r\nThe link (https://www.art-trier.de/cms/abfuhrtermine-1002.html) in the Description for ART Trier doesn't work anymore. Get a 404 Error Page.\r\n\r\nVer. 
1.45.1\n\n### Source (if relevant)\n\nart_trier_de\n\n### Logs\n\n```Shell\nno relevant logs\n```\n\n\n### Relevant Configuration\n\n```YAML\n- name: art_trier_de\r\n args:\r\n district: \"Fellerich\"\r\n zip_code: \"54456\"\n```\n\n\n### Checklist Source Error\n\n- [ ] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [X] Checked that the website of your service provider is still working\n- [ ] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "import contextlib\nfrom datetime import datetime\nfrom typing import Optional\nfrom urllib.parse import quote\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\nTITLE = \"ART Trier\"\nDESCRIPTION = \"Source for waste collection of ART Trier.\"\nURL = \"https://www.art-trier.de\"\nTEST_CASES = {\n \"Trier\": {\n \"zip_code\": \"54296\",\n \"district\": \"Stadt Trier, Universit\u00e4tsring\",\n }, # # https://www.art-trier.de/ics-feed/54296_trier_universitaetsring_1-1800.ics\n \"Schweich\": {\n \"zip_code\": \"54338\",\n \"district\": \"Schweich (inkl. Issel)\",\n }, # https://www.art-trier.de/ics-feed/54338_schweich_inkl_issel_1-1800.ics\n \"Dreis\": {\n \"zip_code\": \"54518\",\n \"district\": \"Dreis\",\n }, # https://www.art-trier.de/ics-feed/54518_dreis_1-1800.ics\n \"Wittlich Marktplatz\": {\n \"zip_code\": \"54516\",\n \"district\": \"Wittlich, Marktplatz\",\n }, # https://www.art-trier.de/ics-feed/54516_wittlich_marktplatz_1-1800.ics\n \"Wittlich Wengerohr\": {\n \"zip_code\": \"54516\",\n \"district\": \"Wittlich-Wengerohr\",\n }, # https://www.art-trier.de/ics-feed/54516_wittlich%2Dwengerohr_1-1800.ics\n}\n\nAPI_URL = \"https://www.art-trier.de/ics-feed\"\nREMINDER_DAY = (\n \"0\" # The calendar event should be on the same day as the waste collection\n)\nREMINDER_TIME = \"0600\" # The calendar event should start on any hour of the correct day, so this does not matter much\nICON_MAP = {\n \"Altpapier\": \"mdi:package-variant\",\n \"Restm\u00fcll\": \"mdi:trash-can\",\n \"Gelber Sack\": \"mdi:recycle\",\n}\nSPECIAL_CHARS = str.maketrans(\n {\n \" \": \"_\",\n \"\u00e4\": \"ae\",\n \"\u00fc\": \"ue\",\n \"\u00f6\": \"oe\",\n \"\u00df\": \"ss\",\n \"(\": None,\n \")\": None,\n \",\": None,\n \".\": None,\n }\n)\n\n\nclass Source:\n def __init__(self, district: str, zip_code: str):\n self._district = quote(\n district.lower().removeprefix(\"stadt \").translate(SPECIAL_CHARS).strip()\n )\n self._zip_code = zip_code\n self._ics = ICS(regex=r\"^A.R.T. 
Abfuhrtermin: (.*)\", split_at=r\" & \")\n\n def fetch(self):\n url = f\"{API_URL}/{self._zip_code}_{self._district}_{REMINDER_DAY}-{REMINDER_TIME}.ics\"\n\n res = requests.get(url)\n res.raise_for_status()\n\n schedule = self._ics.convert(res.text)\n\n return [\n Collection(date=entry[0], t=entry[1], icon=ICON_MAP.get(entry[1]))\n for entry in schedule\n ]\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/art_trier_de.py"}], "after_files": [{"content": "import logging\nfrom urllib.parse import quote\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\nTITLE = \"ART Trier (Depreciated)\"\nDESCRIPTION = \"Source for waste collection of ART Trier.\"\nURL = \"https://www.art-trier.de\"\nTEST_CASES = {\n \"Trier\": {\n \"zip_code\": \"54296\",\n \"district\": \"Stadt Trier, Universit\u00e4tsring\",\n }, # # https://www.art-trier.de/ics-feed/54296_trier_universitaetsring_1-1800.ics\n \"Schweich\": {\n \"zip_code\": \"54338\",\n \"district\": \"Schweich (inkl. Issel)\",\n }, # https://www.art-trier.de/ics-feed/54338_schweich_inkl_issel_1-1800.ics\n \"Dreis\": {\n \"zip_code\": \"54518\",\n \"district\": \"Dreis\",\n }, # https://www.art-trier.de/ics-feed/54518_dreis_1-1800.ics\n \"Wittlich Marktplatz\": {\n \"zip_code\": \"54516\",\n \"district\": \"Wittlich, Marktplatz\",\n }, # https://www.art-trier.de/ics-feed/54516_wittlich_marktplatz_1-1800.ics\n \"Wittlich Wengerohr\": {\n \"zip_code\": \"54516\",\n \"district\": \"Wittlich-Wengerohr\",\n }, # https://www.art-trier.de/ics-feed/54516_wittlich%2Dwengerohr_1-1800.ics\n}\n\nAPI_URL = \"https://www.art-trier.de/ics-feed\"\nREMINDER_DAY = (\n \"0\" # The calendar event should be on the same day as the waste collection\n)\nREMINDER_TIME = \"0600\" # The calendar event should start on any hour of the correct day, so this does not matter much\nICON_MAP = {\n \"Altpapier\": \"mdi:package-variant\",\n \"Restm\u00fcll\": \"mdi:trash-can\",\n \"Gelber Sack\": \"mdi:recycle\",\n}\nSPECIAL_CHARS = str.maketrans(\n {\n \" \": \"_\",\n \"\u00e4\": \"ae\",\n \"\u00fc\": \"ue\",\n \"\u00f6\": \"oe\",\n \"\u00df\": \"ss\",\n \"(\": None,\n \")\": None,\n \",\": None,\n \".\": None,\n }\n)\nLOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n def __init__(self, district: str, zip_code: str):\n self._district = quote(\n district.lower().removeprefix(\"stadt \").translate(SPECIAL_CHARS).strip()\n )\n self._zip_code = zip_code\n self._ics = ICS(regex=r\"^A.R.T. Abfuhrtermin: (.*)\", split_at=r\" & \")\n\n def fetch(self):\n LOGGER.warning(\n \"The ART Trier source is deprecated and might not work with all addresses anymore.\"\n \" Please use the ICS instead: https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/ics/art_trier_de.md\"\n )\n url = f\"{API_URL}/{self._zip_code}:{self._district}::@{REMINDER_DAY}-{REMINDER_TIME}.ics\"\n\n res = requests.get(url)\n res.raise_for_status()\n\n schedule = self._ics.convert(res.text)\n\n return [\n Collection(date=entry[0], t=entry[1], icon=ICON_MAP.get(entry[1]))\n for entry in schedule\n ]\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/art_trier_de.py"}]} | 1,625 | 402 |
gh_patches_debug_9267 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1480 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
random.shuffle's random= argument got deprecated
Related issue: [bpo-40465](https://bugs.python.org/issue40465).
```
black..................................................................../home/isidentical/.venv/lib/python3.10/site-packages/pre_commit/languages/helpers.py:95: DeprecationWarning: The *random* parameter to shuffle() has been deprecated
since Python 3.9 and will be removed in a subsequent version.
random.shuffle(seq, random=fixed_random.random)
Passed
```
--- END ISSUE ---
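For context, a deterministic shuffle does not need the deprecated random= parameter at all; a seeded random.Random instance can shuffle directly. A minimal standalone sketch (the file names are invented, the seed is the one from the code below):
```python
import random

files = ["c.py", "a.py", "b.py"]

# Old pattern (deprecated since Python 3.9, later removed): pass a custom RNG via random=
#   random.shuffle(files, random=fixed_random.random)

# Replacement used by the fix further down: let a seeded Random instance shuffle directly.
fixed_random = random.Random()
fixed_random.seed(1542676186, version=1)
fixed_random.shuffle(files)
print(files)  # same order on every run, because the seed is fixed
```
Random.shuffle() draws numbers differently from the old random= code path, so the resulting permutation is not byte-for-byte identical, which may be why the patch below also bumps FIXED_RANDOM_SEED.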
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/helpers.py`
Content:
```
1 import multiprocessing
2 import os
3 import random
4 from typing import Any
5 from typing import List
6 from typing import Optional
7 from typing import overload
8 from typing import Sequence
9 from typing import Tuple
10 from typing import TYPE_CHECKING
11
12 import pre_commit.constants as C
13 from pre_commit.hook import Hook
14 from pre_commit.prefix import Prefix
15 from pre_commit.util import cmd_output_b
16 from pre_commit.xargs import xargs
17
18 if TYPE_CHECKING:
19 from typing import NoReturn
20
21 FIXED_RANDOM_SEED = 1542676186
22
23
24 def run_setup_cmd(prefix: Prefix, cmd: Tuple[str, ...]) -> None:
25 cmd_output_b(*cmd, cwd=prefix.prefix_dir)
26
27
28 @overload
29 def environment_dir(d: None, language_version: str) -> None: ...
30 @overload
31 def environment_dir(d: str, language_version: str) -> str: ...
32
33
34 def environment_dir(d: Optional[str], language_version: str) -> Optional[str]:
35 if d is None:
36 return None
37 else:
38 return f'{d}-{language_version}'
39
40
41 def assert_version_default(binary: str, version: str) -> None:
42 if version != C.DEFAULT:
43 raise AssertionError(
44 f'For now, pre-commit requires system-installed {binary}',
45 )
46
47
48 def assert_no_additional_deps(
49 lang: str,
50 additional_deps: Sequence[str],
51 ) -> None:
52 if additional_deps:
53 raise AssertionError(
54 f'For now, pre-commit does not support '
55 f'additional_dependencies for {lang}',
56 )
57
58
59 def basic_get_default_version() -> str:
60 return C.DEFAULT
61
62
63 def basic_healthy(prefix: Prefix, language_version: str) -> bool:
64 return True
65
66
67 def no_install(
68 prefix: Prefix,
69 version: str,
70 additional_dependencies: Sequence[str],
71 ) -> 'NoReturn':
72 raise AssertionError('This type is not installable')
73
74
75 def target_concurrency(hook: Hook) -> int:
76 if hook.require_serial or 'PRE_COMMIT_NO_CONCURRENCY' in os.environ:
77 return 1
78 else:
79 # Travis appears to have a bunch of CPUs, but we can't use them all.
80 if 'TRAVIS' in os.environ:
81 return 2
82 else:
83 try:
84 return multiprocessing.cpu_count()
85 except NotImplementedError:
86 return 1
87
88
89 def _shuffled(seq: Sequence[str]) -> List[str]:
90 """Deterministically shuffle"""
91 fixed_random = random.Random()
92 fixed_random.seed(FIXED_RANDOM_SEED, version=1)
93
94 seq = list(seq)
95 random.shuffle(seq, random=fixed_random.random)
96 return seq
97
98
99 def run_xargs(
100 hook: Hook,
101 cmd: Tuple[str, ...],
102 file_args: Sequence[str],
103 **kwargs: Any,
104 ) -> Tuple[int, bytes]:
105 # Shuffle the files so that they more evenly fill out the xargs partitions,
106 # but do it deterministically in case a hook cares about ordering.
107 file_args = _shuffled(file_args)
108 kwargs['target_concurrency'] = target_concurrency(hook)
109 return xargs(cmd, file_args, **kwargs)
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/languages/helpers.py b/pre_commit/languages/helpers.py
--- a/pre_commit/languages/helpers.py
+++ b/pre_commit/languages/helpers.py
@@ -18,7 +18,7 @@
if TYPE_CHECKING:
from typing import NoReturn
-FIXED_RANDOM_SEED = 1542676186
+FIXED_RANDOM_SEED = 1542676187
def run_setup_cmd(prefix: Prefix, cmd: Tuple[str, ...]) -> None:
@@ -92,7 +92,7 @@
fixed_random.seed(FIXED_RANDOM_SEED, version=1)
seq = list(seq)
- random.shuffle(seq, random=fixed_random.random)
+ fixed_random.shuffle(seq)
return seq
| {"golden_diff": "diff --git a/pre_commit/languages/helpers.py b/pre_commit/languages/helpers.py\n--- a/pre_commit/languages/helpers.py\n+++ b/pre_commit/languages/helpers.py\n@@ -18,7 +18,7 @@\n if TYPE_CHECKING:\n from typing import NoReturn\n \n-FIXED_RANDOM_SEED = 1542676186\n+FIXED_RANDOM_SEED = 1542676187\n \n \n def run_setup_cmd(prefix: Prefix, cmd: Tuple[str, ...]) -> None:\n@@ -92,7 +92,7 @@\n fixed_random.seed(FIXED_RANDOM_SEED, version=1)\n \n seq = list(seq)\n- random.shuffle(seq, random=fixed_random.random)\n+ fixed_random.shuffle(seq)\n return seq\n", "issue": "random.shuffle's random= argument got deprecated\nRelated issue: [bpo-40465](https://bugs.python.org/issue40465).\r\n```\r\nblack..................................................................../home/isidentical/.venv/lib/python3.10/site-packages/pre_commit/languages/helpers.py:95: DeprecationWarning: The *random* parameter to shuffle() has been deprecated\r\nsince Python 3.9 and will be removed in a subsequent version.\r\n random.shuffle(seq, random=fixed_random.random)\r\nPassed\r\n```\r\n\r\n\n", "before_files": [{"content": "import multiprocessing\nimport os\nimport random\nfrom typing import Any\nfrom typing import List\nfrom typing import Optional\nfrom typing import overload\nfrom typing import Sequence\nfrom typing import Tuple\nfrom typing import TYPE_CHECKING\n\nimport pre_commit.constants as C\nfrom pre_commit.hook import Hook\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import cmd_output_b\nfrom pre_commit.xargs import xargs\n\nif TYPE_CHECKING:\n from typing import NoReturn\n\nFIXED_RANDOM_SEED = 1542676186\n\n\ndef run_setup_cmd(prefix: Prefix, cmd: Tuple[str, ...]) -> None:\n cmd_output_b(*cmd, cwd=prefix.prefix_dir)\n\n\n@overload\ndef environment_dir(d: None, language_version: str) -> None: ...\n@overload\ndef environment_dir(d: str, language_version: str) -> str: ...\n\n\ndef environment_dir(d: Optional[str], language_version: str) -> Optional[str]:\n if d is None:\n return None\n else:\n return f'{d}-{language_version}'\n\n\ndef assert_version_default(binary: str, version: str) -> None:\n if version != C.DEFAULT:\n raise AssertionError(\n f'For now, pre-commit requires system-installed {binary}',\n )\n\n\ndef assert_no_additional_deps(\n lang: str,\n additional_deps: Sequence[str],\n) -> None:\n if additional_deps:\n raise AssertionError(\n f'For now, pre-commit does not support '\n f'additional_dependencies for {lang}',\n )\n\n\ndef basic_get_default_version() -> str:\n return C.DEFAULT\n\n\ndef basic_healthy(prefix: Prefix, language_version: str) -> bool:\n return True\n\n\ndef no_install(\n prefix: Prefix,\n version: str,\n additional_dependencies: Sequence[str],\n) -> 'NoReturn':\n raise AssertionError('This type is not installable')\n\n\ndef target_concurrency(hook: Hook) -> int:\n if hook.require_serial or 'PRE_COMMIT_NO_CONCURRENCY' in os.environ:\n return 1\n else:\n # Travis appears to have a bunch of CPUs, but we can't use them all.\n if 'TRAVIS' in os.environ:\n return 2\n else:\n try:\n return multiprocessing.cpu_count()\n except NotImplementedError:\n return 1\n\n\ndef _shuffled(seq: Sequence[str]) -> List[str]:\n \"\"\"Deterministically shuffle\"\"\"\n fixed_random = random.Random()\n fixed_random.seed(FIXED_RANDOM_SEED, version=1)\n\n seq = list(seq)\n random.shuffle(seq, random=fixed_random.random)\n return seq\n\n\ndef run_xargs(\n hook: Hook,\n cmd: Tuple[str, ...],\n file_args: Sequence[str],\n **kwargs: Any,\n) -> Tuple[int, bytes]:\n # Shuffle the files so 
that they more evenly fill out the xargs partitions,\n # but do it deterministically in case a hook cares about ordering.\n file_args = _shuffled(file_args)\n kwargs['target_concurrency'] = target_concurrency(hook)\n return xargs(cmd, file_args, **kwargs)\n", "path": "pre_commit/languages/helpers.py"}], "after_files": [{"content": "import multiprocessing\nimport os\nimport random\nfrom typing import Any\nfrom typing import List\nfrom typing import Optional\nfrom typing import overload\nfrom typing import Sequence\nfrom typing import Tuple\nfrom typing import TYPE_CHECKING\n\nimport pre_commit.constants as C\nfrom pre_commit.hook import Hook\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import cmd_output_b\nfrom pre_commit.xargs import xargs\n\nif TYPE_CHECKING:\n from typing import NoReturn\n\nFIXED_RANDOM_SEED = 1542676187\n\n\ndef run_setup_cmd(prefix: Prefix, cmd: Tuple[str, ...]) -> None:\n cmd_output_b(*cmd, cwd=prefix.prefix_dir)\n\n\n@overload\ndef environment_dir(d: None, language_version: str) -> None: ...\n@overload\ndef environment_dir(d: str, language_version: str) -> str: ...\n\n\ndef environment_dir(d: Optional[str], language_version: str) -> Optional[str]:\n if d is None:\n return None\n else:\n return f'{d}-{language_version}'\n\n\ndef assert_version_default(binary: str, version: str) -> None:\n if version != C.DEFAULT:\n raise AssertionError(\n f'For now, pre-commit requires system-installed {binary}',\n )\n\n\ndef assert_no_additional_deps(\n lang: str,\n additional_deps: Sequence[str],\n) -> None:\n if additional_deps:\n raise AssertionError(\n f'For now, pre-commit does not support '\n f'additional_dependencies for {lang}',\n )\n\n\ndef basic_get_default_version() -> str:\n return C.DEFAULT\n\n\ndef basic_healthy(prefix: Prefix, language_version: str) -> bool:\n return True\n\n\ndef no_install(\n prefix: Prefix,\n version: str,\n additional_dependencies: Sequence[str],\n) -> 'NoReturn':\n raise AssertionError('This type is not installable')\n\n\ndef target_concurrency(hook: Hook) -> int:\n if hook.require_serial or 'PRE_COMMIT_NO_CONCURRENCY' in os.environ:\n return 1\n else:\n # Travis appears to have a bunch of CPUs, but we can't use them all.\n if 'TRAVIS' in os.environ:\n return 2\n else:\n try:\n return multiprocessing.cpu_count()\n except NotImplementedError:\n return 1\n\n\ndef _shuffled(seq: Sequence[str]) -> List[str]:\n \"\"\"Deterministically shuffle\"\"\"\n fixed_random = random.Random()\n fixed_random.seed(FIXED_RANDOM_SEED, version=1)\n\n seq = list(seq)\n fixed_random.shuffle(seq)\n return seq\n\n\ndef run_xargs(\n hook: Hook,\n cmd: Tuple[str, ...],\n file_args: Sequence[str],\n **kwargs: Any,\n) -> Tuple[int, bytes]:\n # Shuffle the files so that they more evenly fill out the xargs partitions,\n # but do it deterministically in case a hook cares about ordering.\n file_args = _shuffled(file_args)\n kwargs['target_concurrency'] = target_concurrency(hook)\n return xargs(cmd, file_args, **kwargs)\n", "path": "pre_commit/languages/helpers.py"}]} | 1,278 | 174 |
gh_patches_debug_54807 | rasdani/github-patches | git_diff | certbot__certbot-311 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Lost Vagrant docs
Commit ce2a6b7c5a3549ce5a1a04fb33d4254f78bd1b1f removed the Vagrant documentation from `CONTRIBUTING.rst`.
--- END ISSUE ---
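The removed section is still recoverable from git history. Below is a small sketch of one way to pull the pre-removal file; it assumes it runs inside a checkout of the repository, the commit hash comes from the report above, and everything else is illustrative:
```python
import subprocess

commit = "ce2a6b7c5a3549ce5a1a04fb33d4254f78bd1b1f"

# CONTRIBUTING.rst as it looked in the parent of the commit that dropped the Vagrant docs.
old_text = subprocess.run(
    ["git", "show", f"{commit}~1:CONTRIBUTING.rst"],
    capture_output=True, text=True, check=True,
).stdout

vagrant_lines = [line for line in old_text.splitlines() if "agrant" in line]
print("\n".join(vagrant_lines))
```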
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Let's Encrypt documentation build configuration file, created by
4 # sphinx-quickstart on Sun Nov 23 20:35:21 2014.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import codecs
16 import os
17 import re
18 import sys
19
20 import mock
21
22
23 # http://docs.readthedocs.org/en/latest/faq.html#i-get-import-errors-on-libraries-that-depend-on-c-modules
24 # c.f. #262
25 sys.modules.update(
26 (mod_name, mock.MagicMock()) for mod_name in ['augeas', 'M2Crypto'])
27
28 here = os.path.abspath(os.path.dirname(__file__))
29
30 # read version number (and other metadata) from package init
31 init_fn = os.path.join(here, '..', 'letsencrypt', '__init__.py')
32 with codecs.open(init_fn, encoding='utf8') as fd:
33 meta = dict(re.findall(r"""__([a-z]+)__ = "([^"]+)""", fd.read()))
34
35 # If extensions (or modules to document with autodoc) are in another directory,
36 # add these directories to sys.path here. If the directory is relative to the
37 # documentation root, use os.path.abspath to make it absolute, like shown here.
38 sys.path.insert(0, os.path.abspath(os.path.join(here, '..')))
39
40 # -- General configuration ------------------------------------------------
41
42 # If your documentation needs a minimal Sphinx version, state it here.
43 #needs_sphinx = '1.0'
44
45 # Add any Sphinx extension module names here, as strings. They can be
46 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
47 # ones.
48 extensions = [
49 'sphinx.ext.autodoc',
50 'sphinx.ext.intersphinx',
51 'sphinx.ext.todo',
52 'sphinx.ext.coverage',
53 'sphinx.ext.viewcode',
54 'repoze.sphinx.autointerface',
55 ]
56
57 autodoc_member_order = 'bysource'
58 autodoc_default_flags = ['show-inheritance', 'private-members']
59
60 # Add any paths that contain templates here, relative to this directory.
61 templates_path = ['_templates']
62
63 # The suffix of source filenames.
64 source_suffix = '.rst'
65
66 # The encoding of source files.
67 #source_encoding = 'utf-8-sig'
68
69 # The master toctree document.
70 master_doc = 'index'
71
72 # General information about the project.
73 project = u'Let\'s Encrypt'
74 copyright = u'2014, Let\'s Encrypt Project'
75
76 # The version info for the project you're documenting, acts as replacement for
77 # |version| and |release|, also used in various other places throughout the
78 # built documents.
79 #
80 # The short X.Y version.
81 version = '.'.join(meta['version'].split('.')[:2])
82 # The full version, including alpha/beta/rc tags.
83 release = meta['version']
84
85 # The language for content autogenerated by Sphinx. Refer to documentation
86 # for a list of supported languages.
87 #
88 # This is also used if you do content translation via gettext catalogs.
89 # Usually you set "language" from the command line for these cases.
90 language = None
91
92 # There are two options for replacing |today|: either, you set today to some
93 # non-false value, then it is used:
94 #today = ''
95 # Else, today_fmt is used as the format for a strftime call.
96 #today_fmt = '%B %d, %Y'
97
98 # List of patterns, relative to source directory, that match files and
99 # directories to ignore when looking for source files.
100 exclude_patterns = ['_build']
101
102 # The reST default role (used for this markup: `text`) to use for all
103 # documents.
104 #default_role = None
105
106 # If true, '()' will be appended to :func: etc. cross-reference text.
107 #add_function_parentheses = True
108
109 # If true, the current module name will be prepended to all description
110 # unit titles (such as .. function::).
111 #add_module_names = True
112
113 # If true, sectionauthor and moduleauthor directives will be shown in the
114 # output. They are ignored by default.
115 #show_authors = False
116
117 # The name of the Pygments (syntax highlighting) style to use.
118 pygments_style = 'sphinx'
119
120 # A list of ignored prefixes for module index sorting.
121 #modindex_common_prefix = []
122
123 # If true, keep warnings as "system message" paragraphs in the built documents.
124 #keep_warnings = False
125
126
127 # -- Options for HTML output ----------------------------------------------
128
129 # The theme to use for HTML and HTML Help pages. See the documentation for
130 # a list of builtin themes.
131
132 # http://docs.readthedocs.org/en/latest/theme.html#how-do-i-use-this-locally-and-on-read-the-docs
133 # on_rtd is whether we are on readthedocs.org
134 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
135 if not on_rtd: # only import and set the theme if we're building docs locally
136 import sphinx_rtd_theme
137 html_theme = 'sphinx_rtd_theme'
138 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
139 # otherwise, readthedocs.org uses their theme by default, so no need to specify it
140
141 # Theme options are theme-specific and customize the look and feel of a theme
142 # further. For a list of options available for each theme, see the
143 # documentation.
144 #html_theme_options = {}
145
146 # Add any paths that contain custom themes here, relative to this directory.
147 #html_theme_path = []
148
149 # The name for this set of Sphinx documents. If None, it defaults to
150 # "<project> v<release> documentation".
151 #html_title = None
152
153 # A shorter title for the navigation bar. Default is the same as html_title.
154 #html_short_title = None
155
156 # The name of an image file (relative to this directory) to place at the top
157 # of the sidebar.
158 #html_logo = None
159
160 # The name of an image file (within the static path) to use as favicon of the
161 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
162 # pixels large.
163 #html_favicon = None
164
165 # Add any paths that contain custom static files (such as style sheets) here,
166 # relative to this directory. They are copied after the builtin static files,
167 # so a file named "default.css" will overwrite the builtin "default.css".
168 html_static_path = ['_static']
169
170 # Add any extra paths that contain custom files (such as robots.txt or
171 # .htaccess) here, relative to this directory. These files are copied
172 # directly to the root of the documentation.
173 #html_extra_path = []
174
175 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
176 # using the given strftime format.
177 #html_last_updated_fmt = '%b %d, %Y'
178
179 # If true, SmartyPants will be used to convert quotes and dashes to
180 # typographically correct entities.
181 #html_use_smartypants = True
182
183 # Custom sidebar templates, maps document names to template names.
184 #html_sidebars = {}
185
186 # Additional templates that should be rendered to pages, maps page names to
187 # template names.
188 #html_additional_pages = {}
189
190 # If false, no module index is generated.
191 #html_domain_indices = True
192
193 # If false, no index is generated.
194 #html_use_index = True
195
196 # If true, the index is split into individual pages for each letter.
197 #html_split_index = False
198
199 # If true, links to the reST sources are added to the pages.
200 #html_show_sourcelink = True
201
202 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
203 #html_show_sphinx = True
204
205 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
206 #html_show_copyright = True
207
208 # If true, an OpenSearch description file will be output, and all pages will
209 # contain a <link> tag referring to it. The value of this option must be the
210 # base URL from which the finished HTML is served.
211 #html_use_opensearch = ''
212
213 # This is the file name suffix for HTML files (e.g. ".xhtml").
214 #html_file_suffix = None
215
216 # Language to be used for generating the HTML full-text search index.
217 # Sphinx supports the following languages:
218 # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
219 # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
220 #html_search_language = 'en'
221
222 # A dictionary with options for the search language support, empty by default.
223 # Now only 'ja' uses this config value
224 #html_search_options = {'type': 'default'}
225
226 # The name of a javascript file (relative to the configuration directory) that
227 # implements a search results scorer. If empty, the default will be used.
228 #html_search_scorer = 'scorer.js'
229
230 # Output file base name for HTML help builder.
231 htmlhelp_basename = 'LetsEncryptdoc'
232
233 # -- Options for LaTeX output ---------------------------------------------
234
235 latex_elements = {
236 # The paper size ('letterpaper' or 'a4paper').
237 #'papersize': 'letterpaper',
238
239 # The font size ('10pt', '11pt' or '12pt').
240 #'pointsize': '10pt',
241
242 # Additional stuff for the LaTeX preamble.
243 #'preamble': '',
244
245 # Latex figure (float) alignment
246 #'figure_align': 'htbp',
247 }
248
249 # Grouping the document tree into LaTeX files. List of tuples
250 # (source start file, target name, title,
251 # author, documentclass [howto, manual, or own class]).
252 latex_documents = [
253 ('index', 'LetsEncrypt.tex', u'Let\'s Encrypt Documentation',
254 u'Let\'s Encrypt Project', 'manual'),
255 ]
256
257 # The name of an image file (relative to this directory) to place at the top of
258 # the title page.
259 #latex_logo = None
260
261 # For "manual" documents, if this is true, then toplevel headings are parts,
262 # not chapters.
263 #latex_use_parts = False
264
265 # If true, show page references after internal links.
266 #latex_show_pagerefs = False
267
268 # If true, show URL addresses after external links.
269 #latex_show_urls = False
270
271 # Documents to append as an appendix to all manuals.
272 #latex_appendices = []
273
274 # If false, no module index is generated.
275 #latex_domain_indices = True
276
277
278 # -- Options for manual page output ---------------------------------------
279
280 # One entry per manual page. List of tuples
281 # (source start file, name, description, authors, manual section).
282 man_pages = [
283 ('index', 'letsencrypt', u'Let\'s Encrypt Documentation',
284 [u'Let\'s Encrypt Project'], 1)
285 ]
286
287 # If true, show URL addresses after external links.
288 #man_show_urls = False
289
290
291 # -- Options for Texinfo output -------------------------------------------
292
293 # Grouping the document tree into Texinfo files. List of tuples
294 # (source start file, target name, title, author,
295 # dir menu entry, description, category)
296 texinfo_documents = [
297 ('index', 'LetsEncrypt', u'Let\'s Encrypt Documentation',
298 u'Let\'s Encrypt Project', 'LetsEncrypt', 'One line description of project.',
299 'Miscellaneous'),
300 ]
301
302 # Documents to append as an appendix to all manuals.
303 #texinfo_appendices = []
304
305 # If false, no module index is generated.
306 #texinfo_domain_indices = True
307
308 # How to display URL addresses: 'footnote', 'no', or 'inline'.
309 #texinfo_show_urls = 'footnote'
310
311 # If true, do not generate a @detailmenu in the "Top" node's menu.
312 #texinfo_no_detailmenu = False
313
314
315 # Example configuration for intersphinx: refer to the Python standard library.
316 intersphinx_mapping = {'http://docs.python.org/': None}
317
318 todo_include_todos = True
319
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -101,7 +101,7 @@
# The reST default role (used for this markup: `text`) to use for all
# documents.
-#default_role = None
+default_role = 'py:obj'
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -101,7 +101,7 @@\n \n # The reST default role (used for this markup: `text`) to use for all\n # documents.\n-#default_role = None\n+default_role = 'py:obj'\n \n # If true, '()' will be appended to :func: etc. cross-reference text.\n #add_function_parentheses = True\n", "issue": "Lost Vagrant docs\nce2a6b7c5a3549ce5a1a04fb33d4254f78bd1b1f removed documentation for Vagrant in `CONTRIBUTING.rst`\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Let's Encrypt documentation build configuration file, created by\n# sphinx-quickstart on Sun Nov 23 20:35:21 2014.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport codecs\nimport os\nimport re\nimport sys\n\nimport mock\n\n\n# http://docs.readthedocs.org/en/latest/faq.html#i-get-import-errors-on-libraries-that-depend-on-c-modules\n# c.f. #262\nsys.modules.update(\n (mod_name, mock.MagicMock()) for mod_name in ['augeas', 'M2Crypto'])\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# read version number (and other metadata) from package init\ninit_fn = os.path.join(here, '..', 'letsencrypt', '__init__.py')\nwith codecs.open(init_fn, encoding='utf8') as fd:\n meta = dict(re.findall(r\"\"\"__([a-z]+)__ = \"([^\"]+)\"\"\", fd.read()))\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath(os.path.join(here, '..')))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.todo',\n 'sphinx.ext.coverage',\n 'sphinx.ext.viewcode',\n 'repoze.sphinx.autointerface',\n]\n\nautodoc_member_order = 'bysource'\nautodoc_default_flags = ['show-inheritance', 'private-members']\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Let\\'s Encrypt'\ncopyright = u'2014, Let\\'s Encrypt Project'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '.'.join(meta['version'].split('.')[:2])\n# The full version, including alpha/beta/rc tags.\nrelease = meta['version']\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n\n# http://docs.readthedocs.org/en/latest/theme.html#how-do-i-use-this-locally-and-on-read-the-docs\n# on_rtd is whether we are on readthedocs.org\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\nif not on_rtd: # only import and set the theme if we're building docs locally\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n# otherwise, readthedocs.org uses their theme by default, so no need to specify it\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. 
These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n#html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n#html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n#html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'LetsEncryptdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n\n# Latex figure (float) alignment\n#'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n ('index', 'LetsEncrypt.tex', u'Let\\'s Encrypt Documentation',\n u'Let\\'s Encrypt Project', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. 
List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'letsencrypt', u'Let\\'s Encrypt Documentation',\n [u'Let\\'s Encrypt Project'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'LetsEncrypt', u'Let\\'s Encrypt Documentation',\n u'Let\\'s Encrypt Project', 'LetsEncrypt', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'http://docs.python.org/': None}\n\ntodo_include_todos = True\n", "path": "docs/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Let's Encrypt documentation build configuration file, created by\n# sphinx-quickstart on Sun Nov 23 20:35:21 2014.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport codecs\nimport os\nimport re\nimport sys\n\nimport mock\n\n\n# http://docs.readthedocs.org/en/latest/faq.html#i-get-import-errors-on-libraries-that-depend-on-c-modules\n# c.f. #262\nsys.modules.update(\n (mod_name, mock.MagicMock()) for mod_name in ['augeas', 'M2Crypto'])\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# read version number (and other metadata) from package init\ninit_fn = os.path.join(here, '..', 'letsencrypt', '__init__.py')\nwith codecs.open(init_fn, encoding='utf8') as fd:\n meta = dict(re.findall(r\"\"\"__([a-z]+)__ = \"([^\"]+)\"\"\", fd.read()))\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath(os.path.join(here, '..')))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.todo',\n 'sphinx.ext.coverage',\n 'sphinx.ext.viewcode',\n 'repoze.sphinx.autointerface',\n]\n\nautodoc_member_order = 'bysource'\nautodoc_default_flags = ['show-inheritance', 'private-members']\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Let\\'s Encrypt'\ncopyright = u'2014, Let\\'s Encrypt Project'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '.'.join(meta['version'].split('.')[:2])\n# The full version, including alpha/beta/rc tags.\nrelease = meta['version']\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\ndefault_role = 'py:obj'\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n\n# http://docs.readthedocs.org/en/latest/theme.html#how-do-i-use-this-locally-and-on-read-the-docs\n# on_rtd is whether we are on readthedocs.org\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\nif not on_rtd: # only import and set the theme if we're building docs locally\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n# otherwise, readthedocs.org uses their theme by default, so no need to specify it\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. 
For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n#html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n#html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. 
If empty, the default will be used.\n#html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'LetsEncryptdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n\n# Latex figure (float) alignment\n#'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n ('index', 'LetsEncrypt.tex', u'Let\\'s Encrypt Documentation',\n u'Let\\'s Encrypt Project', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'letsencrypt', u'Let\\'s Encrypt Documentation',\n [u'Let\\'s Encrypt Project'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'LetsEncrypt', u'Let\\'s Encrypt Documentation',\n u'Let\\'s Encrypt Project', 'LetsEncrypt', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'http://docs.python.org/': None}\n\ntodo_include_todos = True\n", "path": "docs/conf.py"}]} | 3,851 | 107 |
gh_patches_debug_20264 | rasdani/github-patches | git_diff | svthalia__concrexit-3089 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Admin sales shift API should also return total_paid_revenue
### Is your feature request related to a problem? Please describe.
The current admin sales shift api route only gives the total_revenue for a shift, but this might contain unpaid orders. We don't want those in certain scoreboards, like for the rag week.
### Describe the solution you'd like
Add `total_paid_revenue`
### Motivation
### Describe alternatives you've considered
### Additional context
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/sales/api/v2/admin/serializers/shift.py`
Content:
```
1 from rest_framework import serializers
2
3 from sales.models.product import ProductListItem
4 from sales.models.shift import Shift
5
6
7 class ProductListItemSerializer(serializers.ModelSerializer):
8 """Serializer for product list items."""
9
10 class Meta:
11 model = ProductListItem
12 fields = ("name", "price", "age_restricted")
13 read_only_fields = ("name", "price", "age_restricted")
14
15 name = serializers.SerializerMethodField("_name")
16 age_restricted = serializers.SerializerMethodField("_age_restricted")
17
18 def _name(self, instance):
19 return instance.product.name
20
21 def _age_restricted(self, instance):
22 return instance.product.age_restricted
23
24
25 class ShiftSerializer(serializers.ModelSerializer):
26 """Serializer for shifts."""
27
28 class Meta:
29 model = Shift
30 fields = (
31 "pk",
32 "title",
33 "locked",
34 "active",
35 "start",
36 "end",
37 "products",
38 "total_revenue",
39 "num_orders",
40 "product_sales",
41 )
42
43 total_revenue = serializers.DecimalField(
44 max_digits=10, decimal_places=2, min_value=0, read_only=True
45 )
46
47 products = ProductListItemSerializer(
48 source="product_list.product_items", many=True, read_only=True
49 )
50
51 title = serializers.SerializerMethodField("_get_title")
52
53 def _get_title(self, instance):
54 return instance.title
55
56 product_sales = serializers.JSONField()
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/sales/api/v2/admin/serializers/shift.py b/website/sales/api/v2/admin/serializers/shift.py
--- a/website/sales/api/v2/admin/serializers/shift.py
+++ b/website/sales/api/v2/admin/serializers/shift.py
@@ -1,5 +1,6 @@
from rest_framework import serializers
+from payments.api.v2.serializers.payment_amount import PaymentAmountSerializer
from sales.models.product import ProductListItem
from sales.models.shift import Shift
@@ -36,13 +37,13 @@
"end",
"products",
"total_revenue",
+ "total_revenue_paid",
"num_orders",
"product_sales",
)
- total_revenue = serializers.DecimalField(
- max_digits=10, decimal_places=2, min_value=0, read_only=True
- )
+ total_revenue = PaymentAmountSerializer(min_value=0, read_only=True)
+ total_revenue_paid = PaymentAmountSerializer(min_value=0, read_only=True)
products = ProductListItemSerializer(
source="product_list.product_items", many=True, read_only=True
| {"golden_diff": "diff --git a/website/sales/api/v2/admin/serializers/shift.py b/website/sales/api/v2/admin/serializers/shift.py\n--- a/website/sales/api/v2/admin/serializers/shift.py\n+++ b/website/sales/api/v2/admin/serializers/shift.py\n@@ -1,5 +1,6 @@\n from rest_framework import serializers\n \n+from payments.api.v2.serializers.payment_amount import PaymentAmountSerializer\n from sales.models.product import ProductListItem\n from sales.models.shift import Shift\n \n@@ -36,13 +37,13 @@\n \"end\",\n \"products\",\n \"total_revenue\",\n+ \"total_revenue_paid\",\n \"num_orders\",\n \"product_sales\",\n )\n \n- total_revenue = serializers.DecimalField(\n- max_digits=10, decimal_places=2, min_value=0, read_only=True\n- )\n+ total_revenue = PaymentAmountSerializer(min_value=0, read_only=True)\n+ total_revenue_paid = PaymentAmountSerializer(min_value=0, read_only=True)\n \n products = ProductListItemSerializer(\n source=\"product_list.product_items\", many=True, read_only=True\n", "issue": "Admin sales shift API should also return total_paid_revenue\n### Is your feature request related to a problem? Please describe.\r\nThe current admin sales shift api route only gives the total_revenue for a shift, but this might contain unpaid orders. We don't want those in certain scoreboards, like for the rag week.\r\n\r\n### Describe the solution you'd like\r\nAdd `total_paid_revenue`\r\n\r\n### Motivation\r\n\r\n### Describe alternatives you've considered\r\n\r\n### Additional context\r\n\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom sales.models.product import ProductListItem\nfrom sales.models.shift import Shift\n\n\nclass ProductListItemSerializer(serializers.ModelSerializer):\n \"\"\"Serializer for product list items.\"\"\"\n\n class Meta:\n model = ProductListItem\n fields = (\"name\", \"price\", \"age_restricted\")\n read_only_fields = (\"name\", \"price\", \"age_restricted\")\n\n name = serializers.SerializerMethodField(\"_name\")\n age_restricted = serializers.SerializerMethodField(\"_age_restricted\")\n\n def _name(self, instance):\n return instance.product.name\n\n def _age_restricted(self, instance):\n return instance.product.age_restricted\n\n\nclass ShiftSerializer(serializers.ModelSerializer):\n \"\"\"Serializer for shifts.\"\"\"\n\n class Meta:\n model = Shift\n fields = (\n \"pk\",\n \"title\",\n \"locked\",\n \"active\",\n \"start\",\n \"end\",\n \"products\",\n \"total_revenue\",\n \"num_orders\",\n \"product_sales\",\n )\n\n total_revenue = serializers.DecimalField(\n max_digits=10, decimal_places=2, min_value=0, read_only=True\n )\n\n products = ProductListItemSerializer(\n source=\"product_list.product_items\", many=True, read_only=True\n )\n\n title = serializers.SerializerMethodField(\"_get_title\")\n\n def _get_title(self, instance):\n return instance.title\n\n product_sales = serializers.JSONField()\n", "path": "website/sales/api/v2/admin/serializers/shift.py"}], "after_files": [{"content": "from rest_framework import serializers\n\nfrom payments.api.v2.serializers.payment_amount import PaymentAmountSerializer\nfrom sales.models.product import ProductListItem\nfrom sales.models.shift import Shift\n\n\nclass ProductListItemSerializer(serializers.ModelSerializer):\n \"\"\"Serializer for product list items.\"\"\"\n\n class Meta:\n model = ProductListItem\n fields = (\"name\", \"price\", \"age_restricted\")\n read_only_fields = (\"name\", \"price\", \"age_restricted\")\n\n name = serializers.SerializerMethodField(\"_name\")\n age_restricted = 
serializers.SerializerMethodField(\"_age_restricted\")\n\n def _name(self, instance):\n return instance.product.name\n\n def _age_restricted(self, instance):\n return instance.product.age_restricted\n\n\nclass ShiftSerializer(serializers.ModelSerializer):\n \"\"\"Serializer for shifts.\"\"\"\n\n class Meta:\n model = Shift\n fields = (\n \"pk\",\n \"title\",\n \"locked\",\n \"active\",\n \"start\",\n \"end\",\n \"products\",\n \"total_revenue\",\n \"total_revenue_paid\",\n \"num_orders\",\n \"product_sales\",\n )\n\n total_revenue = PaymentAmountSerializer(min_value=0, read_only=True)\n total_revenue_paid = PaymentAmountSerializer(min_value=0, read_only=True)\n\n products = ProductListItemSerializer(\n source=\"product_list.product_items\", many=True, read_only=True\n )\n\n title = serializers.SerializerMethodField(\"_get_title\")\n\n def _get_title(self, instance):\n return instance.title\n\n product_sales = serializers.JSONField()\n", "path": "website/sales/api/v2/admin/serializers/shift.py"}]} | 775 | 257 |
gh_patches_debug_20662 | rasdani/github-patches | git_diff | lightly-ai__lightly-758 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unusual error on lightly-1.2.12
Getting this error:
```
Epoch 19: 100% 430/430 [17:42<00:00, 2.47s/it, loss=2.05, v_num=0]
Best model is stored at: /content/lightly_outputs/2022-04-04/12-01-48/lightly_epoch_18.ckpt
########## Starting to embed your dataset.
Error executing job with overrides: ['token=min', 'dataset_id=mine', 'input_dir=/content/drive/MyDrive/data/mine/', 'trainer.max_epochs=20']
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/lightly/cli/lightly_cli.py", line 114, in lightly_cli
return _lightly_cli(cfg)
File "/usr/local/lib/python3.7/dist-packages/lightly/cli/lightly_cli.py", line 60, in _lightly_cli
embeddings = _embed_cli(cfg, is_cli_call)
File "/usr/local/lib/python3.7/dist-packages/lightly/cli/embed_cli.py", line 83, in _embed_cli
embeddings, labels, filenames = encoder.embed(dataloader, device=device)
File "/usr/local/lib/python3.7/dist-packages/lightly/embedding/embedding.py", line 113, in embed
total=len(dataloader.dataset),
AttributeError: 'BackgroundGenerator' object has no attribute 'dataset'
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
```
There are jpgs in `/content/drive/MyDrive/data/mine/`
Token/dataset_ide correct
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lightly/embedding/embedding.py`
Content:
```
1 """ Embedding Strategies """
2
3 # Copyright (c) 2020. Lightly AG and its affiliates.
4 # All Rights Reserved
5
6 import time
7 from typing import List, Union, Tuple
8
9 import numpy as np
10 import torch
11 import lightly
12 from lightly.embedding._base import BaseEmbedding
13 from tqdm import tqdm
14
15 from lightly.utils.reordering import sort_items_by_keys
16
17 if lightly._is_prefetch_generator_available():
18 from prefetch_generator import BackgroundGenerator
19
20
21 class SelfSupervisedEmbedding(BaseEmbedding):
22 """Implementation of self-supervised embedding models.
23
24 Implements an embedding strategy based on self-supervised learning. A
25 model backbone, self-supervised criterion, optimizer, and dataloader are
26 passed to the constructor. The embedding itself is a pytorch-lightning
27 module.
28
29 The implementation is based on contrastive learning.
30
31 * SimCLR: https://arxiv.org/abs/2002.05709
32 * MoCo: https://arxiv.org/abs/1911.05722
33 * SimSiam: https://arxiv.org/abs/2011.10566
34
35 Attributes:
36 model:
37 A backbone convolutional network with a projection head.
38 criterion:
39 A contrastive loss function.
40 optimizer:
41 A PyTorch optimizer.
42 dataloader:
43 A torchvision dataloader.
44 scheduler:
45 A PyTorch learning rate scheduler.
46
47 Examples:
48 >>> # define a model, criterion, optimizer, and dataloader above
49 >>> import lightly.embedding as embedding
50 >>> encoder = SelfSupervisedEmbedding(
51 >>> model,
52 >>> criterion,
53 >>> optimizer,
54 >>> dataloader,
55 >>> )
56 >>> # train the self-supervised embedding with default settings
57 >>> encoder.train_embedding()
58 >>> # pass pytorch-lightning trainer arguments as kwargs
59 >>> encoder.train_embedding(max_epochs=10)
60
61 """
62
63 def __init__(
64 self,
65 model: torch.nn.Module,
66 criterion: torch.nn.Module,
67 optimizer: torch.optim.Optimizer,
68 dataloader: torch.utils.data.DataLoader,
69 scheduler=None,
70 ):
71
72 super(SelfSupervisedEmbedding, self).__init__(
73 model, criterion, optimizer, dataloader, scheduler
74 )
75
76 def embed(self,
77 dataloader: torch.utils.data.DataLoader,
78 device: torch.device = None
79 ) -> Tuple[np.ndarray, np.ndarray, List[str]]:
80 """Embeds images in a vector space.
81
82 Args:
83 dataloader:
84 A PyTorch dataloader.
85 device:
86 Selected device (`cpu`, `cuda`, see PyTorch documentation)
87
88 Returns:
89 Tuple of (embeddings, labels, filenames) ordered by the
90 samples in the dataset of the dataloader.
91 embeddings:
92 Embedding of shape (n_samples, embedding_feature_size).
93 One embedding for each sample.
94 labels:
95 Labels of shape (n_samples, ).
96 filenames:
97 The filenames from dataloader.dataset.get_filenames().
98
99
100 Examples:
101 >>> # embed images in vector space
102 >>> embeddings, labels, fnames = encoder.embed(dataloader)
103
104 """
105
106 self.model.eval()
107 embeddings, labels, filenames = None, None, []
108
109 if lightly._is_prefetch_generator_available():
110 dataloader = BackgroundGenerator(dataloader, max_prefetch=3)
111
112 pbar = tqdm(
113 total=len(dataloader.dataset),
114 unit='imgs'
115 )
116
117 efficiency = 0.0
118 embeddings = []
119 labels = []
120 with torch.no_grad():
121
122 start_timepoint = time.time()
123 for (image_batch, label_batch, filename_batch) in dataloader:
124
125 batch_size = image_batch.shape[0]
126
127 # the following 2 lines are needed to prevent a file handler leak,
128 # see https://github.com/lightly-ai/lightly/pull/676
129 image_batch = image_batch.to(device)
130 label_batch = label_batch.clone()
131
132 filenames += [*filename_batch]
133
134 prepared_timepoint = time.time()
135
136 embedding_batch = self.model.backbone(image_batch)
137 embedding_batch = embedding_batch.detach().reshape(batch_size, -1)
138
139 embeddings.append(embedding_batch)
140 labels.append(label_batch)
141
142 finished_timepoint = time.time()
143
144 data_loading_time = prepared_timepoint - start_timepoint
145 inference_time = finished_timepoint - prepared_timepoint
146 total_batch_time = data_loading_time + inference_time
147
148 efficiency = inference_time / total_batch_time
149 pbar.set_description("Compute efficiency: {:.2f}".format(efficiency))
150 start_timepoint = time.time()
151
152 pbar.update(batch_size)
153
154 embeddings = torch.cat(embeddings, 0)
155 labels = torch.cat(labels, 0)
156
157 embeddings = embeddings.cpu().numpy()
158 labels = labels.cpu().numpy()
159
160 sorted_filenames = dataloader.dataset.get_filenames()
161 sorted_embeddings = sort_items_by_keys(
162 filenames, embeddings, sorted_filenames
163 )
164 sorted_labels = sort_items_by_keys(
165 filenames, labels, sorted_filenames
166 )
167 embeddings = np.stack(sorted_embeddings)
168 labels = np.stack(sorted_labels)
169
170 return embeddings, labels, sorted_filenames
171
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lightly/embedding/embedding.py b/lightly/embedding/embedding.py
--- a/lightly/embedding/embedding.py
+++ b/lightly/embedding/embedding.py
@@ -106,11 +106,12 @@
self.model.eval()
embeddings, labels, filenames = None, None, []
+ dataset = dataloader.dataset
if lightly._is_prefetch_generator_available():
dataloader = BackgroundGenerator(dataloader, max_prefetch=3)
pbar = tqdm(
- total=len(dataloader.dataset),
+ total=len(dataset),
unit='imgs'
)
@@ -157,7 +158,7 @@
embeddings = embeddings.cpu().numpy()
labels = labels.cpu().numpy()
- sorted_filenames = dataloader.dataset.get_filenames()
+ sorted_filenames = dataset.get_filenames()
sorted_embeddings = sort_items_by_keys(
filenames, embeddings, sorted_filenames
)
| {"golden_diff": "diff --git a/lightly/embedding/embedding.py b/lightly/embedding/embedding.py\n--- a/lightly/embedding/embedding.py\n+++ b/lightly/embedding/embedding.py\n@@ -106,11 +106,12 @@\n self.model.eval()\n embeddings, labels, filenames = None, None, []\n \n+ dataset = dataloader.dataset\n if lightly._is_prefetch_generator_available():\n dataloader = BackgroundGenerator(dataloader, max_prefetch=3)\n \n pbar = tqdm(\n- total=len(dataloader.dataset),\n+ total=len(dataset),\n unit='imgs'\n )\n \n@@ -157,7 +158,7 @@\n embeddings = embeddings.cpu().numpy()\n labels = labels.cpu().numpy()\n \n- sorted_filenames = dataloader.dataset.get_filenames()\n+ sorted_filenames = dataset.get_filenames()\n sorted_embeddings = sort_items_by_keys(\n filenames, embeddings, sorted_filenames\n )\n", "issue": "Unusual error on lightly-1.2.12\nGetting this error:\r\n\r\n```\r\nEpoch 19: 100% 430/430 [17:42<00:00, 2.47s/it, loss=2.05, v_num=0]\r\nBest model is stored at: /content/lightly_outputs/2022-04-04/12-01-48/lightly_epoch_18.ckpt\r\n########## Starting to embed your dataset.\r\n\r\nError executing job with overrides: ['token=min', 'dataset_id=mine', 'input_dir=/content/drive/MyDrive/data/mine/', 'trainer.max_epochs=20']\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/dist-packages/lightly/cli/lightly_cli.py\", line 114, in lightly_cli\r\n return _lightly_cli(cfg)\r\n File \"/usr/local/lib/python3.7/dist-packages/lightly/cli/lightly_cli.py\", line 60, in _lightly_cli\r\n embeddings = _embed_cli(cfg, is_cli_call)\r\n File \"/usr/local/lib/python3.7/dist-packages/lightly/cli/embed_cli.py\", line 83, in _embed_cli\r\n embeddings, labels, filenames = encoder.embed(dataloader, device=device)\r\n File \"/usr/local/lib/python3.7/dist-packages/lightly/embedding/embedding.py\", line 113, in embed\r\n total=len(dataloader.dataset),\r\nAttributeError: 'BackgroundGenerator' object has no attribute 'dataset'\r\n\r\nSet the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.\r\n```\r\n\r\nThere are jpgs in `/content/drive/MyDrive/data/mine/`\r\nToken/dataset_ide correct\n", "before_files": [{"content": "\"\"\" Embedding Strategies \"\"\"\n\n# Copyright (c) 2020. Lightly AG and its affiliates.\n# All Rights Reserved\n\nimport time\nfrom typing import List, Union, Tuple\n\nimport numpy as np\nimport torch\nimport lightly\nfrom lightly.embedding._base import BaseEmbedding\nfrom tqdm import tqdm\n\nfrom lightly.utils.reordering import sort_items_by_keys\n\nif lightly._is_prefetch_generator_available():\n from prefetch_generator import BackgroundGenerator\n\n\nclass SelfSupervisedEmbedding(BaseEmbedding):\n \"\"\"Implementation of self-supervised embedding models.\n\n Implements an embedding strategy based on self-supervised learning. A\n model backbone, self-supervised criterion, optimizer, and dataloader are\n passed to the constructor. 
The embedding itself is a pytorch-lightning\n module.\n\n The implementation is based on contrastive learning.\n\n * SimCLR: https://arxiv.org/abs/2002.05709\n * MoCo: https://arxiv.org/abs/1911.05722\n * SimSiam: https://arxiv.org/abs/2011.10566\n\n Attributes:\n model:\n A backbone convolutional network with a projection head.\n criterion:\n A contrastive loss function.\n optimizer:\n A PyTorch optimizer.\n dataloader:\n A torchvision dataloader.\n scheduler:\n A PyTorch learning rate scheduler.\n\n Examples:\n >>> # define a model, criterion, optimizer, and dataloader above\n >>> import lightly.embedding as embedding\n >>> encoder = SelfSupervisedEmbedding(\n >>> model,\n >>> criterion,\n >>> optimizer,\n >>> dataloader,\n >>> )\n >>> #\u00a0train the self-supervised embedding with default settings\n >>> encoder.train_embedding()\n >>> #\u00a0pass pytorch-lightning trainer arguments as kwargs\n >>> encoder.train_embedding(max_epochs=10)\n\n \"\"\"\n\n def __init__(\n self,\n model: torch.nn.Module,\n criterion: torch.nn.Module,\n optimizer: torch.optim.Optimizer,\n dataloader: torch.utils.data.DataLoader,\n scheduler=None,\n ):\n\n super(SelfSupervisedEmbedding, self).__init__(\n model, criterion, optimizer, dataloader, scheduler\n )\n\n def embed(self,\n dataloader: torch.utils.data.DataLoader,\n device: torch.device = None\n ) -> Tuple[np.ndarray, np.ndarray, List[str]]:\n \"\"\"Embeds images in a vector space.\n\n Args:\n dataloader:\n A PyTorch dataloader.\n device:\n Selected device (`cpu`, `cuda`, see PyTorch documentation)\n\n Returns:\n Tuple of (embeddings, labels, filenames) ordered by the\n samples in the dataset of the dataloader.\n embeddings:\n Embedding of shape (n_samples, embedding_feature_size).\n One embedding for each sample.\n labels:\n Labels of shape (n_samples, ).\n filenames:\n The filenames from dataloader.dataset.get_filenames().\n\n\n Examples:\n >>> # embed images in vector space\n >>> embeddings, labels, fnames = encoder.embed(dataloader)\n\n \"\"\"\n\n self.model.eval()\n embeddings, labels, filenames = None, None, []\n\n if lightly._is_prefetch_generator_available():\n dataloader = BackgroundGenerator(dataloader, max_prefetch=3)\n \n pbar = tqdm(\n total=len(dataloader.dataset),\n unit='imgs'\n )\n\n efficiency = 0.0\n embeddings = []\n labels = []\n with torch.no_grad():\n\n start_timepoint = time.time()\n for (image_batch, label_batch, filename_batch) in dataloader:\n\n batch_size = image_batch.shape[0]\n\n # the following 2 lines are needed to prevent a file handler leak,\n # see https://github.com/lightly-ai/lightly/pull/676\n image_batch = image_batch.to(device)\n label_batch = label_batch.clone()\n\n filenames += [*filename_batch]\n\n prepared_timepoint = time.time()\n\n embedding_batch = self.model.backbone(image_batch)\n embedding_batch = embedding_batch.detach().reshape(batch_size, -1)\n\n embeddings.append(embedding_batch)\n labels.append(label_batch)\n\n finished_timepoint = time.time()\n\n data_loading_time = prepared_timepoint - start_timepoint\n inference_time = finished_timepoint - prepared_timepoint\n total_batch_time = data_loading_time + inference_time\n\n efficiency = inference_time / total_batch_time\n pbar.set_description(\"Compute efficiency: {:.2f}\".format(efficiency))\n start_timepoint = time.time()\n\n pbar.update(batch_size)\n\n embeddings = torch.cat(embeddings, 0)\n labels = torch.cat(labels, 0)\n\n embeddings = embeddings.cpu().numpy()\n labels = labels.cpu().numpy()\n\n sorted_filenames = dataloader.dataset.get_filenames()\n 
sorted_embeddings = sort_items_by_keys(\n filenames, embeddings, sorted_filenames\n )\n sorted_labels = sort_items_by_keys(\n filenames, labels, sorted_filenames\n )\n embeddings = np.stack(sorted_embeddings)\n labels = np.stack(sorted_labels)\n\n return embeddings, labels, sorted_filenames\n", "path": "lightly/embedding/embedding.py"}], "after_files": [{"content": "\"\"\" Embedding Strategies \"\"\"\n\n# Copyright (c) 2020. Lightly AG and its affiliates.\n# All Rights Reserved\n\nimport time\nfrom typing import List, Union, Tuple\n\nimport numpy as np\nimport torch\nimport lightly\nfrom lightly.embedding._base import BaseEmbedding\nfrom tqdm import tqdm\n\nfrom lightly.utils.reordering import sort_items_by_keys\n\nif lightly._is_prefetch_generator_available():\n from prefetch_generator import BackgroundGenerator\n\n\nclass SelfSupervisedEmbedding(BaseEmbedding):\n \"\"\"Implementation of self-supervised embedding models.\n\n Implements an embedding strategy based on self-supervised learning. A\n model backbone, self-supervised criterion, optimizer, and dataloader are\n passed to the constructor. The embedding itself is a pytorch-lightning\n module.\n\n The implementation is based on contrastive learning.\n\n * SimCLR: https://arxiv.org/abs/2002.05709\n * MoCo: https://arxiv.org/abs/1911.05722\n * SimSiam: https://arxiv.org/abs/2011.10566\n\n Attributes:\n model:\n A backbone convolutional network with a projection head.\n criterion:\n A contrastive loss function.\n optimizer:\n A PyTorch optimizer.\n dataloader:\n A torchvision dataloader.\n scheduler:\n A PyTorch learning rate scheduler.\n\n Examples:\n >>> # define a model, criterion, optimizer, and dataloader above\n >>> import lightly.embedding as embedding\n >>> encoder = SelfSupervisedEmbedding(\n >>> model,\n >>> criterion,\n >>> optimizer,\n >>> dataloader,\n >>> )\n >>> #\u00a0train the self-supervised embedding with default settings\n >>> encoder.train_embedding()\n >>> #\u00a0pass pytorch-lightning trainer arguments as kwargs\n >>> encoder.train_embedding(max_epochs=10)\n\n \"\"\"\n\n def __init__(\n self,\n model: torch.nn.Module,\n criterion: torch.nn.Module,\n optimizer: torch.optim.Optimizer,\n dataloader: torch.utils.data.DataLoader,\n scheduler=None,\n ):\n\n super(SelfSupervisedEmbedding, self).__init__(\n model, criterion, optimizer, dataloader, scheduler\n )\n\n def embed(self,\n dataloader: torch.utils.data.DataLoader,\n device: torch.device = None\n ) -> Tuple[np.ndarray, np.ndarray, List[str]]:\n \"\"\"Embeds images in a vector space.\n\n Args:\n dataloader:\n A PyTorch dataloader.\n device:\n Selected device (`cpu`, `cuda`, see PyTorch documentation)\n\n Returns:\n Tuple of (embeddings, labels, filenames) ordered by the\n samples in the dataset of the dataloader.\n embeddings:\n Embedding of shape (n_samples, embedding_feature_size).\n One embedding for each sample.\n labels:\n Labels of shape (n_samples, ).\n filenames:\n The filenames from dataloader.dataset.get_filenames().\n\n\n Examples:\n >>> # embed images in vector space\n >>> embeddings, labels, fnames = encoder.embed(dataloader)\n\n \"\"\"\n\n self.model.eval()\n embeddings, labels, filenames = None, None, []\n\n dataset = dataloader.dataset\n if lightly._is_prefetch_generator_available():\n dataloader = BackgroundGenerator(dataloader, max_prefetch=3)\n \n pbar = tqdm(\n total=len(dataset),\n unit='imgs'\n )\n\n efficiency = 0.0\n embeddings = []\n labels = []\n with torch.no_grad():\n\n start_timepoint = time.time()\n for (image_batch, label_batch, 
filename_batch) in dataloader:\n\n batch_size = image_batch.shape[0]\n\n # the following 2 lines are needed to prevent a file handler leak,\n # see https://github.com/lightly-ai/lightly/pull/676\n image_batch = image_batch.to(device)\n label_batch = label_batch.clone()\n\n filenames += [*filename_batch]\n\n prepared_timepoint = time.time()\n\n embedding_batch = self.model.backbone(image_batch)\n embedding_batch = embedding_batch.detach().reshape(batch_size, -1)\n\n embeddings.append(embedding_batch)\n labels.append(label_batch)\n\n finished_timepoint = time.time()\n\n data_loading_time = prepared_timepoint - start_timepoint\n inference_time = finished_timepoint - prepared_timepoint\n total_batch_time = data_loading_time + inference_time\n\n efficiency = inference_time / total_batch_time\n pbar.set_description(\"Compute efficiency: {:.2f}\".format(efficiency))\n start_timepoint = time.time()\n\n pbar.update(batch_size)\n\n embeddings = torch.cat(embeddings, 0)\n labels = torch.cat(labels, 0)\n\n embeddings = embeddings.cpu().numpy()\n labels = labels.cpu().numpy()\n\n sorted_filenames = dataset.get_filenames()\n sorted_embeddings = sort_items_by_keys(\n filenames, embeddings, sorted_filenames\n )\n sorted_labels = sort_items_by_keys(\n filenames, labels, sorted_filenames\n )\n embeddings = np.stack(sorted_embeddings)\n labels = np.stack(sorted_labels)\n\n return embeddings, labels, sorted_filenames\n", "path": "lightly/embedding/embedding.py"}]} | 2,194 | 209 |
gh_patches_debug_24892 | rasdani/github-patches | git_diff | deepset-ai__haystack-1284 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Crawler does not write JSON, but serializes the result dict to a string written in a .json file
First of all, great work on Haystack! It’s an incredible library and I really enjoy playing around with it!
I noticed an odd behavior of the Crawler as in this example:
```python
from haystack.connector import Crawler
crawler = Crawler()
# crawl Haystack docs, i.e. all pages that include haystack.deepset.ai/docs/
docs = crawler.crawl(urls=["https://haystack.deepset.ai/docs/latest/get_startedmd"],
output_dir="crawled_files",
filter_urls= ["haystack\.deepset\.ai\/docs\/"])
```
The resulting file looks like this:
```
{'meta': {'url': 'https://haystack.deepset.ai/docs/latest/get_startedmd'}, 'text': 'Knowledge...
```
which is not valid JSON, as described in the docs. The Crawler rather simply serializes the data dict to string and writes it into a JSON file.
A working import to load the result in a next step looks like this:
```python
import ast
# docs[0] being the first result from the crawl run
with open(docs[0], 'r', encoding='utf-8') as f:
result = ast.literal_eval(f.read())
```
instead of `json.read( ... )`.
As I don’t have the overview of the entire lib and how the created text files are used across the different pipelines, I am hesitant to use propose a solution. So I am raising this as a slightly odd behavior for now. Happy to provide a fix though given guidance from other developers.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `haystack/connector/crawler.py`
Content:
```
1 import logging
2 import re
3 from pathlib import Path
4 from urllib.parse import urlparse
5 from typing import List, Any, Optional, Dict, Tuple, Union
6 from haystack.schema import Document, BaseComponent
7 logger = logging.getLogger(__name__)
8
9
10 class Crawler(BaseComponent):
11 """
12 Crawl texts from a website so that we can use them later in Haystack as a corpus for search / question answering etc.
13
14 **Example:**
15 ```python
16 | from haystack.connector import Crawler
17 |
18 | crawler = Crawler()
19 | # crawl Haystack docs, i.e. all pages that include haystack.deepset.ai/docs/
20 | docs = crawler.crawl(urls=["https://haystack.deepset.ai/docs/latest/get_startedmd"],
21 | output_dir="crawled_files",
22 | filter_urls= ["haystack\.deepset\.ai\/docs\/"])
23 ```
24 """
25
26 outgoing_edges = 1
27
28 def __init__(self, output_dir: str, urls: Optional[List[str]] = None, crawler_depth: int = 1,
29 filter_urls: Optional[List] = None, overwrite_existing_files=True):
30 """
31 Init object with basic params for crawling (can be overwritten later).
32
33 :param output_dir: Path for the directory to store files
34 :param urls: List of http(s) address(es) (can also be supplied later when calling crawl())
35 :param crawler_depth: How many sublinks to follow from the initial list of URLs. Current options:
36 0: Only initial list of urls
37 1: Follow links found on the initial URLs (but no further)
38 :param filter_urls: Optional list of regular expressions that the crawled URLs must comply with.
39 All URLs not matching at least one of the regular expressions will be dropped.
40 :param overwrite_existing_files: Whether to overwrite existing files in output_dir with new content
41 """
42 try:
43 from webdriver_manager.chrome import ChromeDriverManager
44 except ImportError:
45 raise ImportError("Can't find package `webdriver-manager` \n"
46 "You can install it via `pip install webdriver-manager`")
47
48 try:
49 from selenium import webdriver
50 except ImportError:
51 raise ImportError("Can't find package `selenium` \n"
52 "You can install it via `pip install selenium`")
53
54 options = webdriver.chrome.options.Options()
55 options.add_argument('--headless')
56 self.driver = webdriver.Chrome(ChromeDriverManager().install(), options=options)
57 self.urls = urls
58 self.output_dir = output_dir
59 self.crawler_depth = crawler_depth
60 self.filter_urls = filter_urls
61 self.overwrite_existing_files = overwrite_existing_files
62
63 def crawl(self, output_dir: Union[str, Path, None] = None,
64 urls: Optional[List[str]] = None,
65 crawler_depth: Optional[int] = None,
66 filter_urls: Optional[List] = None,
67 overwrite_existing_files: Optional[bool] = None) -> List[Path]:
68 """
69 Craw URL(s), extract the text from the HTML, create a Haystack Document object out of it and save it (one JSON
70 file per URL, including text and basic meta data).
71 You can optionally specify via `filter_urls` to only crawl URLs that match a certain pattern.
72 All parameters are optional here and only meant to overwrite instance attributes at runtime.
73 If no parameters are provided to this method, the instance attributes that were passed during __init__ will be used.
74
75 :param output_dir: Path for the directory to store files
76 :param urls: List of http addresses or single http address
77 :param crawler_depth: How many sublinks to follow from the initial list of URLs. Current options:
78 0: Only initial list of urls
79 1: Follow links found on the initial URLs (but no further)
80 :param filter_urls: Optional list of regular expressions that the crawled URLs must comply with.
81 All URLs not matching at least one of the regular expressions will be dropped.
82 :param overwrite_existing_files: Whether to overwrite existing files in output_dir with new content
83
84 :return: List of paths where the crawled webpages got stored
85 """
86 # use passed params or fallback to instance attributes
87 urls = urls or self.urls
88 if urls is None:
89 raise ValueError("Got no urls to crawl. Set `urls` to a list of URLs in __init__(), crawl() or run(). `")
90 output_dir = output_dir or self.output_dir
91 filter_urls = filter_urls or self.filter_urls
92 if overwrite_existing_files is None:
93 overwrite_existing_files = self.overwrite_existing_files
94 if crawler_depth is None:
95 crawler_depth = self.crawler_depth
96
97 output_dir = Path(output_dir)
98 if not output_dir.exists():
99 output_dir.mkdir(parents=True)
100
101 is_not_empty = len(list(output_dir.rglob("*"))) > 0
102 if is_not_empty and not overwrite_existing_files:
103 logger.info(
104 f"Found data stored in `{output_dir}`. Delete this first if you really want to fetch new data."
105 )
106 return []
107 else:
108 logger.info(f"Fetching from {urls} to `{output_dir}`")
109
110 filepaths = []
111
112 sub_links: Dict[str, List] = {}
113
114 # don't go beyond the initial list of urls
115 if crawler_depth == 0:
116 filepaths += self._write_to_files(urls, output_dir=output_dir)
117 # follow one level of sublinks
118 elif crawler_depth == 1:
119 for url_ in urls:
120 existed_links: List = list(sum(list(sub_links.values()), []))
121 sub_links[url_] = list(self._extract_sublinks_from_url(base_url=url_, filter_urls=filter_urls,
122 existed_links=existed_links))
123 for url in sub_links:
124 filepaths += self._write_to_files(sub_links[url], output_dir=output_dir, base_url=url)
125
126 return filepaths
127
128 def _write_to_files(self, urls: List[str], output_dir: Path, base_url: str = None) -> List[Path]:
129 paths = []
130 for link in urls:
131 logger.info(f"writing contents from `{link}`")
132 self.driver.get(link)
133 el = self.driver.find_element_by_tag_name('body')
134 text = el.text
135
136 link_split_values = link.replace('https://', '').split('/')
137 file_name = f"{'_'.join(link_split_values)}.json"
138 file_path = output_dir / file_name
139
140 data = {}
141 data['meta'] = {'url': link}
142 if base_url:
143 data['meta']['base_url'] = base_url
144 data['text'] = text
145 with open(file_path, 'w', encoding='utf-8') as f:
146 f.write(str(data))
147 paths.append(file_path)
148
149 return paths
150
151 def run(self, output_dir: Union[str, Path, None] = None, urls: Optional[List[str]] = None, # type: ignore
152 crawler_depth: Optional[int] = None, filter_urls: Optional[List] = None, # type: ignore
153 overwrite_existing_files: Optional[bool] = None, **kwargs) -> Tuple[Dict, str]: # type: ignore
154 """
155 Method to be executed when the Crawler is used as a Node within a Haystack pipeline.
156
157 :param output_dir: Path for the directory to store files
158 :param urls: List of http addresses or single http address
159 :param crawler_depth: How many sublinks to follow from the initial list of URLs. Current options:
160 0: Only initial list of urls
161 1: Follow links found on the initial URLs (but no further)
162 :param filter_urls: Optional list of regular expressions that the crawled URLs must comply with.
163 All URLs not matching at least one of the regular expressions will be dropped.
164 :param overwrite_existing_files: Whether to overwrite existing files in output_dir with new content
165
166 :return: Tuple({"paths": List of filepaths, ...}, Name of output edge)
167 """
168
169 filepaths = self.crawl(urls=urls, output_dir=output_dir, crawler_depth=crawler_depth, filter_urls=filter_urls,
170 overwrite_existing_files=overwrite_existing_files)
171 results = {"paths": filepaths}
172 results.update(**kwargs)
173 return results, "output_1"
174
175 @staticmethod
176 def _is_internal_url(base_url: str, sub_link: str) -> bool:
177 base_url_ = urlparse(base_url)
178 sub_link_ = urlparse(sub_link)
179 return base_url_.scheme == sub_link_.scheme and base_url_.netloc == sub_link_.netloc
180
181 @staticmethod
182 def _is_inpage_navigation(base_url: str, sub_link: str) -> bool:
183 base_url_ = urlparse(base_url)
184 sub_link_ = urlparse(sub_link)
185 return base_url_.path == sub_link_.path and base_url_.netloc == sub_link_.netloc
186
187 def _extract_sublinks_from_url(self, base_url: str,
188 filter_urls: Optional[List] = None,
189 existed_links: List = None) -> set:
190 self.driver.get(base_url)
191 a_elements = self.driver.find_elements_by_tag_name('a')
192 sub_links = set()
193 if not (existed_links and base_url in existed_links):
194 if filter_urls:
195 if re.compile('|'.join(filter_urls)).search(base_url):
196 sub_links.add(base_url)
197
198 for i in a_elements:
199 sub_link = i.get_attribute('href')
200 if not (existed_links and sub_link in existed_links):
201 if self._is_internal_url(base_url=base_url, sub_link=sub_link) \
202 and (not self._is_inpage_navigation(base_url=base_url, sub_link=sub_link)):
203 if filter_urls:
204 if re.compile('|'.join(filter_urls)).search(sub_link):
205 sub_links.add(sub_link)
206 else:
207 sub_links.add(sub_link)
208
209 return sub_links
210
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/haystack/connector/crawler.py b/haystack/connector/crawler.py
--- a/haystack/connector/crawler.py
+++ b/haystack/connector/crawler.py
@@ -1,3 +1,4 @@
+import json
import logging
import re
from pathlib import Path
@@ -15,10 +16,9 @@
```python
| from haystack.connector import Crawler
|
- | crawler = Crawler()
+ | crawler = Crawler(output_dir="crawled_files")
| # crawl Haystack docs, i.e. all pages that include haystack.deepset.ai/docs/
| docs = crawler.crawl(urls=["https://haystack.deepset.ai/docs/latest/get_startedmd"],
- | output_dir="crawled_files",
| filter_urls= ["haystack\.deepset\.ai\/docs\/"])
```
"""
@@ -143,7 +143,7 @@
data['meta']['base_url'] = base_url
data['text'] = text
with open(file_path, 'w', encoding='utf-8') as f:
- f.write(str(data))
+ json.dump(data, f)
paths.append(file_path)
return paths
| {"golden_diff": "diff --git a/haystack/connector/crawler.py b/haystack/connector/crawler.py\n--- a/haystack/connector/crawler.py\n+++ b/haystack/connector/crawler.py\n@@ -1,3 +1,4 @@\n+import json\n import logging\n import re\n from pathlib import Path\n@@ -15,10 +16,9 @@\n ```python\n | from haystack.connector import Crawler\n |\n- | crawler = Crawler()\n+ | crawler = Crawler(output_dir=\"crawled_files\")\n | # crawl Haystack docs, i.e. all pages that include haystack.deepset.ai/docs/\n | docs = crawler.crawl(urls=[\"https://haystack.deepset.ai/docs/latest/get_startedmd\"],\n- | output_dir=\"crawled_files\",\n | filter_urls= [\"haystack\\.deepset\\.ai\\/docs\\/\"])\n ```\n \"\"\"\n@@ -143,7 +143,7 @@\n data['meta']['base_url'] = base_url\n data['text'] = text\n with open(file_path, 'w', encoding='utf-8') as f:\n- f.write(str(data))\n+ json.dump(data, f)\n paths.append(file_path)\n \n return paths\n", "issue": "Crawler does not write JSON, but serializes the result dict to a string written in a .json file\nFirst of all, great work on Haystack! It\u2019s an incredible library and I really enjoy playing around with it!\r\n\r\nI noticed an odd behavior of the Crawler as in this example:\r\n\r\n```python\r\nfrom haystack.connector import Crawler\r\n\r\ncrawler = Crawler()\r\n# crawl Haystack docs, i.e. all pages that include haystack.deepset.ai/docs/\r\ndocs = crawler.crawl(urls=[\"https://haystack.deepset.ai/docs/latest/get_startedmd\"],\r\n output_dir=\"crawled_files\",\r\n filter_urls= [\"haystack\\.deepset\\.ai\\/docs\\/\"])\r\n```\r\n\r\nThe resulting file looks like this:\r\n\r\n```\r\n{'meta': {'url': 'https://haystack.deepset.ai/docs/latest/get_startedmd'}, 'text': 'Knowledge...\r\n```\r\n\r\nwhich is not valid JSON, as described in the docs. The Crawler rather simply serializes the data dict to string and writes it into a JSON file.\r\n\r\nA working import to load the result in a next step looks like this:\r\n\r\n```python\r\nimport ast\r\n\r\n# docs[0] being the first result from the crawl run\r\nwith open(docs[0], 'r', encoding='utf-8') as f:\r\n result = ast.literal_eval(f.read())\r\n```\r\n\r\ninstead of `json.read( ... )`.\r\n\r\nAs I don\u2019t have the overview of the entire lib and how the created text files are used across the different pipelines, I am hesitant to use propose a solution. So I am raising this as a slightly odd behavior for now. Happy to provide a fix though given guidance from other developers.\n", "before_files": [{"content": "import logging\nimport re\nfrom pathlib import Path\nfrom urllib.parse import urlparse\nfrom typing import List, Any, Optional, Dict, Tuple, Union\nfrom haystack.schema import Document, BaseComponent\nlogger = logging.getLogger(__name__)\n\n\nclass Crawler(BaseComponent):\n \"\"\"\n Crawl texts from a website so that we can use them later in Haystack as a corpus for search / question answering etc.\n\n **Example:**\n ```python\n | from haystack.connector import Crawler\n |\n | crawler = Crawler()\n | # crawl Haystack docs, i.e. 
all pages that include haystack.deepset.ai/docs/\n | docs = crawler.crawl(urls=[\"https://haystack.deepset.ai/docs/latest/get_startedmd\"],\n | output_dir=\"crawled_files\",\n | filter_urls= [\"haystack\\.deepset\\.ai\\/docs\\/\"])\n ```\n \"\"\"\n\n outgoing_edges = 1\n\n def __init__(self, output_dir: str, urls: Optional[List[str]] = None, crawler_depth: int = 1,\n filter_urls: Optional[List] = None, overwrite_existing_files=True):\n \"\"\"\n Init object with basic params for crawling (can be overwritten later).\n\n :param output_dir: Path for the directory to store files\n :param urls: List of http(s) address(es) (can also be supplied later when calling crawl())\n :param crawler_depth: How many sublinks to follow from the initial list of URLs. Current options:\n 0: Only initial list of urls\n 1: Follow links found on the initial URLs (but no further)\n :param filter_urls: Optional list of regular expressions that the crawled URLs must comply with.\n All URLs not matching at least one of the regular expressions will be dropped.\n :param overwrite_existing_files: Whether to overwrite existing files in output_dir with new content\n \"\"\"\n try:\n from webdriver_manager.chrome import ChromeDriverManager\n except ImportError:\n raise ImportError(\"Can't find package `webdriver-manager` \\n\"\n \"You can install it via `pip install webdriver-manager`\")\n\n try:\n from selenium import webdriver\n except ImportError:\n raise ImportError(\"Can't find package `selenium` \\n\"\n \"You can install it via `pip install selenium`\")\n\n options = webdriver.chrome.options.Options()\n options.add_argument('--headless')\n self.driver = webdriver.Chrome(ChromeDriverManager().install(), options=options)\n self.urls = urls\n self.output_dir = output_dir\n self.crawler_depth = crawler_depth\n self.filter_urls = filter_urls\n self.overwrite_existing_files = overwrite_existing_files\n\n def crawl(self, output_dir: Union[str, Path, None] = None,\n urls: Optional[List[str]] = None,\n crawler_depth: Optional[int] = None,\n filter_urls: Optional[List] = None,\n overwrite_existing_files: Optional[bool] = None) -> List[Path]:\n \"\"\"\n Craw URL(s), extract the text from the HTML, create a Haystack Document object out of it and save it (one JSON\n file per URL, including text and basic meta data).\n You can optionally specify via `filter_urls` to only crawl URLs that match a certain pattern.\n All parameters are optional here and only meant to overwrite instance attributes at runtime.\n If no parameters are provided to this method, the instance attributes that were passed during __init__ will be used.\n\n :param output_dir: Path for the directory to store files\n :param urls: List of http addresses or single http address\n :param crawler_depth: How many sublinks to follow from the initial list of URLs. Current options:\n 0: Only initial list of urls\n 1: Follow links found on the initial URLs (but no further)\n :param filter_urls: Optional list of regular expressions that the crawled URLs must comply with.\n All URLs not matching at least one of the regular expressions will be dropped.\n :param overwrite_existing_files: Whether to overwrite existing files in output_dir with new content\n\n :return: List of paths where the crawled webpages got stored\n \"\"\"\n # use passed params or fallback to instance attributes\n urls = urls or self.urls\n if urls is None:\n raise ValueError(\"Got no urls to crawl. Set `urls` to a list of URLs in __init__(), crawl() or run(). 
`\")\n output_dir = output_dir or self.output_dir\n filter_urls = filter_urls or self.filter_urls\n if overwrite_existing_files is None:\n overwrite_existing_files = self.overwrite_existing_files\n if crawler_depth is None:\n crawler_depth = self.crawler_depth\n\n output_dir = Path(output_dir)\n if not output_dir.exists():\n output_dir.mkdir(parents=True)\n\n is_not_empty = len(list(output_dir.rglob(\"*\"))) > 0\n if is_not_empty and not overwrite_existing_files:\n logger.info(\n f\"Found data stored in `{output_dir}`. Delete this first if you really want to fetch new data.\"\n )\n return []\n else:\n logger.info(f\"Fetching from {urls} to `{output_dir}`\")\n\n filepaths = []\n\n sub_links: Dict[str, List] = {}\n\n # don't go beyond the initial list of urls\n if crawler_depth == 0:\n filepaths += self._write_to_files(urls, output_dir=output_dir)\n # follow one level of sublinks\n elif crawler_depth == 1:\n for url_ in urls:\n existed_links: List = list(sum(list(sub_links.values()), []))\n sub_links[url_] = list(self._extract_sublinks_from_url(base_url=url_, filter_urls=filter_urls,\n existed_links=existed_links))\n for url in sub_links:\n filepaths += self._write_to_files(sub_links[url], output_dir=output_dir, base_url=url)\n\n return filepaths\n\n def _write_to_files(self, urls: List[str], output_dir: Path, base_url: str = None) -> List[Path]:\n paths = []\n for link in urls:\n logger.info(f\"writing contents from `{link}`\")\n self.driver.get(link)\n el = self.driver.find_element_by_tag_name('body')\n text = el.text\n\n link_split_values = link.replace('https://', '').split('/')\n file_name = f\"{'_'.join(link_split_values)}.json\"\n file_path = output_dir / file_name\n\n data = {}\n data['meta'] = {'url': link}\n if base_url:\n data['meta']['base_url'] = base_url\n data['text'] = text\n with open(file_path, 'w', encoding='utf-8') as f:\n f.write(str(data))\n paths.append(file_path)\n\n return paths\n\n def run(self, output_dir: Union[str, Path, None] = None, urls: Optional[List[str]] = None, # type: ignore\n crawler_depth: Optional[int] = None, filter_urls: Optional[List] = None, # type: ignore\n overwrite_existing_files: Optional[bool] = None, **kwargs) -> Tuple[Dict, str]: # type: ignore\n \"\"\"\n Method to be executed when the Crawler is used as a Node within a Haystack pipeline.\n\n :param output_dir: Path for the directory to store files\n :param urls: List of http addresses or single http address\n :param crawler_depth: How many sublinks to follow from the initial list of URLs. 
Current options:\n 0: Only initial list of urls\n 1: Follow links found on the initial URLs (but no further)\n :param filter_urls: Optional list of regular expressions that the crawled URLs must comply with.\n All URLs not matching at least one of the regular expressions will be dropped.\n :param overwrite_existing_files: Whether to overwrite existing files in output_dir with new content\n\n :return: Tuple({\"paths\": List of filepaths, ...}, Name of output edge)\n \"\"\"\n\n filepaths = self.crawl(urls=urls, output_dir=output_dir, crawler_depth=crawler_depth, filter_urls=filter_urls,\n overwrite_existing_files=overwrite_existing_files)\n results = {\"paths\": filepaths}\n results.update(**kwargs)\n return results, \"output_1\"\n\n @staticmethod\n def _is_internal_url(base_url: str, sub_link: str) -> bool:\n base_url_ = urlparse(base_url)\n sub_link_ = urlparse(sub_link)\n return base_url_.scheme == sub_link_.scheme and base_url_.netloc == sub_link_.netloc\n\n @staticmethod\n def _is_inpage_navigation(base_url: str, sub_link: str) -> bool:\n base_url_ = urlparse(base_url)\n sub_link_ = urlparse(sub_link)\n return base_url_.path == sub_link_.path and base_url_.netloc == sub_link_.netloc\n\n def _extract_sublinks_from_url(self, base_url: str,\n filter_urls: Optional[List] = None,\n existed_links: List = None) -> set:\n self.driver.get(base_url)\n a_elements = self.driver.find_elements_by_tag_name('a')\n sub_links = set()\n if not (existed_links and base_url in existed_links):\n if filter_urls:\n if re.compile('|'.join(filter_urls)).search(base_url):\n sub_links.add(base_url)\n\n for i in a_elements:\n sub_link = i.get_attribute('href')\n if not (existed_links and sub_link in existed_links):\n if self._is_internal_url(base_url=base_url, sub_link=sub_link) \\\n and (not self._is_inpage_navigation(base_url=base_url, sub_link=sub_link)):\n if filter_urls:\n if re.compile('|'.join(filter_urls)).search(sub_link):\n sub_links.add(sub_link)\n else:\n sub_links.add(sub_link)\n\n return sub_links\n", "path": "haystack/connector/crawler.py"}], "after_files": [{"content": "import json\nimport logging\nimport re\nfrom pathlib import Path\nfrom urllib.parse import urlparse\nfrom typing import List, Any, Optional, Dict, Tuple, Union\nfrom haystack.schema import Document, BaseComponent\nlogger = logging.getLogger(__name__)\n\n\nclass Crawler(BaseComponent):\n \"\"\"\n Crawl texts from a website so that we can use them later in Haystack as a corpus for search / question answering etc.\n\n **Example:**\n ```python\n | from haystack.connector import Crawler\n |\n | crawler = Crawler(output_dir=\"crawled_files\")\n | # crawl Haystack docs, i.e. all pages that include haystack.deepset.ai/docs/\n | docs = crawler.crawl(urls=[\"https://haystack.deepset.ai/docs/latest/get_startedmd\"],\n | filter_urls= [\"haystack\\.deepset\\.ai\\/docs\\/\"])\n ```\n \"\"\"\n\n outgoing_edges = 1\n\n def __init__(self, output_dir: str, urls: Optional[List[str]] = None, crawler_depth: int = 1,\n filter_urls: Optional[List] = None, overwrite_existing_files=True):\n \"\"\"\n Init object with basic params for crawling (can be overwritten later).\n\n :param output_dir: Path for the directory to store files\n :param urls: List of http(s) address(es) (can also be supplied later when calling crawl())\n :param crawler_depth: How many sublinks to follow from the initial list of URLs. 
Current options:\n 0: Only initial list of urls\n 1: Follow links found on the initial URLs (but no further)\n :param filter_urls: Optional list of regular expressions that the crawled URLs must comply with.\n All URLs not matching at least one of the regular expressions will be dropped.\n :param overwrite_existing_files: Whether to overwrite existing files in output_dir with new content\n \"\"\"\n try:\n from webdriver_manager.chrome import ChromeDriverManager\n except ImportError:\n raise ImportError(\"Can't find package `webdriver-manager` \\n\"\n \"You can install it via `pip install webdriver-manager`\")\n\n try:\n from selenium import webdriver\n except ImportError:\n raise ImportError(\"Can't find package `selenium` \\n\"\n \"You can install it via `pip install selenium`\")\n\n options = webdriver.chrome.options.Options()\n options.add_argument('--headless')\n self.driver = webdriver.Chrome(ChromeDriverManager().install(), options=options)\n self.urls = urls\n self.output_dir = output_dir\n self.crawler_depth = crawler_depth\n self.filter_urls = filter_urls\n self.overwrite_existing_files = overwrite_existing_files\n\n def crawl(self, output_dir: Union[str, Path, None] = None,\n urls: Optional[List[str]] = None,\n crawler_depth: Optional[int] = None,\n filter_urls: Optional[List] = None,\n overwrite_existing_files: Optional[bool] = None) -> List[Path]:\n \"\"\"\n Craw URL(s), extract the text from the HTML, create a Haystack Document object out of it and save it (one JSON\n file per URL, including text and basic meta data).\n You can optionally specify via `filter_urls` to only crawl URLs that match a certain pattern.\n All parameters are optional here and only meant to overwrite instance attributes at runtime.\n If no parameters are provided to this method, the instance attributes that were passed during __init__ will be used.\n\n :param output_dir: Path for the directory to store files\n :param urls: List of http addresses or single http address\n :param crawler_depth: How many sublinks to follow from the initial list of URLs. Current options:\n 0: Only initial list of urls\n 1: Follow links found on the initial URLs (but no further)\n :param filter_urls: Optional list of regular expressions that the crawled URLs must comply with.\n All URLs not matching at least one of the regular expressions will be dropped.\n :param overwrite_existing_files: Whether to overwrite existing files in output_dir with new content\n\n :return: List of paths where the crawled webpages got stored\n \"\"\"\n # use passed params or fallback to instance attributes\n urls = urls or self.urls\n if urls is None:\n raise ValueError(\"Got no urls to crawl. Set `urls` to a list of URLs in __init__(), crawl() or run(). `\")\n output_dir = output_dir or self.output_dir\n filter_urls = filter_urls or self.filter_urls\n if overwrite_existing_files is None:\n overwrite_existing_files = self.overwrite_existing_files\n if crawler_depth is None:\n crawler_depth = self.crawler_depth\n\n output_dir = Path(output_dir)\n if not output_dir.exists():\n output_dir.mkdir(parents=True)\n\n is_not_empty = len(list(output_dir.rglob(\"*\"))) > 0\n if is_not_empty and not overwrite_existing_files:\n logger.info(\n f\"Found data stored in `{output_dir}`. 
Delete this first if you really want to fetch new data.\"\n )\n return []\n else:\n logger.info(f\"Fetching from {urls} to `{output_dir}`\")\n\n filepaths = []\n\n sub_links: Dict[str, List] = {}\n\n # don't go beyond the initial list of urls\n if crawler_depth == 0:\n filepaths += self._write_to_files(urls, output_dir=output_dir)\n # follow one level of sublinks\n elif crawler_depth == 1:\n for url_ in urls:\n existed_links: List = list(sum(list(sub_links.values()), []))\n sub_links[url_] = list(self._extract_sublinks_from_url(base_url=url_, filter_urls=filter_urls,\n existed_links=existed_links))\n for url in sub_links:\n filepaths += self._write_to_files(sub_links[url], output_dir=output_dir, base_url=url)\n\n return filepaths\n\n def _write_to_files(self, urls: List[str], output_dir: Path, base_url: str = None) -> List[Path]:\n paths = []\n for link in urls:\n logger.info(f\"writing contents from `{link}`\")\n self.driver.get(link)\n el = self.driver.find_element_by_tag_name('body')\n text = el.text\n\n link_split_values = link.replace('https://', '').split('/')\n file_name = f\"{'_'.join(link_split_values)}.json\"\n file_path = output_dir / file_name\n\n data = {}\n data['meta'] = {'url': link}\n if base_url:\n data['meta']['base_url'] = base_url\n data['text'] = text\n with open(file_path, 'w', encoding='utf-8') as f:\n json.dump(data, f)\n paths.append(file_path)\n\n return paths\n\n def run(self, output_dir: Union[str, Path, None] = None, urls: Optional[List[str]] = None, # type: ignore\n crawler_depth: Optional[int] = None, filter_urls: Optional[List] = None, # type: ignore\n overwrite_existing_files: Optional[bool] = None, **kwargs) -> Tuple[Dict, str]: # type: ignore\n \"\"\"\n Method to be executed when the Crawler is used as a Node within a Haystack pipeline.\n\n :param output_dir: Path for the directory to store files\n :param urls: List of http addresses or single http address\n :param crawler_depth: How many sublinks to follow from the initial list of URLs. 
Current options:\n 0: Only initial list of urls\n 1: Follow links found on the initial URLs (but no further)\n :param filter_urls: Optional list of regular expressions that the crawled URLs must comply with.\n All URLs not matching at least one of the regular expressions will be dropped.\n :param overwrite_existing_files: Whether to overwrite existing files in output_dir with new content\n\n :return: Tuple({\"paths\": List of filepaths, ...}, Name of output edge)\n \"\"\"\n\n filepaths = self.crawl(urls=urls, output_dir=output_dir, crawler_depth=crawler_depth, filter_urls=filter_urls,\n overwrite_existing_files=overwrite_existing_files)\n results = {\"paths\": filepaths}\n results.update(**kwargs)\n return results, \"output_1\"\n\n @staticmethod\n def _is_internal_url(base_url: str, sub_link: str) -> bool:\n base_url_ = urlparse(base_url)\n sub_link_ = urlparse(sub_link)\n return base_url_.scheme == sub_link_.scheme and base_url_.netloc == sub_link_.netloc\n\n @staticmethod\n def _is_inpage_navigation(base_url: str, sub_link: str) -> bool:\n base_url_ = urlparse(base_url)\n sub_link_ = urlparse(sub_link)\n return base_url_.path == sub_link_.path and base_url_.netloc == sub_link_.netloc\n\n def _extract_sublinks_from_url(self, base_url: str,\n filter_urls: Optional[List] = None,\n existed_links: List = None) -> set:\n self.driver.get(base_url)\n a_elements = self.driver.find_elements_by_tag_name('a')\n sub_links = set()\n if not (existed_links and base_url in existed_links):\n if filter_urls:\n if re.compile('|'.join(filter_urls)).search(base_url):\n sub_links.add(base_url)\n\n for i in a_elements:\n sub_link = i.get_attribute('href')\n if not (existed_links and sub_link in existed_links):\n if self._is_internal_url(base_url=base_url, sub_link=sub_link) \\\n and (not self._is_inpage_navigation(base_url=base_url, sub_link=sub_link)):\n if filter_urls:\n if re.compile('|'.join(filter_urls)).search(sub_link):\n sub_links.add(sub_link)\n else:\n sub_links.add(sub_link)\n\n return sub_links\n", "path": "haystack/connector/crawler.py"}]} | 3,259 | 280 |
gh_patches_debug_40218 | rasdani/github-patches | git_diff | sopel-irc__sopel-927 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove feedparser dependency
The weather module needlessly uses `feedparser` for some things, which adds an unnecessary (and Python 3-incompatible) dependency. It should be done with straight XML processing instead.
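
As a rough sketch of what straight XML processing could look like with `xmltodict` (which the module already imports), the snippet below parses a trimmed, invented sample of the forecast RSS payload. The sample values are made up; the `@attr` access is simply xmltodict's standard convention for XML attributes:

```python
import xmltodict

# invented, trimmed sample of the forecastrss payload
sample_rss = """<rss version="2.0" xmlns:yweather="http://xml.weather.yahoo.com/ns/rss/1.0">
  <channel>
    <title>Yahoo! Weather - London, GB</title>
    <yweather:wind chill="10" direction="230" speed="14.48"/>
    <yweather:atmosphere humidity="77" visibility="9.99" pressure="1015.92" rising="0"/>
    <item>
      <yweather:condition text="Cloudy" code="26" temp="11" date="Thu, 30 Jul 2015 9:00 am BST"/>
    </item>
  </channel>
</rss>"""

parsed = xmltodict.parse(sample_rss).get('rss')

# xmltodict exposes XML attributes with an '@' prefix
condition = parsed['channel']['item']['yweather:condition']
print(condition['@text'], condition['@temp'])                 # Cloudy 11
print(parsed['channel']['yweather:atmosphere']['@humidity'])  # 77
print(parsed['channel']['yweather:wind']['@speed'])           # 14.48
```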
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/weather.py`
Content:
```
1 # coding=utf8
2 """
3 weather.py - Sopel Yahoo! Weather Module
4 Copyright 2008, Sean B. Palmer, inamidst.com
5 Copyright 2012, Edward Powell, embolalia.net
6 Licensed under the Eiffel Forum License 2.
7
8 http://sopel.chat
9 """
10 from __future__ import unicode_literals
11
12 from sopel import web
13 from sopel.module import commands, example, NOLIMIT
14
15 import feedparser
16 import xmltodict
17
18
19 def woeid_search(query):
20 """
21 Find the first Where On Earth ID for the given query. Result is the etree
22 node for the result, so that location data can still be retrieved. Returns
23 None if there is no result, or the woeid field is empty.
24 """
25 query = 'q=select * from geo.placefinder where text="%s"' % query
26 body = web.get('http://query.yahooapis.com/v1/public/yql?' + query,
27 dont_decode=True)
28 parsed = xmltodict.parse(body).get('query')
29 results = parsed.get('results')
30 if results is None or results.get('Result') is None:
31 return None
32 if type(results.get('Result')) is list:
33 return results.get('Result')[0]
34 return results.get('Result')
35
36
37 def get_cover(parsed):
38 try:
39 condition = parsed.entries[0]['yweather_condition']
40 except KeyError:
41 return 'unknown'
42 text = condition['text']
43 # code = int(condition['code'])
44 # TODO parse code to get those little icon thingies.
45 return text
46
47
48 def get_temp(parsed):
49 try:
50 condition = parsed.entries[0]['yweather_condition']
51 temp = int(condition['temp'])
52 except (KeyError, ValueError):
53 return 'unknown'
54 f = round((temp * 1.8) + 32, 2)
55 return (u'%d\u00B0C (%d\u00B0F)' % (temp, f))
56
57
58 def get_humidity(parsed):
59 try:
60 humidity = parsed['feed']['yweather_atmosphere']['humidity']
61 except (KeyError, ValueError):
62 return 'unknown'
63 return "Humidity: %s%%" % humidity
64
65
66 def get_wind(parsed):
67 try:
68 wind_data = parsed['feed']['yweather_wind']
69 kph = float(wind_data['speed'])
70 m_s = float(round(kph / 3.6, 1))
71 speed = int(round(kph / 1.852, 0))
72 degrees = int(wind_data['direction'])
73 except (KeyError, ValueError):
74 return 'unknown'
75
76 if speed < 1:
77 description = 'Calm'
78 elif speed < 4:
79 description = 'Light air'
80 elif speed < 7:
81 description = 'Light breeze'
82 elif speed < 11:
83 description = 'Gentle breeze'
84 elif speed < 16:
85 description = 'Moderate breeze'
86 elif speed < 22:
87 description = 'Fresh breeze'
88 elif speed < 28:
89 description = 'Strong breeze'
90 elif speed < 34:
91 description = 'Near gale'
92 elif speed < 41:
93 description = 'Gale'
94 elif speed < 48:
95 description = 'Strong gale'
96 elif speed < 56:
97 description = 'Storm'
98 elif speed < 64:
99 description = 'Violent storm'
100 else:
101 description = 'Hurricane'
102
103 if (degrees <= 22.5) or (degrees > 337.5):
104 degrees = u'\u2193'
105 elif (degrees > 22.5) and (degrees <= 67.5):
106 degrees = u'\u2199'
107 elif (degrees > 67.5) and (degrees <= 112.5):
108 degrees = u'\u2190'
109 elif (degrees > 112.5) and (degrees <= 157.5):
110 degrees = u'\u2196'
111 elif (degrees > 157.5) and (degrees <= 202.5):
112 degrees = u'\u2191'
113 elif (degrees > 202.5) and (degrees <= 247.5):
114 degrees = u'\u2197'
115 elif (degrees > 247.5) and (degrees <= 292.5):
116 degrees = u'\u2192'
117 elif (degrees > 292.5) and (degrees <= 337.5):
118 degrees = u'\u2198'
119
120 return description + ' ' + str(m_s) + 'm/s (' + degrees + ')'
121
122
123 @commands('weather', 'wea')
124 @example('.weather London')
125 def weather(bot, trigger):
126 """.weather location - Show the weather at the given location."""
127
128 location = trigger.group(2)
129 woeid = ''
130 if not location:
131 woeid = bot.db.get_nick_value(trigger.nick, 'woeid')
132 if not woeid:
133 return bot.msg(trigger.sender, "I don't know where you live. " +
134 'Give me a location, like .weather London, or tell me where you live by saying .setlocation London, for example.')
135 else:
136 location = location.strip()
137 woeid = bot.db.get_nick_value(location, 'woeid')
138 if woeid is None:
139 first_result = woeid_search(location)
140 if first_result is not None:
141 woeid = first_result.get('woeid')
142
143 if not woeid:
144 return bot.reply("I don't know where that is.")
145
146 query = web.urlencode({'w': woeid, 'u': 'c'})
147 url = 'http://weather.yahooapis.com/forecastrss?' + query
148 parsed = feedparser.parse(url)
149 location = parsed['feed']['title']
150
151 cover = get_cover(parsed)
152 temp = get_temp(parsed)
153 humidity = get_humidity(parsed)
154 wind = get_wind(parsed)
155 bot.say(u'%s: %s, %s, %s, %s' % (location, cover, temp, humidity, wind))
156
157
158 @commands('setlocation', 'setwoeid')
159 @example('.setlocation Columbus, OH')
160 def update_woeid(bot, trigger):
161 """Set your default weather location."""
162 if not trigger.group(2):
163 bot.reply('Give me a location, like "Washington, DC" or "London".')
164 return NOLIMIT
165
166 first_result = woeid_search(trigger.group(2))
167 if first_result is None:
168 return bot.reply("I don't know where that is.")
169
170 woeid = first_result.get('woeid')
171
172 bot.db.set_nick_value(trigger.nick, 'woeid', woeid)
173
174 neighborhood = first_result.get('neighborhood').text or ''
175 if neighborhood:
176 neighborhood += ','
177 city = first_result.get('city') or ''
178 state = first_result.get('state') or ''
179 country = first_result.get('country') or ''
180 uzip = first_result.get('uzip') or ''
181 bot.reply('I now have you at WOEID %s (%s %s, %s, %s %s.)' %
182 (woeid, neighborhood, city, state, country, uzip))
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sopel/modules/weather.py b/sopel/modules/weather.py
--- a/sopel/modules/weather.py
+++ b/sopel/modules/weather.py
@@ -12,7 +12,6 @@
from sopel import web
from sopel.module import commands, example, NOLIMIT
-import feedparser
import xmltodict
@@ -36,10 +35,10 @@
def get_cover(parsed):
try:
- condition = parsed.entries[0]['yweather_condition']
+ condition = parsed['channel']['item']['yweather:condition']
except KeyError:
return 'unknown'
- text = condition['text']
+ text = condition['@text']
# code = int(condition['code'])
# TODO parse code to get those little icon thingies.
return text
@@ -47,8 +46,8 @@
def get_temp(parsed):
try:
- condition = parsed.entries[0]['yweather_condition']
- temp = int(condition['temp'])
+ condition = parsed['channel']['item']['yweather:condition']
+ temp = int(condition['@temp'])
except (KeyError, ValueError):
return 'unknown'
f = round((temp * 1.8) + 32, 2)
@@ -57,7 +56,7 @@
def get_humidity(parsed):
try:
- humidity = parsed['feed']['yweather_atmosphere']['humidity']
+ humidity = parsed['channel']['yweather:atmosphere']['@humidity']
except (KeyError, ValueError):
return 'unknown'
return "Humidity: %s%%" % humidity
@@ -65,11 +64,11 @@
def get_wind(parsed):
try:
- wind_data = parsed['feed']['yweather_wind']
- kph = float(wind_data['speed'])
+ wind_data = parsed['channel']['yweather:wind']
+ kph = float(wind_data['@speed'])
m_s = float(round(kph / 3.6, 1))
speed = int(round(kph / 1.852, 0))
- degrees = int(wind_data['direction'])
+ degrees = int(wind_data['@direction'])
except (KeyError, ValueError):
return 'unknown'
@@ -144,9 +143,10 @@
return bot.reply("I don't know where that is.")
query = web.urlencode({'w': woeid, 'u': 'c'})
- url = 'http://weather.yahooapis.com/forecastrss?' + query
- parsed = feedparser.parse(url)
- location = parsed['feed']['title']
+ raw = web.get('http://weather.yahooapis.com/forecastrss?' + query,
+ dont_decode=True)
+ parsed = xmltodict.parse(raw).get('rss')
+ location = parsed.get('channel').get('title')
cover = get_cover(parsed)
temp = get_temp(parsed)
@@ -171,7 +171,7 @@
bot.db.set_nick_value(trigger.nick, 'woeid', woeid)
- neighborhood = first_result.get('neighborhood').text or ''
+ neighborhood = first_result.get('neighborhood') or ''
if neighborhood:
neighborhood += ','
city = first_result.get('city') or ''
| {"golden_diff": "diff --git a/sopel/modules/weather.py b/sopel/modules/weather.py\n--- a/sopel/modules/weather.py\n+++ b/sopel/modules/weather.py\n@@ -12,7 +12,6 @@\n from sopel import web\n from sopel.module import commands, example, NOLIMIT\n \n-import feedparser\n import xmltodict\n \n \n@@ -36,10 +35,10 @@\n \n def get_cover(parsed):\n try:\n- condition = parsed.entries[0]['yweather_condition']\n+ condition = parsed['channel']['item']['yweather:condition']\n except KeyError:\n return 'unknown'\n- text = condition['text']\n+ text = condition['@text']\n # code = int(condition['code'])\n # TODO parse code to get those little icon thingies.\n return text\n@@ -47,8 +46,8 @@\n \n def get_temp(parsed):\n try:\n- condition = parsed.entries[0]['yweather_condition']\n- temp = int(condition['temp'])\n+ condition = parsed['channel']['item']['yweather:condition']\n+ temp = int(condition['@temp'])\n except (KeyError, ValueError):\n return 'unknown'\n f = round((temp * 1.8) + 32, 2)\n@@ -57,7 +56,7 @@\n \n def get_humidity(parsed):\n try:\n- humidity = parsed['feed']['yweather_atmosphere']['humidity']\n+ humidity = parsed['channel']['yweather:atmosphere']['@humidity']\n except (KeyError, ValueError):\n return 'unknown'\n return \"Humidity: %s%%\" % humidity\n@@ -65,11 +64,11 @@\n \n def get_wind(parsed):\n try:\n- wind_data = parsed['feed']['yweather_wind']\n- kph = float(wind_data['speed'])\n+ wind_data = parsed['channel']['yweather:wind']\n+ kph = float(wind_data['@speed'])\n m_s = float(round(kph / 3.6, 1))\n speed = int(round(kph / 1.852, 0))\n- degrees = int(wind_data['direction'])\n+ degrees = int(wind_data['@direction'])\n except (KeyError, ValueError):\n return 'unknown'\n \n@@ -144,9 +143,10 @@\n return bot.reply(\"I don't know where that is.\")\n \n query = web.urlencode({'w': woeid, 'u': 'c'})\n- url = 'http://weather.yahooapis.com/forecastrss?' + query\n- parsed = feedparser.parse(url)\n- location = parsed['feed']['title']\n+ raw = web.get('http://weather.yahooapis.com/forecastrss?' + query, \n+ dont_decode=True)\n+ parsed = xmltodict.parse(raw).get('rss')\n+ location = parsed.get('channel').get('title')\n \n cover = get_cover(parsed)\n temp = get_temp(parsed)\n@@ -171,7 +171,7 @@\n \n bot.db.set_nick_value(trigger.nick, 'woeid', woeid)\n \n- neighborhood = first_result.get('neighborhood').text or ''\n+ neighborhood = first_result.get('neighborhood') or ''\n if neighborhood:\n neighborhood += ','\n city = first_result.get('city') or ''\n", "issue": "Remove feedparser dependency\nThe weather module needlessly uses `feedparser` for some things, which adds a needless (python3-incompatible) dependency. It should be done with straight XML processing, instead.\n\n", "before_files": [{"content": "# coding=utf8\n\"\"\"\nweather.py - Sopel Yahoo! Weather Module\nCopyright 2008, Sean B. Palmer, inamidst.com\nCopyright 2012, Edward Powell, embolalia.net\nLicensed under the Eiffel Forum License 2.\n\nhttp://sopel.chat\n\"\"\"\nfrom __future__ import unicode_literals\n\nfrom sopel import web\nfrom sopel.module import commands, example, NOLIMIT\n\nimport feedparser\nimport xmltodict\n\n\ndef woeid_search(query):\n \"\"\"\n Find the first Where On Earth ID for the given query. Result is the etree\n node for the result, so that location data can still be retrieved. Returns\n None if there is no result, or the woeid field is empty.\n \"\"\"\n query = 'q=select * from geo.placefinder where text=\"%s\"' % query\n body = web.get('http://query.yahooapis.com/v1/public/yql?' 
+ query,\n dont_decode=True)\n parsed = xmltodict.parse(body).get('query')\n results = parsed.get('results')\n if results is None or results.get('Result') is None:\n return None\n if type(results.get('Result')) is list:\n return results.get('Result')[0]\n return results.get('Result')\n\n\ndef get_cover(parsed):\n try:\n condition = parsed.entries[0]['yweather_condition']\n except KeyError:\n return 'unknown'\n text = condition['text']\n # code = int(condition['code'])\n # TODO parse code to get those little icon thingies.\n return text\n\n\ndef get_temp(parsed):\n try:\n condition = parsed.entries[0]['yweather_condition']\n temp = int(condition['temp'])\n except (KeyError, ValueError):\n return 'unknown'\n f = round((temp * 1.8) + 32, 2)\n return (u'%d\\u00B0C (%d\\u00B0F)' % (temp, f))\n\n\ndef get_humidity(parsed):\n try:\n humidity = parsed['feed']['yweather_atmosphere']['humidity']\n except (KeyError, ValueError):\n return 'unknown'\n return \"Humidity: %s%%\" % humidity\n\n\ndef get_wind(parsed):\n try:\n wind_data = parsed['feed']['yweather_wind']\n kph = float(wind_data['speed'])\n m_s = float(round(kph / 3.6, 1))\n speed = int(round(kph / 1.852, 0))\n degrees = int(wind_data['direction'])\n except (KeyError, ValueError):\n return 'unknown'\n\n if speed < 1:\n description = 'Calm'\n elif speed < 4:\n description = 'Light air'\n elif speed < 7:\n description = 'Light breeze'\n elif speed < 11:\n description = 'Gentle breeze'\n elif speed < 16:\n description = 'Moderate breeze'\n elif speed < 22:\n description = 'Fresh breeze'\n elif speed < 28:\n description = 'Strong breeze'\n elif speed < 34:\n description = 'Near gale'\n elif speed < 41:\n description = 'Gale'\n elif speed < 48:\n description = 'Strong gale'\n elif speed < 56:\n description = 'Storm'\n elif speed < 64:\n description = 'Violent storm'\n else:\n description = 'Hurricane'\n\n if (degrees <= 22.5) or (degrees > 337.5):\n degrees = u'\\u2193'\n elif (degrees > 22.5) and (degrees <= 67.5):\n degrees = u'\\u2199'\n elif (degrees > 67.5) and (degrees <= 112.5):\n degrees = u'\\u2190'\n elif (degrees > 112.5) and (degrees <= 157.5):\n degrees = u'\\u2196'\n elif (degrees > 157.5) and (degrees <= 202.5):\n degrees = u'\\u2191'\n elif (degrees > 202.5) and (degrees <= 247.5):\n degrees = u'\\u2197'\n elif (degrees > 247.5) and (degrees <= 292.5):\n degrees = u'\\u2192'\n elif (degrees > 292.5) and (degrees <= 337.5):\n degrees = u'\\u2198'\n\n return description + ' ' + str(m_s) + 'm/s (' + degrees + ')'\n\n\n@commands('weather', 'wea')\n@example('.weather London')\ndef weather(bot, trigger):\n \"\"\".weather location - Show the weather at the given location.\"\"\"\n\n location = trigger.group(2)\n woeid = ''\n if not location:\n woeid = bot.db.get_nick_value(trigger.nick, 'woeid')\n if not woeid:\n return bot.msg(trigger.sender, \"I don't know where you live. \" +\n 'Give me a location, like .weather London, or tell me where you live by saying .setlocation London, for example.')\n else:\n location = location.strip()\n woeid = bot.db.get_nick_value(location, 'woeid')\n if woeid is None:\n first_result = woeid_search(location)\n if first_result is not None:\n woeid = first_result.get('woeid')\n\n if not woeid:\n return bot.reply(\"I don't know where that is.\")\n\n query = web.urlencode({'w': woeid, 'u': 'c'})\n url = 'http://weather.yahooapis.com/forecastrss?' 
+ query\n parsed = feedparser.parse(url)\n location = parsed['feed']['title']\n\n cover = get_cover(parsed)\n temp = get_temp(parsed)\n humidity = get_humidity(parsed)\n wind = get_wind(parsed)\n bot.say(u'%s: %s, %s, %s, %s' % (location, cover, temp, humidity, wind))\n\n\n@commands('setlocation', 'setwoeid')\n@example('.setlocation Columbus, OH')\ndef update_woeid(bot, trigger):\n \"\"\"Set your default weather location.\"\"\"\n if not trigger.group(2):\n bot.reply('Give me a location, like \"Washington, DC\" or \"London\".')\n return NOLIMIT\n\n first_result = woeid_search(trigger.group(2))\n if first_result is None:\n return bot.reply(\"I don't know where that is.\")\n\n woeid = first_result.get('woeid')\n\n bot.db.set_nick_value(trigger.nick, 'woeid', woeid)\n\n neighborhood = first_result.get('neighborhood').text or ''\n if neighborhood:\n neighborhood += ','\n city = first_result.get('city') or ''\n state = first_result.get('state') or ''\n country = first_result.get('country') or ''\n uzip = first_result.get('uzip') or ''\n bot.reply('I now have you at WOEID %s (%s %s, %s, %s %s.)' %\n (woeid, neighborhood, city, state, country, uzip))\n", "path": "sopel/modules/weather.py"}], "after_files": [{"content": "# coding=utf8\n\"\"\"\nweather.py - Sopel Yahoo! Weather Module\nCopyright 2008, Sean B. Palmer, inamidst.com\nCopyright 2012, Edward Powell, embolalia.net\nLicensed under the Eiffel Forum License 2.\n\nhttp://sopel.chat\n\"\"\"\nfrom __future__ import unicode_literals\n\nfrom sopel import web\nfrom sopel.module import commands, example, NOLIMIT\n\nimport xmltodict\n\n\ndef woeid_search(query):\n \"\"\"\n Find the first Where On Earth ID for the given query. Result is the etree\n node for the result, so that location data can still be retrieved. Returns\n None if there is no result, or the woeid field is empty.\n \"\"\"\n query = 'q=select * from geo.placefinder where text=\"%s\"' % query\n body = web.get('http://query.yahooapis.com/v1/public/yql?' 
+ query,\n dont_decode=True)\n parsed = xmltodict.parse(body).get('query')\n results = parsed.get('results')\n if results is None or results.get('Result') is None:\n return None\n if type(results.get('Result')) is list:\n return results.get('Result')[0]\n return results.get('Result')\n\n\ndef get_cover(parsed):\n try:\n condition = parsed['channel']['item']['yweather:condition']\n except KeyError:\n return 'unknown'\n text = condition['@text']\n # code = int(condition['code'])\n # TODO parse code to get those little icon thingies.\n return text\n\n\ndef get_temp(parsed):\n try:\n condition = parsed['channel']['item']['yweather:condition']\n temp = int(condition['@temp'])\n except (KeyError, ValueError):\n return 'unknown'\n f = round((temp * 1.8) + 32, 2)\n return (u'%d\\u00B0C (%d\\u00B0F)' % (temp, f))\n\n\ndef get_humidity(parsed):\n try:\n humidity = parsed['channel']['yweather:atmosphere']['@humidity']\n except (KeyError, ValueError):\n return 'unknown'\n return \"Humidity: %s%%\" % humidity\n\n\ndef get_wind(parsed):\n try:\n wind_data = parsed['channel']['yweather:wind']\n kph = float(wind_data['@speed'])\n m_s = float(round(kph / 3.6, 1))\n speed = int(round(kph / 1.852, 0))\n degrees = int(wind_data['@direction'])\n except (KeyError, ValueError):\n return 'unknown'\n\n if speed < 1:\n description = 'Calm'\n elif speed < 4:\n description = 'Light air'\n elif speed < 7:\n description = 'Light breeze'\n elif speed < 11:\n description = 'Gentle breeze'\n elif speed < 16:\n description = 'Moderate breeze'\n elif speed < 22:\n description = 'Fresh breeze'\n elif speed < 28:\n description = 'Strong breeze'\n elif speed < 34:\n description = 'Near gale'\n elif speed < 41:\n description = 'Gale'\n elif speed < 48:\n description = 'Strong gale'\n elif speed < 56:\n description = 'Storm'\n elif speed < 64:\n description = 'Violent storm'\n else:\n description = 'Hurricane'\n\n if (degrees <= 22.5) or (degrees > 337.5):\n degrees = u'\\u2193'\n elif (degrees > 22.5) and (degrees <= 67.5):\n degrees = u'\\u2199'\n elif (degrees > 67.5) and (degrees <= 112.5):\n degrees = u'\\u2190'\n elif (degrees > 112.5) and (degrees <= 157.5):\n degrees = u'\\u2196'\n elif (degrees > 157.5) and (degrees <= 202.5):\n degrees = u'\\u2191'\n elif (degrees > 202.5) and (degrees <= 247.5):\n degrees = u'\\u2197'\n elif (degrees > 247.5) and (degrees <= 292.5):\n degrees = u'\\u2192'\n elif (degrees > 292.5) and (degrees <= 337.5):\n degrees = u'\\u2198'\n\n return description + ' ' + str(m_s) + 'm/s (' + degrees + ')'\n\n\n@commands('weather', 'wea')\n@example('.weather London')\ndef weather(bot, trigger):\n \"\"\".weather location - Show the weather at the given location.\"\"\"\n\n location = trigger.group(2)\n woeid = ''\n if not location:\n woeid = bot.db.get_nick_value(trigger.nick, 'woeid')\n if not woeid:\n return bot.msg(trigger.sender, \"I don't know where you live. \" +\n 'Give me a location, like .weather London, or tell me where you live by saying .setlocation London, for example.')\n else:\n location = location.strip()\n woeid = bot.db.get_nick_value(location, 'woeid')\n if woeid is None:\n first_result = woeid_search(location)\n if first_result is not None:\n woeid = first_result.get('woeid')\n\n if not woeid:\n return bot.reply(\"I don't know where that is.\")\n\n query = web.urlencode({'w': woeid, 'u': 'c'})\n raw = web.get('http://weather.yahooapis.com/forecastrss?' 
+ query, \n dont_decode=True)\n parsed = xmltodict.parse(raw).get('rss')\n location = parsed.get('channel').get('title')\n\n cover = get_cover(parsed)\n temp = get_temp(parsed)\n humidity = get_humidity(parsed)\n wind = get_wind(parsed)\n bot.say(u'%s: %s, %s, %s, %s' % (location, cover, temp, humidity, wind))\n\n\n@commands('setlocation', 'setwoeid')\n@example('.setlocation Columbus, OH')\ndef update_woeid(bot, trigger):\n \"\"\"Set your default weather location.\"\"\"\n if not trigger.group(2):\n bot.reply('Give me a location, like \"Washington, DC\" or \"London\".')\n return NOLIMIT\n\n first_result = woeid_search(trigger.group(2))\n if first_result is None:\n return bot.reply(\"I don't know where that is.\")\n\n woeid = first_result.get('woeid')\n\n bot.db.set_nick_value(trigger.nick, 'woeid', woeid)\n\n neighborhood = first_result.get('neighborhood') or ''\n if neighborhood:\n neighborhood += ','\n city = first_result.get('city') or ''\n state = first_result.get('state') or ''\n country = first_result.get('country') or ''\n uzip = first_result.get('uzip') or ''\n bot.reply('I now have you at WOEID %s (%s %s, %s, %s %s.)' %\n (woeid, neighborhood, city, state, country, uzip))\n", "path": "sopel/modules/weather.py"}]} | 2,387 | 748 |
gh_patches_debug_8616 | rasdani/github-patches | git_diff | googleapis__google-api-python-client-1271 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove duplicate docs generation
In `synth.py` we invoke a `nox` session to generate the docs [here](https://github.com/googleapis/google-api-python-client/blob/master/synth.py#L36). The same Python script now runs as part of the GitHub Action in #1187, so we should remove the `docs` session from `noxfile.py` and the call to it in `synth.py`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `noxfile.py`
Content:
```
1
2 # Copyright 2020 Google LLC
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import sys
17
18 import nox
19
20 test_dependencies = [
21 "django>=2.0.0",
22 "google-auth",
23 "google-auth-httplib2",
24 "mox",
25 "parameterized",
26 "pyopenssl",
27 "pytest",
28 "pytest-cov",
29 "webtest",
30 "coverage",
31 "unittest2",
32 "mock",
33 ]
34
35
36 @nox.session(python=["3.7"])
37 def lint(session):
38 session.install("flake8")
39 session.run(
40 "flake8",
41 "googleapiclient",
42 "tests",
43 "--count",
44 "--select=E9,F63,F7,F82",
45 "--show-source",
46 "--statistics",
47 )
48
49
50 @nox.session(python=["3.6", "3.7", "3.8", "3.9"])
51 @nox.parametrize(
52 "oauth2client",
53 [
54 "oauth2client<2dev",
55 "oauth2client>=2,<=3dev",
56 "oauth2client>=3,<=4dev",
57 "oauth2client>=4,<=5dev",
58 ],
59 )
60 def unit(session, oauth2client):
61 session.install(*test_dependencies)
62 session.install(oauth2client)
63 session.install('.')
64
65 # Run py.test against the unit tests.
66 session.run(
67 "py.test",
68 "--quiet",
69 "--cov=googleapiclient",
70 "--cov=tests",
71 "--cov-append",
72 "--cov-config=.coveragerc",
73 "--cov-report=",
74 "--cov-fail-under=85",
75 "tests",
76 *session.posargs,
77 )
78
79
80 @nox.session(python="3.6")
81 def docs(session):
82 session.install('.')
83 session.run("python", "describe.py")
```
Path: `synth.py`
Content:
```
1 # Copyright 2020 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import synthtool as s
16 from synthtool import gcp
17
18
19 common = gcp.CommonTemplates()
20
21 # ----------------------------------------------------------------------------
22 # Add templated files
23 # ----------------------------------------------------------------------------
24 templated_files = common.py_library()
25
26 # Copy kokoro configs.
27 # Docs are excluded as repo docs cannot currently be generated using sphinx.
28 s.move(templated_files / '.kokoro', excludes=['**/docs/*', 'publish-docs.sh'])
29
30 # Also move issue templates
31 s.move(templated_files / '.github')
32
33 # ----------------------------------------------------------------------------
34 # Generate docs
35 # ----------------------------------------------------------------------------
36 s.shell.run(["nox", "-s", "docs"], hide_output=False)
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -75,9 +75,3 @@
"tests",
*session.posargs,
)
-
-
[email protected](python="3.6")
-def docs(session):
- session.install('.')
- session.run("python", "describe.py")
\ No newline at end of file
diff --git a/synth.py b/synth.py
--- a/synth.py
+++ b/synth.py
@@ -29,8 +29,3 @@
# Also move issue templates
s.move(templated_files / '.github')
-
-# ----------------------------------------------------------------------------
-# Generate docs
-# ----------------------------------------------------------------------------
-s.shell.run(["nox", "-s", "docs"], hide_output=False)
| {"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -75,9 +75,3 @@\n \"tests\",\n *session.posargs,\n )\n-\n-\[email protected](python=\"3.6\")\n-def docs(session):\n- session.install('.')\n- session.run(\"python\", \"describe.py\")\n\\ No newline at end of file\ndiff --git a/synth.py b/synth.py\n--- a/synth.py\n+++ b/synth.py\n@@ -29,8 +29,3 @@\n \n # Also move issue templates\n s.move(templated_files / '.github')\n-\n-# ----------------------------------------------------------------------------\n-# Generate docs\n-# ----------------------------------------------------------------------------\n-s.shell.run([\"nox\", \"-s\", \"docs\"], hide_output=False)\n", "issue": "Remove duplicate docs generation\nIn `synth.py` we have a `nox` session to generate the docs [here](https://github.com/googleapis/google-api-python-client/blob/master/synth.py#L36). The same python script is running as part of the Github action in #1187, so we should remove the `docs` session from `synth.py` and `noxfile.py`.\n", "before_files": [{"content": "\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sys\n\nimport nox\n\ntest_dependencies = [\n \"django>=2.0.0\",\n \"google-auth\",\n \"google-auth-httplib2\",\n \"mox\",\n \"parameterized\",\n \"pyopenssl\",\n \"pytest\",\n \"pytest-cov\",\n \"webtest\",\n \"coverage\",\n \"unittest2\",\n \"mock\",\n]\n\n\[email protected](python=[\"3.7\"])\ndef lint(session):\n session.install(\"flake8\")\n session.run(\n \"flake8\",\n \"googleapiclient\",\n \"tests\",\n \"--count\",\n \"--select=E9,F63,F7,F82\",\n \"--show-source\",\n \"--statistics\",\n )\n\n\[email protected](python=[\"3.6\", \"3.7\", \"3.8\", \"3.9\"])\[email protected](\n \"oauth2client\",\n [\n \"oauth2client<2dev\",\n \"oauth2client>=2,<=3dev\",\n \"oauth2client>=3,<=4dev\",\n \"oauth2client>=4,<=5dev\",\n ],\n)\ndef unit(session, oauth2client):\n session.install(*test_dependencies)\n session.install(oauth2client)\n session.install('.')\n\n # Run py.test against the unit tests.\n session.run(\n \"py.test\",\n \"--quiet\",\n \"--cov=googleapiclient\",\n \"--cov=tests\",\n \"--cov-append\",\n \"--cov-config=.coveragerc\",\n \"--cov-report=\",\n \"--cov-fail-under=85\",\n \"tests\",\n *session.posargs,\n )\n\n\[email protected](python=\"3.6\")\ndef docs(session):\n session.install('.')\n session.run(\"python\", \"describe.py\")", "path": "noxfile.py"}, {"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the 
License.\n\nimport synthtool as s\nfrom synthtool import gcp\n\n\ncommon = gcp.CommonTemplates()\n\n# ----------------------------------------------------------------------------\n# Add templated files\n# ----------------------------------------------------------------------------\ntemplated_files = common.py_library()\n\n# Copy kokoro configs.\n# Docs are excluded as repo docs cannot currently be generated using sphinx.\ns.move(templated_files / '.kokoro', excludes=['**/docs/*', 'publish-docs.sh'])\n\n# Also move issue templates\ns.move(templated_files / '.github')\n\n# ----------------------------------------------------------------------------\n# Generate docs\n# ----------------------------------------------------------------------------\ns.shell.run([\"nox\", \"-s\", \"docs\"], hide_output=False)\n", "path": "synth.py"}], "after_files": [{"content": "\n# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sys\n\nimport nox\n\ntest_dependencies = [\n \"django>=2.0.0\",\n \"google-auth\",\n \"google-auth-httplib2\",\n \"mox\",\n \"parameterized\",\n \"pyopenssl\",\n \"pytest\",\n \"pytest-cov\",\n \"webtest\",\n \"coverage\",\n \"unittest2\",\n \"mock\",\n]\n\n\[email protected](python=[\"3.7\"])\ndef lint(session):\n session.install(\"flake8\")\n session.run(\n \"flake8\",\n \"googleapiclient\",\n \"tests\",\n \"--count\",\n \"--select=E9,F63,F7,F82\",\n \"--show-source\",\n \"--statistics\",\n )\n\n\[email protected](python=[\"3.6\", \"3.7\", \"3.8\", \"3.9\"])\[email protected](\n \"oauth2client\",\n [\n \"oauth2client<2dev\",\n \"oauth2client>=2,<=3dev\",\n \"oauth2client>=3,<=4dev\",\n \"oauth2client>=4,<=5dev\",\n ],\n)\ndef unit(session, oauth2client):\n session.install(*test_dependencies)\n session.install(oauth2client)\n session.install('.')\n\n # Run py.test against the unit tests.\n session.run(\n \"py.test\",\n \"--quiet\",\n \"--cov=googleapiclient\",\n \"--cov=tests\",\n \"--cov-append\",\n \"--cov-config=.coveragerc\",\n \"--cov-report=\",\n \"--cov-fail-under=85\",\n \"tests\",\n *session.posargs,\n )\n", "path": "noxfile.py"}, {"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport synthtool as s\nfrom synthtool import gcp\n\n\ncommon = gcp.CommonTemplates()\n\n# ----------------------------------------------------------------------------\n# Add templated files\n# ----------------------------------------------------------------------------\ntemplated_files = common.py_library()\n\n# Copy kokoro 
configs.\n# Docs are excluded as repo docs cannot currently be generated using sphinx.\ns.move(templated_files / '.kokoro', excludes=['**/docs/*', 'publish-docs.sh'])\n\n# Also move issue templates\ns.move(templated_files / '.github')\n", "path": "synth.py"}]} | 1,347 | 174 |
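Note on the record above: both deleted code paths ultimately just installed the package and ran `describe.py`, so the duplication the issue describes boils down to two wrappers around the same two commands. The snippet below is a hypothetical, illustration-only equivalent of that single remaining docs step; the actual GitHub Action from PR #1187 is not part of this record, so its contents are assumed here rather than quoted.

```python
# Hypothetical illustration only: one direct invocation of the doc generator,
# replacing both the removed nox "docs" session and the synth.py shell hook.
# Assumes describe.py lives at the repository root, as the removed session implied.
import subprocess
import sys


def generate_docs() -> int:
    """Do what `nox -s docs` wrapped: install the package, then run describe.py once."""
    subprocess.run([sys.executable, "-m", "pip", "install", "."], check=True)
    return subprocess.run([sys.executable, "describe.py"], check=False).returncode


if __name__ == "__main__":
    raise SystemExit(generate_docs())
```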
gh_patches_debug_9906 | rasdani/github-patches | git_diff | freedomofpress__securedrop-4927 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[1.1.0-rc4] "Unable to create virtualenv. Check network settings and try again"
(Tested on a Tails 3.16 Admin Workstation by checking out 1.1.0-rc4 tag, without updating my servers.)
As expected, running `securedrop-admin` commands triggered the "run setup" step. However, the `securedrop-admin setup` step itself did not complete successfully; it went pretty far along but finally failed with this error:
"Unable to create virtualenv. Check network settings and try again"
Tor seems to be working fine. Possibly intermittent issues but good to warn users about and have mitigation instructions if it is likely to arise during updates.
[1.1.0-rc4] "Unable to create virtualenv. Check network settings and try again"
(Tested on a Tails 3.16 Admin Workstation by checking out 1.1.0-rc4 tag, without updating my servers.)
As expected, running `securedrop-admin` commands triggered the "run setup" step. However, the `securedrop-admin setup` step itself did not complete successfully; it went pretty far along but finally failed with this error:
"Unable to create virtualenv. Check network settings and try again"
Tor seems to be working fine. Possibly intermittent issues but good to warn users about and have mitigation instructions if it is likely to arise during updates.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `admin/bootstrap.py`
Content:
```
1 # -*- mode: python; coding: utf-8 -*-
2 #
3 # Copyright (C) 2013-2018 Freedom of the Press Foundation & al
4 # Copyright (C) 2018 Loic Dachary <[email protected]>
5 #
6 # This program is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with this program. If not, see <http://www.gnu.org/licenses/>.
18 #
19
20 import argparse
21 import logging
22 import os
23 import shutil
24 import subprocess
25 import sys
26
27 sdlog = logging.getLogger(__name__)
28
29 DIR = os.path.dirname(os.path.realpath(__file__))
30 VENV_DIR = os.path.join(DIR, ".venv3")
31
32
33 def setup_logger(verbose=False):
34 """ Configure logging handler """
35 # Set default level on parent
36 sdlog.setLevel(logging.DEBUG)
37 level = logging.DEBUG if verbose else logging.INFO
38
39 stdout = logging.StreamHandler(sys.stdout)
40 stdout.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
41 stdout.setLevel(level)
42 sdlog.addHandler(stdout)
43
44
45 def run_command(command):
46 """
47 Wrapper function to display stdout for running command,
48 similar to how shelling out in a Bash script displays rolling output.
49
50 Yields a list of the stdout from the `command`, and raises a
51 CalledProcessError if `command` returns non-zero.
52 """
53 popen = subprocess.Popen(command,
54 stdout=subprocess.PIPE,
55 stderr=subprocess.STDOUT)
56 for stdout_line in iter(popen.stdout.readline, b""):
57 yield stdout_line
58 popen.stdout.close()
59 return_code = popen.wait()
60 if return_code:
61 raise subprocess.CalledProcessError(return_code, command)
62
63
64 def is_tails():
65 try:
66 id = subprocess.check_output('lsb_release --id --short',
67 shell=True).strip()
68 except subprocess.CalledProcessError:
69 id = None
70
71 # dirty hack to unreliably detect Tails 4.0~beta2
72 if id == b'Debian':
73 if os.uname()[1] == 'amnesia':
74 id = 'Tails'
75
76 return id == 'Tails'
77
78
79 def clean_up_tails3_venv(virtualenv_dir=VENV_DIR):
80 """
81 Tails 3.x, based on debian stretch uses libpython3.5, whereas Tails 4.x is
82 based on Debian Buster and uses libpython3.7. This means that the Tails 3.x
83 virtualenv will not work under Tails 4.x, and will need to be destroyed and
84 rebuilt. We can detect if the version of libpython is 3.5 in the
85 admin/.venv3/ folder, and delete it if that's the case. This will ensure a
86 smooth upgrade from Tails 3.x to Tails 4.x.
87 """
88 if is_tails():
89 try:
90 dist = subprocess.check_output('lsb_release --codename --short',
91 shell=True).strip()
92 except subprocess.CalledProcessError:
93 dist = None
94
95 # tails4 is based on buster
96 if dist == b'buster':
97 python_lib_path = os.path.join(virtualenv_dir, "lib/python3.5")
98 if os.path.exists(os.path.join(python_lib_path)):
99 sdlog.info(
100 "Tails 3 Python 3 virtualenv detected. "
101 "Removing it."
102 )
103 shutil.rmtree(virtualenv_dir)
104 sdlog.info("Tails 3 Python 3 virtualenv deleted.")
105
106
107 def checkenv(args):
108 clean_up_tails3_venv(VENV_DIR)
109 if not os.path.exists(os.path.join(VENV_DIR, "bin/activate")):
110 sdlog.error('Please run "securedrop-admin setup".')
111 sys.exit(1)
112
113
114 def maybe_torify():
115 if is_tails():
116 return ['torify']
117 else:
118 return []
119
120
121 def install_apt_dependencies(args):
122 """
123 Install apt dependencies in Tails. In order to install Ansible in
124 a virtualenv, first there are a number of Python prerequisites.
125 """
126 sdlog.info("Installing SecureDrop Admin dependencies")
127 sdlog.info(("You'll be prompted for the temporary Tails admin password,"
128 " which was set on Tails login screen"))
129
130 apt_command = ['sudo', 'su', '-c',
131 "apt-get update && \
132 apt-get -q -o=Dpkg::Use-Pty=0 install -y \
133 python3-virtualenv \
134 python3-yaml \
135 python3-pip \
136 ccontrol \
137 virtualenv \
138 libffi-dev \
139 libssl-dev \
140 libpython3-dev",
141 ]
142
143 try:
144 # Print command results in real-time, to keep Admin apprised
145 # of progress during long-running command.
146 for output_line in run_command(apt_command):
147 print(output_line.decode('utf-8').rstrip())
148 except subprocess.CalledProcessError:
149 # Tails supports apt persistence, which was used by SecureDrop
150 # under Tails 2.x. If updates are being applied, don't try to pile
151 # on with more apt requests.
152 sdlog.error(("Failed to install apt dependencies. Check network"
153 " connection and try again."))
154 raise
155
156
157 def envsetup(args):
158 """Installs Admin tooling required for managing SecureDrop. Specifically:
159
160 * updates apt-cache
161 * installs apt packages for Python virtualenv
162 * creates virtualenv
163 * installs pip packages inside virtualenv
164
165 The virtualenv is created within the Persistence volume in Tails, so that
166 Ansible is available to the Admin on subsequent boots without requiring
167 installation of packages again.
168 """
169 # clean up tails 3.x venv when migrating to tails 4.x
170 clean_up_tails3_venv(VENV_DIR)
171
172 # virtualenv doesnt exist? Install dependencies and create
173 if not os.path.exists(VENV_DIR):
174
175 install_apt_dependencies(args)
176
177 # Technically you can create a virtualenv from within python
178 # but pip can only be run over tor on tails, and debugging that
179 # along with instaling a third-party dependency is not worth
180 # the effort here.
181 sdlog.info("Setting up virtualenv")
182 try:
183 sdlog.debug(subprocess.check_output(
184 maybe_torify() + ['virtualenv', '--python=python3', VENV_DIR],
185 stderr=subprocess.STDOUT))
186 except subprocess.CalledProcessError as e:
187 sdlog.debug(e.output)
188 sdlog.error(("Unable to create virtualenv. Check network settings"
189 " and try again."))
190 raise
191 else:
192 sdlog.info("Virtualenv already exists, not creating")
193
194 install_pip_dependencies(args)
195 if os.path.exists(os.path.join(DIR, 'setup.py')):
196 install_pip_self(args)
197
198 sdlog.info("Finished installing SecureDrop dependencies")
199
200
201 def install_pip_self(args):
202 pip_install_cmd = [
203 os.path.join(VENV_DIR, 'bin', 'pip3'),
204 'install', '-e', DIR
205 ]
206 try:
207 subprocess.check_output(maybe_torify() + pip_install_cmd,
208 stderr=subprocess.STDOUT)
209 except subprocess.CalledProcessError as e:
210 sdlog.debug(e.output)
211 sdlog.error("Unable to install self, run with -v for more information")
212 raise
213
214
215 def install_pip_dependencies(args, pip_install_cmd=[
216 os.path.join(VENV_DIR, 'bin', 'pip3'),
217 'install',
218 # Specify requirements file.
219 '-r', os.path.join(DIR, 'requirements.txt'),
220 '--require-hashes',
221 # Make sure to upgrade packages only if necessary.
222 '-U', '--upgrade-strategy', 'only-if-needed',
223 ]):
224 """
225 Install Python dependencies via pip into virtualenv.
226 """
227
228 sdlog.info("Checking Python dependencies for securedrop-admin")
229 try:
230 pip_output = subprocess.check_output(maybe_torify() + pip_install_cmd,
231 stderr=subprocess.STDOUT)
232 except subprocess.CalledProcessError as e:
233 sdlog.debug(e.output)
234 sdlog.error(("Failed to install pip dependencies. Check network"
235 " connection and try again."))
236 raise
237
238 sdlog.debug(pip_output)
239 if "Successfully installed" in str(pip_output):
240 sdlog.info("Python dependencies for securedrop-admin upgraded")
241 else:
242 sdlog.info("Python dependencies for securedrop-admin are up-to-date")
243
244
245 def parse_argv(argv):
246 parser = argparse.ArgumentParser()
247 parser.add_argument('-v', action='store_true', default=False,
248 help="Increase verbosity on output")
249 parser.set_defaults(func=envsetup)
250
251 subparsers = parser.add_subparsers()
252
253 envsetup_parser = subparsers.add_parser(
254 'envsetup',
255 help='Set up the admin virtualenv.'
256 )
257 envsetup_parser.set_defaults(func=envsetup)
258
259 checkenv_parser = subparsers.add_parser(
260 'checkenv',
261 help='Check that the admin virtualenv is properly set up.'
262 )
263 checkenv_parser.set_defaults(func=checkenv)
264
265 return parser.parse_args(argv)
266
267
268 if __name__ == "__main__":
269 args = parse_argv(sys.argv[1:])
270 setup_logger(args.v)
271
272 try:
273 args.func(args)
274 except Exception:
275 sys.exit(1)
276 else:
277 sys.exit(0)
278
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/admin/bootstrap.py b/admin/bootstrap.py
--- a/admin/bootstrap.py
+++ b/admin/bootstrap.py
@@ -64,12 +64,12 @@
def is_tails():
try:
id = subprocess.check_output('lsb_release --id --short',
- shell=True).strip()
+ shell=True).decode('utf-8').strip()
except subprocess.CalledProcessError:
id = None
# dirty hack to unreliably detect Tails 4.0~beta2
- if id == b'Debian':
+ if id == 'Debian':
if os.uname()[1] == 'amnesia':
id = 'Tails'
| {"golden_diff": "diff --git a/admin/bootstrap.py b/admin/bootstrap.py\n--- a/admin/bootstrap.py\n+++ b/admin/bootstrap.py\n@@ -64,12 +64,12 @@\n def is_tails():\n try:\n id = subprocess.check_output('lsb_release --id --short',\n- shell=True).strip()\n+ shell=True).decode('utf-8').strip()\n except subprocess.CalledProcessError:\n id = None\n \n # dirty hack to unreliably detect Tails 4.0~beta2\n- if id == b'Debian':\n+ if id == 'Debian':\n if os.uname()[1] == 'amnesia':\n id = 'Tails'\n", "issue": "[1.1.0-rc4] \"Unable to create virtualenv. Check network settings and try again\"\n(Tested on a Tails 3.16 Admin Workstation by checking out 1.1.0-rc4 tag, without updating my servers.)\r\n\r\nAs expected, running `securedrop-admin` commands triggered the \"run setup\" step. However, the `securedrop-admin setup` step itself did not complete successfully; it went pretty far along but finally failed with this error:\r\n\r\n\"Unable to create virtualenv. Check network settings and try again\"\r\n\r\nTor seems to be working fine. Possibly intermittent issues but good to warn users about and have mitigation instructions if it is likely to arise during updates.\n[1.1.0-rc4] \"Unable to create virtualenv. Check network settings and try again\"\n(Tested on a Tails 3.16 Admin Workstation by checking out 1.1.0-rc4 tag, without updating my servers.)\r\n\r\nAs expected, running `securedrop-admin` commands triggered the \"run setup\" step. However, the `securedrop-admin setup` step itself did not complete successfully; it went pretty far along but finally failed with this error:\r\n\r\n\"Unable to create virtualenv. Check network settings and try again\"\r\n\r\nTor seems to be working fine. Possibly intermittent issues but good to warn users about and have mitigation instructions if it is likely to arise during updates.\n", "before_files": [{"content": "# -*- mode: python; coding: utf-8 -*-\n#\n# Copyright (C) 2013-2018 Freedom of the Press Foundation & al\n# Copyright (C) 2018 Loic Dachary <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n#\n\nimport argparse\nimport logging\nimport os\nimport shutil\nimport subprocess\nimport sys\n\nsdlog = logging.getLogger(__name__)\n\nDIR = os.path.dirname(os.path.realpath(__file__))\nVENV_DIR = os.path.join(DIR, \".venv3\")\n\n\ndef setup_logger(verbose=False):\n \"\"\" Configure logging handler \"\"\"\n # Set default level on parent\n sdlog.setLevel(logging.DEBUG)\n level = logging.DEBUG if verbose else logging.INFO\n\n stdout = logging.StreamHandler(sys.stdout)\n stdout.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))\n stdout.setLevel(level)\n sdlog.addHandler(stdout)\n\n\ndef run_command(command):\n \"\"\"\n Wrapper function to display stdout for running command,\n similar to how shelling out in a Bash script displays rolling output.\n\n Yields a list of the stdout from the `command`, and raises a\n CalledProcessError if `command` returns non-zero.\n \"\"\"\n popen = subprocess.Popen(command,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT)\n for stdout_line in iter(popen.stdout.readline, b\"\"):\n yield stdout_line\n popen.stdout.close()\n return_code = popen.wait()\n if return_code:\n raise subprocess.CalledProcessError(return_code, command)\n\n\ndef is_tails():\n try:\n id = subprocess.check_output('lsb_release --id --short',\n shell=True).strip()\n except subprocess.CalledProcessError:\n id = None\n\n # dirty hack to unreliably detect Tails 4.0~beta2\n if id == b'Debian':\n if os.uname()[1] == 'amnesia':\n id = 'Tails'\n\n return id == 'Tails'\n\n\ndef clean_up_tails3_venv(virtualenv_dir=VENV_DIR):\n \"\"\"\n Tails 3.x, based on debian stretch uses libpython3.5, whereas Tails 4.x is\n based on Debian Buster and uses libpython3.7. This means that the Tails 3.x\n virtualenv will not work under Tails 4.x, and will need to be destroyed and\n rebuilt. We can detect if the version of libpython is 3.5 in the\n admin/.venv3/ folder, and delete it if that's the case. This will ensure a\n smooth upgrade from Tails 3.x to Tails 4.x.\n \"\"\"\n if is_tails():\n try:\n dist = subprocess.check_output('lsb_release --codename --short',\n shell=True).strip()\n except subprocess.CalledProcessError:\n dist = None\n\n # tails4 is based on buster\n if dist == b'buster':\n python_lib_path = os.path.join(virtualenv_dir, \"lib/python3.5\")\n if os.path.exists(os.path.join(python_lib_path)):\n sdlog.info(\n \"Tails 3 Python 3 virtualenv detected. \"\n \"Removing it.\"\n )\n shutil.rmtree(virtualenv_dir)\n sdlog.info(\"Tails 3 Python 3 virtualenv deleted.\")\n\n\ndef checkenv(args):\n clean_up_tails3_venv(VENV_DIR)\n if not os.path.exists(os.path.join(VENV_DIR, \"bin/activate\")):\n sdlog.error('Please run \"securedrop-admin setup\".')\n sys.exit(1)\n\n\ndef maybe_torify():\n if is_tails():\n return ['torify']\n else:\n return []\n\n\ndef install_apt_dependencies(args):\n \"\"\"\n Install apt dependencies in Tails. 
In order to install Ansible in\n a virtualenv, first there are a number of Python prerequisites.\n \"\"\"\n sdlog.info(\"Installing SecureDrop Admin dependencies\")\n sdlog.info((\"You'll be prompted for the temporary Tails admin password,\"\n \" which was set on Tails login screen\"))\n\n apt_command = ['sudo', 'su', '-c',\n \"apt-get update && \\\n apt-get -q -o=Dpkg::Use-Pty=0 install -y \\\n python3-virtualenv \\\n python3-yaml \\\n python3-pip \\\n ccontrol \\\n virtualenv \\\n libffi-dev \\\n libssl-dev \\\n libpython3-dev\",\n ]\n\n try:\n # Print command results in real-time, to keep Admin apprised\n # of progress during long-running command.\n for output_line in run_command(apt_command):\n print(output_line.decode('utf-8').rstrip())\n except subprocess.CalledProcessError:\n # Tails supports apt persistence, which was used by SecureDrop\n # under Tails 2.x. If updates are being applied, don't try to pile\n # on with more apt requests.\n sdlog.error((\"Failed to install apt dependencies. Check network\"\n \" connection and try again.\"))\n raise\n\n\ndef envsetup(args):\n \"\"\"Installs Admin tooling required for managing SecureDrop. Specifically:\n\n * updates apt-cache\n * installs apt packages for Python virtualenv\n * creates virtualenv\n * installs pip packages inside virtualenv\n\n The virtualenv is created within the Persistence volume in Tails, so that\n Ansible is available to the Admin on subsequent boots without requiring\n installation of packages again.\n \"\"\"\n # clean up tails 3.x venv when migrating to tails 4.x\n clean_up_tails3_venv(VENV_DIR)\n\n # virtualenv doesnt exist? Install dependencies and create\n if not os.path.exists(VENV_DIR):\n\n install_apt_dependencies(args)\n\n # Technically you can create a virtualenv from within python\n # but pip can only be run over tor on tails, and debugging that\n # along with instaling a third-party dependency is not worth\n # the effort here.\n sdlog.info(\"Setting up virtualenv\")\n try:\n sdlog.debug(subprocess.check_output(\n maybe_torify() + ['virtualenv', '--python=python3', VENV_DIR],\n stderr=subprocess.STDOUT))\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error((\"Unable to create virtualenv. 
Check network settings\"\n \" and try again.\"))\n raise\n else:\n sdlog.info(\"Virtualenv already exists, not creating\")\n\n install_pip_dependencies(args)\n if os.path.exists(os.path.join(DIR, 'setup.py')):\n install_pip_self(args)\n\n sdlog.info(\"Finished installing SecureDrop dependencies\")\n\n\ndef install_pip_self(args):\n pip_install_cmd = [\n os.path.join(VENV_DIR, 'bin', 'pip3'),\n 'install', '-e', DIR\n ]\n try:\n subprocess.check_output(maybe_torify() + pip_install_cmd,\n stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error(\"Unable to install self, run with -v for more information\")\n raise\n\n\ndef install_pip_dependencies(args, pip_install_cmd=[\n os.path.join(VENV_DIR, 'bin', 'pip3'),\n 'install',\n # Specify requirements file.\n '-r', os.path.join(DIR, 'requirements.txt'),\n '--require-hashes',\n # Make sure to upgrade packages only if necessary.\n '-U', '--upgrade-strategy', 'only-if-needed',\n]):\n \"\"\"\n Install Python dependencies via pip into virtualenv.\n \"\"\"\n\n sdlog.info(\"Checking Python dependencies for securedrop-admin\")\n try:\n pip_output = subprocess.check_output(maybe_torify() + pip_install_cmd,\n stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error((\"Failed to install pip dependencies. Check network\"\n \" connection and try again.\"))\n raise\n\n sdlog.debug(pip_output)\n if \"Successfully installed\" in str(pip_output):\n sdlog.info(\"Python dependencies for securedrop-admin upgraded\")\n else:\n sdlog.info(\"Python dependencies for securedrop-admin are up-to-date\")\n\n\ndef parse_argv(argv):\n parser = argparse.ArgumentParser()\n parser.add_argument('-v', action='store_true', default=False,\n help=\"Increase verbosity on output\")\n parser.set_defaults(func=envsetup)\n\n subparsers = parser.add_subparsers()\n\n envsetup_parser = subparsers.add_parser(\n 'envsetup',\n help='Set up the admin virtualenv.'\n )\n envsetup_parser.set_defaults(func=envsetup)\n\n checkenv_parser = subparsers.add_parser(\n 'checkenv',\n help='Check that the admin virtualenv is properly set up.'\n )\n checkenv_parser.set_defaults(func=checkenv)\n\n return parser.parse_args(argv)\n\n\nif __name__ == \"__main__\":\n args = parse_argv(sys.argv[1:])\n setup_logger(args.v)\n\n try:\n args.func(args)\n except Exception:\n sys.exit(1)\n else:\n sys.exit(0)\n", "path": "admin/bootstrap.py"}], "after_files": [{"content": "# -*- mode: python; coding: utf-8 -*-\n#\n# Copyright (C) 2013-2018 Freedom of the Press Foundation & al\n# Copyright (C) 2018 Loic Dachary <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n#\n\nimport argparse\nimport logging\nimport os\nimport shutil\nimport subprocess\nimport sys\n\nsdlog = logging.getLogger(__name__)\n\nDIR = os.path.dirname(os.path.realpath(__file__))\nVENV_DIR = os.path.join(DIR, \".venv3\")\n\n\ndef setup_logger(verbose=False):\n \"\"\" Configure logging handler \"\"\"\n # Set default level on parent\n sdlog.setLevel(logging.DEBUG)\n level = logging.DEBUG if verbose else logging.INFO\n\n stdout = logging.StreamHandler(sys.stdout)\n stdout.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))\n stdout.setLevel(level)\n sdlog.addHandler(stdout)\n\n\ndef run_command(command):\n \"\"\"\n Wrapper function to display stdout for running command,\n similar to how shelling out in a Bash script displays rolling output.\n\n Yields a list of the stdout from the `command`, and raises a\n CalledProcessError if `command` returns non-zero.\n \"\"\"\n popen = subprocess.Popen(command,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT)\n for stdout_line in iter(popen.stdout.readline, b\"\"):\n yield stdout_line\n popen.stdout.close()\n return_code = popen.wait()\n if return_code:\n raise subprocess.CalledProcessError(return_code, command)\n\n\ndef is_tails():\n try:\n id = subprocess.check_output('lsb_release --id --short',\n shell=True).decode('utf-8').strip()\n except subprocess.CalledProcessError:\n id = None\n\n # dirty hack to unreliably detect Tails 4.0~beta2\n if id == 'Debian':\n if os.uname()[1] == 'amnesia':\n id = 'Tails'\n\n return id == 'Tails'\n\n\ndef clean_up_tails3_venv(virtualenv_dir=VENV_DIR):\n \"\"\"\n Tails 3.x, based on debian stretch uses libpython3.5, whereas Tails 4.x is\n based on Debian Buster and uses libpython3.7. This means that the Tails 3.x\n virtualenv will not work under Tails 4.x, and will need to be destroyed and\n rebuilt. We can detect if the version of libpython is 3.5 in the\n admin/.venv3/ folder, and delete it if that's the case. This will ensure a\n smooth upgrade from Tails 3.x to Tails 4.x.\n \"\"\"\n if is_tails():\n try:\n dist = subprocess.check_output('lsb_release --codename --short',\n shell=True).strip()\n except subprocess.CalledProcessError:\n dist = None\n\n # tails4 is based on buster\n if dist == b'buster':\n python_lib_path = os.path.join(virtualenv_dir, \"lib/python3.5\")\n if os.path.exists(os.path.join(python_lib_path)):\n sdlog.info(\n \"Tails 3 Python 3 virtualenv detected. \"\n \"Removing it.\"\n )\n shutil.rmtree(virtualenv_dir)\n sdlog.info(\"Tails 3 Python 3 virtualenv deleted.\")\n\n\ndef checkenv(args):\n clean_up_tails3_venv(VENV_DIR)\n if not os.path.exists(os.path.join(VENV_DIR, \"bin/activate\")):\n sdlog.error('Please run \"securedrop-admin setup\".')\n sys.exit(1)\n\n\ndef maybe_torify():\n if is_tails():\n return ['torify']\n else:\n return []\n\n\ndef install_apt_dependencies(args):\n \"\"\"\n Install apt dependencies in Tails. 
In order to install Ansible in\n a virtualenv, first there are a number of Python prerequisites.\n \"\"\"\n sdlog.info(\"Installing SecureDrop Admin dependencies\")\n sdlog.info((\"You'll be prompted for the temporary Tails admin password,\"\n \" which was set on Tails login screen\"))\n\n apt_command = ['sudo', 'su', '-c',\n \"apt-get update && \\\n apt-get -q -o=Dpkg::Use-Pty=0 install -y \\\n python3-virtualenv \\\n python3-yaml \\\n python3-pip \\\n ccontrol \\\n virtualenv \\\n libffi-dev \\\n libssl-dev \\\n libpython3-dev\",\n ]\n\n try:\n # Print command results in real-time, to keep Admin apprised\n # of progress during long-running command.\n for output_line in run_command(apt_command):\n print(output_line.decode('utf-8').rstrip())\n except subprocess.CalledProcessError:\n # Tails supports apt persistence, which was used by SecureDrop\n # under Tails 2.x. If updates are being applied, don't try to pile\n # on with more apt requests.\n sdlog.error((\"Failed to install apt dependencies. Check network\"\n \" connection and try again.\"))\n raise\n\n\ndef envsetup(args):\n \"\"\"Installs Admin tooling required for managing SecureDrop. Specifically:\n\n * updates apt-cache\n * installs apt packages for Python virtualenv\n * creates virtualenv\n * installs pip packages inside virtualenv\n\n The virtualenv is created within the Persistence volume in Tails, so that\n Ansible is available to the Admin on subsequent boots without requiring\n installation of packages again.\n \"\"\"\n # clean up tails 3.x venv when migrating to tails 4.x\n clean_up_tails3_venv(VENV_DIR)\n\n # virtualenv doesnt exist? Install dependencies and create\n if not os.path.exists(VENV_DIR):\n\n install_apt_dependencies(args)\n\n # Technically you can create a virtualenv from within python\n # but pip can only be run over tor on tails, and debugging that\n # along with instaling a third-party dependency is not worth\n # the effort here.\n sdlog.info(\"Setting up virtualenv\")\n try:\n sdlog.debug(subprocess.check_output(\n maybe_torify() + ['virtualenv', '--python=python3', VENV_DIR],\n stderr=subprocess.STDOUT))\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error((\"Unable to create virtualenv. 
Check network settings\"\n \" and try again.\"))\n raise\n else:\n sdlog.info(\"Virtualenv already exists, not creating\")\n\n install_pip_dependencies(args)\n if os.path.exists(os.path.join(DIR, 'setup.py')):\n install_pip_self(args)\n\n sdlog.info(\"Finished installing SecureDrop dependencies\")\n\n\ndef install_pip_self(args):\n pip_install_cmd = [\n os.path.join(VENV_DIR, 'bin', 'pip3'),\n 'install', '-e', DIR\n ]\n try:\n subprocess.check_output(maybe_torify() + pip_install_cmd,\n stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error(\"Unable to install self, run with -v for more information\")\n raise\n\n\ndef install_pip_dependencies(args, pip_install_cmd=[\n os.path.join(VENV_DIR, 'bin', 'pip3'),\n 'install',\n # Specify requirements file.\n '-r', os.path.join(DIR, 'requirements.txt'),\n '--require-hashes',\n # Make sure to upgrade packages only if necessary.\n '-U', '--upgrade-strategy', 'only-if-needed',\n]):\n \"\"\"\n Install Python dependencies via pip into virtualenv.\n \"\"\"\n\n sdlog.info(\"Checking Python dependencies for securedrop-admin\")\n try:\n pip_output = subprocess.check_output(maybe_torify() + pip_install_cmd,\n stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n sdlog.debug(e.output)\n sdlog.error((\"Failed to install pip dependencies. Check network\"\n \" connection and try again.\"))\n raise\n\n sdlog.debug(pip_output)\n if \"Successfully installed\" in str(pip_output):\n sdlog.info(\"Python dependencies for securedrop-admin upgraded\")\n else:\n sdlog.info(\"Python dependencies for securedrop-admin are up-to-date\")\n\n\ndef parse_argv(argv):\n parser = argparse.ArgumentParser()\n parser.add_argument('-v', action='store_true', default=False,\n help=\"Increase verbosity on output\")\n parser.set_defaults(func=envsetup)\n\n subparsers = parser.add_subparsers()\n\n envsetup_parser = subparsers.add_parser(\n 'envsetup',\n help='Set up the admin virtualenv.'\n )\n envsetup_parser.set_defaults(func=envsetup)\n\n checkenv_parser = subparsers.add_parser(\n 'checkenv',\n help='Check that the admin virtualenv is properly set up.'\n )\n checkenv_parser.set_defaults(func=checkenv)\n\n return parser.parse_args(argv)\n\n\nif __name__ == \"__main__\":\n args = parse_argv(sys.argv[1:])\n setup_logger(args.v)\n\n try:\n args.func(args)\n except Exception:\n sys.exit(1)\n else:\n sys.exit(0)\n", "path": "admin/bootstrap.py"}]} | 3,431 | 152 |
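Note on the record above: the golden diff looks unrelated to networking, but it appears to be the root cause of the reported error. On Python 3, `subprocess.check_output` returns bytes, so `is_tails()` ended up comparing `b'Tails'` against the string `'Tails'`, returned False, and `maybe_torify()` then dropped the `torify` prefix; on Tails, the untorified virtualenv/pip calls cannot reach the network and surface as "check network settings and try again". The sketch below is illustrative only (it assumes `lsb_release --id --short` prints `Tails` on Tails 3.x, as the diff implies) and is not part of the dataset record.

```python
# Minimal sketch of the failure mode fixed by the diff above (illustrative only).
import subprocess


def is_tails_buggy() -> bool:
    # check_output returns *bytes* on Python 3 ...
    distro_id = subprocess.check_output("lsb_release --id --short", shell=True).strip()
    # ... so b'Tails' == 'Tails' is always False and Tails is never detected.
    return distro_id == "Tails"


def is_tails_fixed() -> bool:
    # Decoding first makes it a str/str comparison, mirroring the golden diff.
    distro_id = subprocess.check_output(
        "lsb_release --id --short", shell=True
    ).decode("utf-8").strip()
    return distro_id == "Tails"


def maybe_torify(on_tails: bool) -> list:
    # With the buggy detection, this returns [] on Tails, so virtualenv/pip run
    # without torify and fail with what looks like a network error.
    return ["torify"] if on_tails else []
```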
gh_patches_debug_18679 | rasdani/github-patches | git_diff | python-discord__bot-1108 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reddit cog does not escape markdown in post titles
Discord markdown in reddit post titles is left unhandled and can sometimes break the links:

For the basic markdown passing it through d.py's `escape_markdown` should work, but from a quick look I haven't found a way to escape brackets in post titles, which breaks the text links. A replacement with similar unicode chars is an option
--- END ISSUE ---
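The fix direction sketched in the issue (escape basic markdown with discord.py's helper, then swap square brackets for look-alike unicode characters so the `[title](permalink)` text link is not cut short) could look roughly like the helper below. This is an illustration of the proposal only, not necessarily the exact change that was merged.

```python
# Sketch of the title-sanitising idea described in the issue (illustrative).
from discord.utils import escape_markdown


def safe_title(raw_title: str) -> str:
    """Escape Discord markdown and neutralise [] so embed text links stay intact."""
    escaped = escape_markdown(raw_title)
    # escape_markdown does not touch square brackets, which would terminate the
    # surrounding [title](link) early; replace them with unicode look-alikes.
    return escaped.replace("[", "⦋").replace("]", "⦌")
```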
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/cogs/reddit.py`
Content:
```
1 import asyncio
2 import logging
3 import random
4 import textwrap
5 from collections import namedtuple
6 from datetime import datetime, timedelta
7 from typing import List
8
9 from aiohttp import BasicAuth, ClientError
10 from discord import Colour, Embed, TextChannel
11 from discord.ext.commands import Cog, Context, group
12 from discord.ext.tasks import loop
13
14 from bot.bot import Bot
15 from bot.constants import Channels, ERROR_REPLIES, Emojis, Reddit as RedditConfig, STAFF_ROLES, Webhooks
16 from bot.converters import Subreddit
17 from bot.decorators import with_role
18 from bot.pagination import LinePaginator
19 from bot.utils.messages import sub_clyde
20
21 log = logging.getLogger(__name__)
22
23 AccessToken = namedtuple("AccessToken", ["token", "expires_at"])
24
25
26 class Reddit(Cog):
27 """Track subreddit posts and show detailed statistics about them."""
28
29 HEADERS = {"User-Agent": "python3:python-discord/bot:1.0.0 (by /u/PythonDiscord)"}
30 URL = "https://www.reddit.com"
31 OAUTH_URL = "https://oauth.reddit.com"
32 MAX_RETRIES = 3
33
34 def __init__(self, bot: Bot):
35 self.bot = bot
36
37 self.webhook = None
38 self.access_token = None
39 self.client_auth = BasicAuth(RedditConfig.client_id, RedditConfig.secret)
40
41 bot.loop.create_task(self.init_reddit_ready())
42 self.auto_poster_loop.start()
43
44 def cog_unload(self) -> None:
45 """Stop the loop task and revoke the access token when the cog is unloaded."""
46 self.auto_poster_loop.cancel()
47 if self.access_token and self.access_token.expires_at > datetime.utcnow():
48 asyncio.create_task(self.revoke_access_token())
49
50 async def init_reddit_ready(self) -> None:
51 """Sets the reddit webhook when the cog is loaded."""
52 await self.bot.wait_until_guild_available()
53 if not self.webhook:
54 self.webhook = await self.bot.fetch_webhook(Webhooks.reddit)
55
56 @property
57 def channel(self) -> TextChannel:
58 """Get the #reddit channel object from the bot's cache."""
59 return self.bot.get_channel(Channels.reddit)
60
61 async def get_access_token(self) -> None:
62 """
63 Get a Reddit API OAuth2 access token and assign it to self.access_token.
64
65 A token is valid for 1 hour. There will be MAX_RETRIES to get a token, after which the cog
66 will be unloaded and a ClientError raised if retrieval was still unsuccessful.
67 """
68 for i in range(1, self.MAX_RETRIES + 1):
69 response = await self.bot.http_session.post(
70 url=f"{self.URL}/api/v1/access_token",
71 headers=self.HEADERS,
72 auth=self.client_auth,
73 data={
74 "grant_type": "client_credentials",
75 "duration": "temporary"
76 }
77 )
78
79 if response.status == 200 and response.content_type == "application/json":
80 content = await response.json()
81 expiration = int(content["expires_in"]) - 60 # Subtract 1 minute for leeway.
82 self.access_token = AccessToken(
83 token=content["access_token"],
84 expires_at=datetime.utcnow() + timedelta(seconds=expiration)
85 )
86
87 log.debug(f"New token acquired; expires on UTC {self.access_token.expires_at}")
88 return
89 else:
90 log.debug(
91 f"Failed to get an access token: "
92 f"status {response.status} & content type {response.content_type}; "
93 f"retrying ({i}/{self.MAX_RETRIES})"
94 )
95
96 await asyncio.sleep(3)
97
98 self.bot.remove_cog(self.qualified_name)
99 raise ClientError("Authentication with the Reddit API failed. Unloading the cog.")
100
101 async def revoke_access_token(self) -> None:
102 """
103 Revoke the OAuth2 access token for the Reddit API.
104
105 For security reasons, it's good practice to revoke the token when it's no longer being used.
106 """
107 response = await self.bot.http_session.post(
108 url=f"{self.URL}/api/v1/revoke_token",
109 headers=self.HEADERS,
110 auth=self.client_auth,
111 data={
112 "token": self.access_token.token,
113 "token_type_hint": "access_token"
114 }
115 )
116
117 if response.status == 204 and response.content_type == "application/json":
118 self.access_token = None
119 else:
120 log.warning(f"Unable to revoke access token: status {response.status}.")
121
122 async def fetch_posts(self, route: str, *, amount: int = 25, params: dict = None) -> List[dict]:
123 """A helper method to fetch a certain amount of Reddit posts at a given route."""
124 # Reddit's JSON responses only provide 25 posts at most.
125 if not 25 >= amount > 0:
126 raise ValueError("Invalid amount of subreddit posts requested.")
127
128 # Renew the token if necessary.
129 if not self.access_token or self.access_token.expires_at < datetime.utcnow():
130 await self.get_access_token()
131
132 url = f"{self.OAUTH_URL}/{route}"
133 for _ in range(self.MAX_RETRIES):
134 response = await self.bot.http_session.get(
135 url=url,
136 headers={**self.HEADERS, "Authorization": f"bearer {self.access_token.token}"},
137 params=params
138 )
139 if response.status == 200 and response.content_type == 'application/json':
140 # Got appropriate response - process and return.
141 content = await response.json()
142 posts = content["data"]["children"]
143 return posts[:amount]
144
145 await asyncio.sleep(3)
146
147 log.debug(f"Invalid response from: {url} - status code {response.status}, mimetype {response.content_type}")
148 return list() # Failed to get appropriate response within allowed number of retries.
149
150 async def get_top_posts(self, subreddit: Subreddit, time: str = "all", amount: int = 5) -> Embed:
151 """
152 Get the top amount of posts for a given subreddit within a specified timeframe.
153
154 A time of "all" will get posts from all time, "day" will get top daily posts and "week" will get the top
155 weekly posts.
156
157 The amount should be between 0 and 25 as Reddit's JSON requests only provide 25 posts at most.
158 """
159 embed = Embed(description="")
160
161 posts = await self.fetch_posts(
162 route=f"{subreddit}/top",
163 amount=amount,
164 params={"t": time}
165 )
166
167 if not posts:
168 embed.title = random.choice(ERROR_REPLIES)
169 embed.colour = Colour.red()
170 embed.description = (
171 "Sorry! We couldn't find any posts from that subreddit. "
172 "If this problem persists, please let us know."
173 )
174
175 return embed
176
177 for post in posts:
178 data = post["data"]
179
180 text = data["selftext"]
181 if text:
182 text = textwrap.shorten(text, width=128, placeholder="...")
183 text += "\n" # Add newline to separate embed info
184
185 ups = data["ups"]
186 comments = data["num_comments"]
187 author = data["author"]
188
189 title = textwrap.shorten(data["title"], width=64, placeholder="...")
190 link = self.URL + data["permalink"]
191
192 embed.description += (
193 f"**[{title}]({link})**\n"
194 f"{text}"
195 f"{Emojis.upvotes} {ups} {Emojis.comments} {comments} {Emojis.user} {author}\n\n"
196 )
197
198 embed.colour = Colour.blurple()
199 return embed
200
201 @loop()
202 async def auto_poster_loop(self) -> None:
203 """Post the top 5 posts daily, and the top 5 posts weekly."""
204 # once we upgrade to d.py 1.3 this can be removed and the loop can use the `time=datetime.time.min` parameter
205 now = datetime.utcnow()
206 tomorrow = now + timedelta(days=1)
207 midnight_tomorrow = tomorrow.replace(hour=0, minute=0, second=0)
208 seconds_until = (midnight_tomorrow - now).total_seconds()
209
210 await asyncio.sleep(seconds_until)
211
212 await self.bot.wait_until_guild_available()
213 if not self.webhook:
214 await self.bot.fetch_webhook(Webhooks.reddit)
215
216 if datetime.utcnow().weekday() == 0:
217 await self.top_weekly_posts()
218 # if it's a monday send the top weekly posts
219
220 for subreddit in RedditConfig.subreddits:
221 top_posts = await self.get_top_posts(subreddit=subreddit, time="day")
222 username = sub_clyde(f"{subreddit} Top Daily Posts")
223 message = await self.webhook.send(username=username, embed=top_posts, wait=True)
224
225 if message.channel.is_news():
226 await message.publish()
227
228 async def top_weekly_posts(self) -> None:
229 """Post a summary of the top posts."""
230 for subreddit in RedditConfig.subreddits:
231 # Send and pin the new weekly posts.
232 top_posts = await self.get_top_posts(subreddit=subreddit, time="week")
233 username = sub_clyde(f"{subreddit} Top Weekly Posts")
234 message = await self.webhook.send(wait=True, username=username, embed=top_posts)
235
236 if subreddit.lower() == "r/python":
237 if not self.channel:
238 log.warning("Failed to get #reddit channel to remove pins in the weekly loop.")
239 return
240
241 # Remove the oldest pins so that only 12 remain at most.
242 pins = await self.channel.pins()
243
244 while len(pins) >= 12:
245 await pins[-1].unpin()
246 del pins[-1]
247
248 await message.pin()
249
250 if message.channel.is_news():
251 await message.publish()
252
253 @group(name="reddit", invoke_without_command=True)
254 async def reddit_group(self, ctx: Context) -> None:
255 """View the top posts from various subreddits."""
256 await ctx.send_help(ctx.command)
257
258 @reddit_group.command(name="top")
259 async def top_command(self, ctx: Context, subreddit: Subreddit = "r/Python") -> None:
260 """Send the top posts of all time from a given subreddit."""
261 async with ctx.typing():
262 embed = await self.get_top_posts(subreddit=subreddit, time="all")
263
264 await ctx.send(content=f"Here are the top {subreddit} posts of all time!", embed=embed)
265
266 @reddit_group.command(name="daily")
267 async def daily_command(self, ctx: Context, subreddit: Subreddit = "r/Python") -> None:
268 """Send the top posts of today from a given subreddit."""
269 async with ctx.typing():
270 embed = await self.get_top_posts(subreddit=subreddit, time="day")
271
272 await ctx.send(content=f"Here are today's top {subreddit} posts!", embed=embed)
273
274 @reddit_group.command(name="weekly")
275 async def weekly_command(self, ctx: Context, subreddit: Subreddit = "r/Python") -> None:
276 """Send the top posts of this week from a given subreddit."""
277 async with ctx.typing():
278 embed = await self.get_top_posts(subreddit=subreddit, time="week")
279
280 await ctx.send(content=f"Here are this week's top {subreddit} posts!", embed=embed)
281
282 @with_role(*STAFF_ROLES)
283 @reddit_group.command(name="subreddits", aliases=("subs",))
284 async def subreddits_command(self, ctx: Context) -> None:
285 """Send a paginated embed of all the subreddits we're relaying."""
286 embed = Embed()
287 embed.title = "Relayed subreddits."
288 embed.colour = Colour.blurple()
289
290 await LinePaginator.paginate(
291 RedditConfig.subreddits,
292 ctx, embed,
293 footer_text="Use the reddit commands along with these to view their posts.",
294 empty=False,
295 max_lines=15
296 )
297
298
299 def setup(bot: Bot) -> None:
300 """Load the Reddit cog."""
301 if not RedditConfig.secret or not RedditConfig.client_id:
302 log.error("Credentials not provided, cog not loaded.")
303 return
304 bot.add_cog(Reddit(bot))
305
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bot/cogs/reddit.py b/bot/cogs/reddit.py
--- a/bot/cogs/reddit.py
+++ b/bot/cogs/reddit.py
@@ -10,6 +10,7 @@
from discord import Colour, Embed, TextChannel
from discord.ext.commands import Cog, Context, group
from discord.ext.tasks import loop
+from discord.utils import escape_markdown
from bot.bot import Bot
from bot.constants import Channels, ERROR_REPLIES, Emojis, Reddit as RedditConfig, STAFF_ROLES, Webhooks
@@ -187,6 +188,8 @@
author = data["author"]
title = textwrap.shorten(data["title"], width=64, placeholder="...")
+ # Normal brackets interfere with Markdown.
+ title = escape_markdown(title).replace("[", "⦋").replace("]", "⦌")
link = self.URL + data["permalink"]
embed.description += (
| {"golden_diff": "diff --git a/bot/cogs/reddit.py b/bot/cogs/reddit.py\n--- a/bot/cogs/reddit.py\n+++ b/bot/cogs/reddit.py\n@@ -10,6 +10,7 @@\n from discord import Colour, Embed, TextChannel\n from discord.ext.commands import Cog, Context, group\n from discord.ext.tasks import loop\n+from discord.utils import escape_markdown\n \n from bot.bot import Bot\n from bot.constants import Channels, ERROR_REPLIES, Emojis, Reddit as RedditConfig, STAFF_ROLES, Webhooks\n@@ -187,6 +188,8 @@\n author = data[\"author\"]\n \n title = textwrap.shorten(data[\"title\"], width=64, placeholder=\"...\")\n+ # Normal brackets interfere with Markdown.\n+ title = escape_markdown(title).replace(\"[\", \"\u298b\").replace(\"]\", \"\u298c\")\n link = self.URL + data[\"permalink\"]\n \n embed.description += (\n", "issue": "Reddit cog does not escape markdown in post titles\nDiscord markdown in reddit post titles is left unhandled and can sometimes break the links:\r\n\r\n\r\nFor the basic markdown passing it through d.py's `escape_markdown` should work, but from a quick look I haven't found a way to escape brackets in post titles, which breaks the text links. A replacement with similar unicode chars is an option\n", "before_files": [{"content": "import asyncio\nimport logging\nimport random\nimport textwrap\nfrom collections import namedtuple\nfrom datetime import datetime, timedelta\nfrom typing import List\n\nfrom aiohttp import BasicAuth, ClientError\nfrom discord import Colour, Embed, TextChannel\nfrom discord.ext.commands import Cog, Context, group\nfrom discord.ext.tasks import loop\n\nfrom bot.bot import Bot\nfrom bot.constants import Channels, ERROR_REPLIES, Emojis, Reddit as RedditConfig, STAFF_ROLES, Webhooks\nfrom bot.converters import Subreddit\nfrom bot.decorators import with_role\nfrom bot.pagination import LinePaginator\nfrom bot.utils.messages import sub_clyde\n\nlog = logging.getLogger(__name__)\n\nAccessToken = namedtuple(\"AccessToken\", [\"token\", \"expires_at\"])\n\n\nclass Reddit(Cog):\n \"\"\"Track subreddit posts and show detailed statistics about them.\"\"\"\n\n HEADERS = {\"User-Agent\": \"python3:python-discord/bot:1.0.0 (by /u/PythonDiscord)\"}\n URL = \"https://www.reddit.com\"\n OAUTH_URL = \"https://oauth.reddit.com\"\n MAX_RETRIES = 3\n\n def __init__(self, bot: Bot):\n self.bot = bot\n\n self.webhook = None\n self.access_token = None\n self.client_auth = BasicAuth(RedditConfig.client_id, RedditConfig.secret)\n\n bot.loop.create_task(self.init_reddit_ready())\n self.auto_poster_loop.start()\n\n def cog_unload(self) -> None:\n \"\"\"Stop the loop task and revoke the access token when the cog is unloaded.\"\"\"\n self.auto_poster_loop.cancel()\n if self.access_token and self.access_token.expires_at > datetime.utcnow():\n asyncio.create_task(self.revoke_access_token())\n\n async def init_reddit_ready(self) -> None:\n \"\"\"Sets the reddit webhook when the cog is loaded.\"\"\"\n await self.bot.wait_until_guild_available()\n if not self.webhook:\n self.webhook = await self.bot.fetch_webhook(Webhooks.reddit)\n\n @property\n def channel(self) -> TextChannel:\n \"\"\"Get the #reddit channel object from the bot's cache.\"\"\"\n return self.bot.get_channel(Channels.reddit)\n\n async def get_access_token(self) -> None:\n \"\"\"\n Get a Reddit API OAuth2 access token and assign it to self.access_token.\n\n A token is valid for 1 hour. 
There will be MAX_RETRIES to get a token, after which the cog\n will be unloaded and a ClientError raised if retrieval was still unsuccessful.\n \"\"\"\n for i in range(1, self.MAX_RETRIES + 1):\n response = await self.bot.http_session.post(\n url=f\"{self.URL}/api/v1/access_token\",\n headers=self.HEADERS,\n auth=self.client_auth,\n data={\n \"grant_type\": \"client_credentials\",\n \"duration\": \"temporary\"\n }\n )\n\n if response.status == 200 and response.content_type == \"application/json\":\n content = await response.json()\n expiration = int(content[\"expires_in\"]) - 60 # Subtract 1 minute for leeway.\n self.access_token = AccessToken(\n token=content[\"access_token\"],\n expires_at=datetime.utcnow() + timedelta(seconds=expiration)\n )\n\n log.debug(f\"New token acquired; expires on UTC {self.access_token.expires_at}\")\n return\n else:\n log.debug(\n f\"Failed to get an access token: \"\n f\"status {response.status} & content type {response.content_type}; \"\n f\"retrying ({i}/{self.MAX_RETRIES})\"\n )\n\n await asyncio.sleep(3)\n\n self.bot.remove_cog(self.qualified_name)\n raise ClientError(\"Authentication with the Reddit API failed. Unloading the cog.\")\n\n async def revoke_access_token(self) -> None:\n \"\"\"\n Revoke the OAuth2 access token for the Reddit API.\n\n For security reasons, it's good practice to revoke the token when it's no longer being used.\n \"\"\"\n response = await self.bot.http_session.post(\n url=f\"{self.URL}/api/v1/revoke_token\",\n headers=self.HEADERS,\n auth=self.client_auth,\n data={\n \"token\": self.access_token.token,\n \"token_type_hint\": \"access_token\"\n }\n )\n\n if response.status == 204 and response.content_type == \"application/json\":\n self.access_token = None\n else:\n log.warning(f\"Unable to revoke access token: status {response.status}.\")\n\n async def fetch_posts(self, route: str, *, amount: int = 25, params: dict = None) -> List[dict]:\n \"\"\"A helper method to fetch a certain amount of Reddit posts at a given route.\"\"\"\n # Reddit's JSON responses only provide 25 posts at most.\n if not 25 >= amount > 0:\n raise ValueError(\"Invalid amount of subreddit posts requested.\")\n\n # Renew the token if necessary.\n if not self.access_token or self.access_token.expires_at < datetime.utcnow():\n await self.get_access_token()\n\n url = f\"{self.OAUTH_URL}/{route}\"\n for _ in range(self.MAX_RETRIES):\n response = await self.bot.http_session.get(\n url=url,\n headers={**self.HEADERS, \"Authorization\": f\"bearer {self.access_token.token}\"},\n params=params\n )\n if response.status == 200 and response.content_type == 'application/json':\n # Got appropriate response - process and return.\n content = await response.json()\n posts = content[\"data\"][\"children\"]\n return posts[:amount]\n\n await asyncio.sleep(3)\n\n log.debug(f\"Invalid response from: {url} - status code {response.status}, mimetype {response.content_type}\")\n return list() # Failed to get appropriate response within allowed number of retries.\n\n async def get_top_posts(self, subreddit: Subreddit, time: str = \"all\", amount: int = 5) -> Embed:\n \"\"\"\n Get the top amount of posts for a given subreddit within a specified timeframe.\n\n A time of \"all\" will get posts from all time, \"day\" will get top daily posts and \"week\" will get the top\n weekly posts.\n\n The amount should be between 0 and 25 as Reddit's JSON requests only provide 25 posts at most.\n \"\"\"\n embed = Embed(description=\"\")\n\n posts = await self.fetch_posts(\n 
route=f\"{subreddit}/top\",\n amount=amount,\n params={\"t\": time}\n )\n\n if not posts:\n embed.title = random.choice(ERROR_REPLIES)\n embed.colour = Colour.red()\n embed.description = (\n \"Sorry! We couldn't find any posts from that subreddit. \"\n \"If this problem persists, please let us know.\"\n )\n\n return embed\n\n for post in posts:\n data = post[\"data\"]\n\n text = data[\"selftext\"]\n if text:\n text = textwrap.shorten(text, width=128, placeholder=\"...\")\n text += \"\\n\" # Add newline to separate embed info\n\n ups = data[\"ups\"]\n comments = data[\"num_comments\"]\n author = data[\"author\"]\n\n title = textwrap.shorten(data[\"title\"], width=64, placeholder=\"...\")\n link = self.URL + data[\"permalink\"]\n\n embed.description += (\n f\"**[{title}]({link})**\\n\"\n f\"{text}\"\n f\"{Emojis.upvotes} {ups} {Emojis.comments} {comments} {Emojis.user} {author}\\n\\n\"\n )\n\n embed.colour = Colour.blurple()\n return embed\n\n @loop()\n async def auto_poster_loop(self) -> None:\n \"\"\"Post the top 5 posts daily, and the top 5 posts weekly.\"\"\"\n # once we upgrade to d.py 1.3 this can be removed and the loop can use the `time=datetime.time.min` parameter\n now = datetime.utcnow()\n tomorrow = now + timedelta(days=1)\n midnight_tomorrow = tomorrow.replace(hour=0, minute=0, second=0)\n seconds_until = (midnight_tomorrow - now).total_seconds()\n\n await asyncio.sleep(seconds_until)\n\n await self.bot.wait_until_guild_available()\n if not self.webhook:\n await self.bot.fetch_webhook(Webhooks.reddit)\n\n if datetime.utcnow().weekday() == 0:\n await self.top_weekly_posts()\n # if it's a monday send the top weekly posts\n\n for subreddit in RedditConfig.subreddits:\n top_posts = await self.get_top_posts(subreddit=subreddit, time=\"day\")\n username = sub_clyde(f\"{subreddit} Top Daily Posts\")\n message = await self.webhook.send(username=username, embed=top_posts, wait=True)\n\n if message.channel.is_news():\n await message.publish()\n\n async def top_weekly_posts(self) -> None:\n \"\"\"Post a summary of the top posts.\"\"\"\n for subreddit in RedditConfig.subreddits:\n # Send and pin the new weekly posts.\n top_posts = await self.get_top_posts(subreddit=subreddit, time=\"week\")\n username = sub_clyde(f\"{subreddit} Top Weekly Posts\")\n message = await self.webhook.send(wait=True, username=username, embed=top_posts)\n\n if subreddit.lower() == \"r/python\":\n if not self.channel:\n log.warning(\"Failed to get #reddit channel to remove pins in the weekly loop.\")\n return\n\n # Remove the oldest pins so that only 12 remain at most.\n pins = await self.channel.pins()\n\n while len(pins) >= 12:\n await pins[-1].unpin()\n del pins[-1]\n\n await message.pin()\n\n if message.channel.is_news():\n await message.publish()\n\n @group(name=\"reddit\", invoke_without_command=True)\n async def reddit_group(self, ctx: Context) -> None:\n \"\"\"View the top posts from various subreddits.\"\"\"\n await ctx.send_help(ctx.command)\n\n @reddit_group.command(name=\"top\")\n async def top_command(self, ctx: Context, subreddit: Subreddit = \"r/Python\") -> None:\n \"\"\"Send the top posts of all time from a given subreddit.\"\"\"\n async with ctx.typing():\n embed = await self.get_top_posts(subreddit=subreddit, time=\"all\")\n\n await ctx.send(content=f\"Here are the top {subreddit} posts of all time!\", embed=embed)\n\n @reddit_group.command(name=\"daily\")\n async def daily_command(self, ctx: Context, subreddit: Subreddit = \"r/Python\") -> None:\n \"\"\"Send the top posts of today from a given 
subreddit.\"\"\"\n async with ctx.typing():\n embed = await self.get_top_posts(subreddit=subreddit, time=\"day\")\n\n await ctx.send(content=f\"Here are today's top {subreddit} posts!\", embed=embed)\n\n @reddit_group.command(name=\"weekly\")\n async def weekly_command(self, ctx: Context, subreddit: Subreddit = \"r/Python\") -> None:\n \"\"\"Send the top posts of this week from a given subreddit.\"\"\"\n async with ctx.typing():\n embed = await self.get_top_posts(subreddit=subreddit, time=\"week\")\n\n await ctx.send(content=f\"Here are this week's top {subreddit} posts!\", embed=embed)\n\n @with_role(*STAFF_ROLES)\n @reddit_group.command(name=\"subreddits\", aliases=(\"subs\",))\n async def subreddits_command(self, ctx: Context) -> None:\n \"\"\"Send a paginated embed of all the subreddits we're relaying.\"\"\"\n embed = Embed()\n embed.title = \"Relayed subreddits.\"\n embed.colour = Colour.blurple()\n\n await LinePaginator.paginate(\n RedditConfig.subreddits,\n ctx, embed,\n footer_text=\"Use the reddit commands along with these to view their posts.\",\n empty=False,\n max_lines=15\n )\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Reddit cog.\"\"\"\n if not RedditConfig.secret or not RedditConfig.client_id:\n log.error(\"Credentials not provided, cog not loaded.\")\n return\n bot.add_cog(Reddit(bot))\n", "path": "bot/cogs/reddit.py"}], "after_files": [{"content": "import asyncio\nimport logging\nimport random\nimport textwrap\nfrom collections import namedtuple\nfrom datetime import datetime, timedelta\nfrom typing import List\n\nfrom aiohttp import BasicAuth, ClientError\nfrom discord import Colour, Embed, TextChannel\nfrom discord.ext.commands import Cog, Context, group\nfrom discord.ext.tasks import loop\nfrom discord.utils import escape_markdown\n\nfrom bot.bot import Bot\nfrom bot.constants import Channels, ERROR_REPLIES, Emojis, Reddit as RedditConfig, STAFF_ROLES, Webhooks\nfrom bot.converters import Subreddit\nfrom bot.decorators import with_role\nfrom bot.pagination import LinePaginator\nfrom bot.utils.messages import sub_clyde\n\nlog = logging.getLogger(__name__)\n\nAccessToken = namedtuple(\"AccessToken\", [\"token\", \"expires_at\"])\n\n\nclass Reddit(Cog):\n \"\"\"Track subreddit posts and show detailed statistics about them.\"\"\"\n\n HEADERS = {\"User-Agent\": \"python3:python-discord/bot:1.0.0 (by /u/PythonDiscord)\"}\n URL = \"https://www.reddit.com\"\n OAUTH_URL = \"https://oauth.reddit.com\"\n MAX_RETRIES = 3\n\n def __init__(self, bot: Bot):\n self.bot = bot\n\n self.webhook = None\n self.access_token = None\n self.client_auth = BasicAuth(RedditConfig.client_id, RedditConfig.secret)\n\n bot.loop.create_task(self.init_reddit_ready())\n self.auto_poster_loop.start()\n\n def cog_unload(self) -> None:\n \"\"\"Stop the loop task and revoke the access token when the cog is unloaded.\"\"\"\n self.auto_poster_loop.cancel()\n if self.access_token and self.access_token.expires_at > datetime.utcnow():\n asyncio.create_task(self.revoke_access_token())\n\n async def init_reddit_ready(self) -> None:\n \"\"\"Sets the reddit webhook when the cog is loaded.\"\"\"\n await self.bot.wait_until_guild_available()\n if not self.webhook:\n self.webhook = await self.bot.fetch_webhook(Webhooks.reddit)\n\n @property\n def channel(self) -> TextChannel:\n \"\"\"Get the #reddit channel object from the bot's cache.\"\"\"\n return self.bot.get_channel(Channels.reddit)\n\n async def get_access_token(self) -> None:\n \"\"\"\n Get a Reddit API OAuth2 access token and assign it to 
self.access_token.\n\n A token is valid for 1 hour. There will be MAX_RETRIES to get a token, after which the cog\n will be unloaded and a ClientError raised if retrieval was still unsuccessful.\n \"\"\"\n for i in range(1, self.MAX_RETRIES + 1):\n response = await self.bot.http_session.post(\n url=f\"{self.URL}/api/v1/access_token\",\n headers=self.HEADERS,\n auth=self.client_auth,\n data={\n \"grant_type\": \"client_credentials\",\n \"duration\": \"temporary\"\n }\n )\n\n if response.status == 200 and response.content_type == \"application/json\":\n content = await response.json()\n expiration = int(content[\"expires_in\"]) - 60 # Subtract 1 minute for leeway.\n self.access_token = AccessToken(\n token=content[\"access_token\"],\n expires_at=datetime.utcnow() + timedelta(seconds=expiration)\n )\n\n log.debug(f\"New token acquired; expires on UTC {self.access_token.expires_at}\")\n return\n else:\n log.debug(\n f\"Failed to get an access token: \"\n f\"status {response.status} & content type {response.content_type}; \"\n f\"retrying ({i}/{self.MAX_RETRIES})\"\n )\n\n await asyncio.sleep(3)\n\n self.bot.remove_cog(self.qualified_name)\n raise ClientError(\"Authentication with the Reddit API failed. Unloading the cog.\")\n\n async def revoke_access_token(self) -> None:\n \"\"\"\n Revoke the OAuth2 access token for the Reddit API.\n\n For security reasons, it's good practice to revoke the token when it's no longer being used.\n \"\"\"\n response = await self.bot.http_session.post(\n url=f\"{self.URL}/api/v1/revoke_token\",\n headers=self.HEADERS,\n auth=self.client_auth,\n data={\n \"token\": self.access_token.token,\n \"token_type_hint\": \"access_token\"\n }\n )\n\n if response.status == 204 and response.content_type == \"application/json\":\n self.access_token = None\n else:\n log.warning(f\"Unable to revoke access token: status {response.status}.\")\n\n async def fetch_posts(self, route: str, *, amount: int = 25, params: dict = None) -> List[dict]:\n \"\"\"A helper method to fetch a certain amount of Reddit posts at a given route.\"\"\"\n # Reddit's JSON responses only provide 25 posts at most.\n if not 25 >= amount > 0:\n raise ValueError(\"Invalid amount of subreddit posts requested.\")\n\n # Renew the token if necessary.\n if not self.access_token or self.access_token.expires_at < datetime.utcnow():\n await self.get_access_token()\n\n url = f\"{self.OAUTH_URL}/{route}\"\n for _ in range(self.MAX_RETRIES):\n response = await self.bot.http_session.get(\n url=url,\n headers={**self.HEADERS, \"Authorization\": f\"bearer {self.access_token.token}\"},\n params=params\n )\n if response.status == 200 and response.content_type == 'application/json':\n # Got appropriate response - process and return.\n content = await response.json()\n posts = content[\"data\"][\"children\"]\n return posts[:amount]\n\n await asyncio.sleep(3)\n\n log.debug(f\"Invalid response from: {url} - status code {response.status}, mimetype {response.content_type}\")\n return list() # Failed to get appropriate response within allowed number of retries.\n\n async def get_top_posts(self, subreddit: Subreddit, time: str = \"all\", amount: int = 5) -> Embed:\n \"\"\"\n Get the top amount of posts for a given subreddit within a specified timeframe.\n\n A time of \"all\" will get posts from all time, \"day\" will get top daily posts and \"week\" will get the top\n weekly posts.\n\n The amount should be between 0 and 25 as Reddit's JSON requests only provide 25 posts at most.\n \"\"\"\n embed = Embed(description=\"\")\n\n posts = 
await self.fetch_posts(\n route=f\"{subreddit}/top\",\n amount=amount,\n params={\"t\": time}\n )\n\n if not posts:\n embed.title = random.choice(ERROR_REPLIES)\n embed.colour = Colour.red()\n embed.description = (\n \"Sorry! We couldn't find any posts from that subreddit. \"\n \"If this problem persists, please let us know.\"\n )\n\n return embed\n\n for post in posts:\n data = post[\"data\"]\n\n text = data[\"selftext\"]\n if text:\n text = textwrap.shorten(text, width=128, placeholder=\"...\")\n text += \"\\n\" # Add newline to separate embed info\n\n ups = data[\"ups\"]\n comments = data[\"num_comments\"]\n author = data[\"author\"]\n\n title = textwrap.shorten(data[\"title\"], width=64, placeholder=\"...\")\n # Normal brackets interfere with Markdown.\n title = escape_markdown(title).replace(\"[\", \"\u298b\").replace(\"]\", \"\u298c\")\n link = self.URL + data[\"permalink\"]\n\n embed.description += (\n f\"**[{title}]({link})**\\n\"\n f\"{text}\"\n f\"{Emojis.upvotes} {ups} {Emojis.comments} {comments} {Emojis.user} {author}\\n\\n\"\n )\n\n embed.colour = Colour.blurple()\n return embed\n\n @loop()\n async def auto_poster_loop(self) -> None:\n \"\"\"Post the top 5 posts daily, and the top 5 posts weekly.\"\"\"\n # once we upgrade to d.py 1.3 this can be removed and the loop can use the `time=datetime.time.min` parameter\n now = datetime.utcnow()\n tomorrow = now + timedelta(days=1)\n midnight_tomorrow = tomorrow.replace(hour=0, minute=0, second=0)\n seconds_until = (midnight_tomorrow - now).total_seconds()\n\n await asyncio.sleep(seconds_until)\n\n await self.bot.wait_until_guild_available()\n if not self.webhook:\n await self.bot.fetch_webhook(Webhooks.reddit)\n\n if datetime.utcnow().weekday() == 0:\n await self.top_weekly_posts()\n # if it's a monday send the top weekly posts\n\n for subreddit in RedditConfig.subreddits:\n top_posts = await self.get_top_posts(subreddit=subreddit, time=\"day\")\n username = sub_clyde(f\"{subreddit} Top Daily Posts\")\n message = await self.webhook.send(username=username, embed=top_posts, wait=True)\n\n if message.channel.is_news():\n await message.publish()\n\n async def top_weekly_posts(self) -> None:\n \"\"\"Post a summary of the top posts.\"\"\"\n for subreddit in RedditConfig.subreddits:\n # Send and pin the new weekly posts.\n top_posts = await self.get_top_posts(subreddit=subreddit, time=\"week\")\n username = sub_clyde(f\"{subreddit} Top Weekly Posts\")\n message = await self.webhook.send(wait=True, username=username, embed=top_posts)\n\n if subreddit.lower() == \"r/python\":\n if not self.channel:\n log.warning(\"Failed to get #reddit channel to remove pins in the weekly loop.\")\n return\n\n # Remove the oldest pins so that only 12 remain at most.\n pins = await self.channel.pins()\n\n while len(pins) >= 12:\n await pins[-1].unpin()\n del pins[-1]\n\n await message.pin()\n\n if message.channel.is_news():\n await message.publish()\n\n @group(name=\"reddit\", invoke_without_command=True)\n async def reddit_group(self, ctx: Context) -> None:\n \"\"\"View the top posts from various subreddits.\"\"\"\n await ctx.send_help(ctx.command)\n\n @reddit_group.command(name=\"top\")\n async def top_command(self, ctx: Context, subreddit: Subreddit = \"r/Python\") -> None:\n \"\"\"Send the top posts of all time from a given subreddit.\"\"\"\n async with ctx.typing():\n embed = await self.get_top_posts(subreddit=subreddit, time=\"all\")\n\n await ctx.send(content=f\"Here are the top {subreddit} posts of all time!\", embed=embed)\n\n 
@reddit_group.command(name=\"daily\")\n async def daily_command(self, ctx: Context, subreddit: Subreddit = \"r/Python\") -> None:\n \"\"\"Send the top posts of today from a given subreddit.\"\"\"\n async with ctx.typing():\n embed = await self.get_top_posts(subreddit=subreddit, time=\"day\")\n\n await ctx.send(content=f\"Here are today's top {subreddit} posts!\", embed=embed)\n\n @reddit_group.command(name=\"weekly\")\n async def weekly_command(self, ctx: Context, subreddit: Subreddit = \"r/Python\") -> None:\n \"\"\"Send the top posts of this week from a given subreddit.\"\"\"\n async with ctx.typing():\n embed = await self.get_top_posts(subreddit=subreddit, time=\"week\")\n\n await ctx.send(content=f\"Here are this week's top {subreddit} posts!\", embed=embed)\n\n @with_role(*STAFF_ROLES)\n @reddit_group.command(name=\"subreddits\", aliases=(\"subs\",))\n async def subreddits_command(self, ctx: Context) -> None:\n \"\"\"Send a paginated embed of all the subreddits we're relaying.\"\"\"\n embed = Embed()\n embed.title = \"Relayed subreddits.\"\n embed.colour = Colour.blurple()\n\n await LinePaginator.paginate(\n RedditConfig.subreddits,\n ctx, embed,\n footer_text=\"Use the reddit commands along with these to view their posts.\",\n empty=False,\n max_lines=15\n )\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Reddit cog.\"\"\"\n if not RedditConfig.secret or not RedditConfig.client_id:\n log.error(\"Credentials not provided, cog not loaded.\")\n return\n bot.add_cog(Reddit(bot))\n", "path": "bot/cogs/reddit.py"}]} | 3,811 | 212 |
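The patch in the record above sanitizes Reddit post titles by escaping Discord markdown and swapping square brackets for look-alike characters so that `[title](link)` embeds do not break. The snippet below is a minimal, standalone sketch of that step, not code from the repository: it assumes discord.py is installed for `discord.utils.escape_markdown`, and the `sanitize_title` helper and the sample title are made up for illustration.

```python
# Minimal sketch of the title-sanitizing step used in the patch above.
# Assumes discord.py is installed; `sanitize_title` and the sample title
# are illustrative, not part of the bot's codebase.
from discord.utils import escape_markdown


def sanitize_title(title: str) -> str:
    """Escape Discord markdown and neutralize square brackets in a post title."""
    # escape_markdown backslash-escapes *, _, ~, ` and similar characters,
    # but plain brackets would still terminate the [text](url) link syntax,
    # so they are replaced with the look-alike characters from the patch.
    return escape_markdown(title).replace("[", "⦋").replace("]", "⦌")


if __name__ == "__main__":
    # Brackets become ⦋ ⦌ and markdown characters gain backslashes.
    print(sanitize_title("[OC] *my* new_project"))
```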
gh_patches_debug_5299 | rasdani/github-patches | git_diff | open-mmlab__mmdetection-7147 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Confusion matrix error
While trying to generate the confusion matrix with this command:
```
python tools/analysis_tools/confusion_matrix.py ./work_dirs/perception-types--D06-01-2022--T09-23-45/perception-types.py results.pkl ./temp --show
```
I ran into this error:
```
Traceback (most recent call last):
File "tools/analysis_tools/confusion_matrix.py", line 261, in <module>
main()
File "tools/analysis_tools/confusion_matrix.py", line 257, in main
show=args.show)
File "tools/analysis_tools/confusion_matrix.py", line 210, in plot_confusion_matrix
'{}%'.format(int(confusion_matrix[i, j])),
ValueError: cannot convert float NaN to integer
```
Would appreciate any help or suggestions! Thanks
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/analysis_tools/confusion_matrix.py`
Content:
```
1 import argparse
2 import os
3
4 import matplotlib.pyplot as plt
5 import mmcv
6 import numpy as np
7 from matplotlib.ticker import MultipleLocator
8 from mmcv import Config, DictAction
9 from mmcv.ops import nms
10
11 from mmdet.core.evaluation.bbox_overlaps import bbox_overlaps
12 from mmdet.datasets import build_dataset
13
14
15 def parse_args():
16 parser = argparse.ArgumentParser(
17 description='Generate confusion matrix from detection results')
18 parser.add_argument('config', help='test config file path')
19 parser.add_argument(
20 'prediction_path', help='prediction path where test .pkl result')
21 parser.add_argument(
22 'save_dir', help='directory where confusion matrix will be saved')
23 parser.add_argument(
24 '--show', action='store_true', help='show confusion matrix')
25 parser.add_argument(
26 '--color-theme',
27 default='plasma',
28 help='theme of the matrix color map')
29 parser.add_argument(
30 '--score-thr',
31 type=float,
32 default=0.3,
33 help='score threshold to filter detection bboxes')
34 parser.add_argument(
35 '--tp-iou-thr',
36 type=float,
37 default=0.5,
38 help='IoU threshold to be considered as matched')
39 parser.add_argument(
40 '--nms-iou-thr',
41 type=float,
42 default=None,
43 help='nms IoU threshold, only applied when users want to change the'
44 'nms IoU threshold.')
45 parser.add_argument(
46 '--cfg-options',
47 nargs='+',
48 action=DictAction,
49 help='override some settings in the used config, the key-value pair '
50 'in xxx=yyy format will be merged into config file. If the value to '
51 'be overwritten is a list, it should be like key="[a,b]" or key=a,b '
52 'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]" '
53 'Note that the quotation marks are necessary and that no white space '
54 'is allowed.')
55 args = parser.parse_args()
56 return args
57
58
59 def calculate_confusion_matrix(dataset,
60 results,
61 score_thr=0,
62 nms_iou_thr=None,
63 tp_iou_thr=0.5):
64 """Calculate the confusion matrix.
65
66 Args:
67 dataset (Dataset): Test or val dataset.
68 results (list[ndarray]): A list of detection results in each image.
69 score_thr (float|optional): Score threshold to filter bboxes.
70 Default: 0.
71 nms_iou_thr (float|optional): nms IoU threshold, the detection results
72 have done nms in the detector, only applied when users want to
73 change the nms IoU threshold. Default: None.
74 tp_iou_thr (float|optional): IoU threshold to be considered as matched.
75 Default: 0.5.
76 """
77 num_classes = len(dataset.CLASSES)
78 confusion_matrix = np.zeros(shape=[num_classes + 1, num_classes + 1])
79 assert len(dataset) == len(results)
80 prog_bar = mmcv.ProgressBar(len(results))
81 for idx, per_img_res in enumerate(results):
82 if isinstance(per_img_res, tuple):
83 res_bboxes, _ = per_img_res
84 else:
85 res_bboxes = per_img_res
86 ann = dataset.get_ann_info(idx)
87 gt_bboxes = ann['bboxes']
88 labels = ann['labels']
89 analyze_per_img_dets(confusion_matrix, gt_bboxes, labels, res_bboxes,
90 score_thr, tp_iou_thr, nms_iou_thr)
91 prog_bar.update()
92 return confusion_matrix
93
94
95 def analyze_per_img_dets(confusion_matrix,
96 gt_bboxes,
97 gt_labels,
98 result,
99 score_thr=0,
100 tp_iou_thr=0.5,
101 nms_iou_thr=None):
102 """Analyze detection results on each image.
103
104 Args:
105 confusion_matrix (ndarray): The confusion matrix,
106 has shape (num_classes + 1, num_classes + 1).
107 gt_bboxes (ndarray): Ground truth bboxes, has shape (num_gt, 4).
108 gt_labels (ndarray): Ground truth labels, has shape (num_gt).
109 result (ndarray): Detection results, has shape
110 (num_classes, num_bboxes, 5).
111 score_thr (float): Score threshold to filter bboxes.
112 Default: 0.
113 tp_iou_thr (float): IoU threshold to be considered as matched.
114 Default: 0.5.
115 nms_iou_thr (float|optional): nms IoU threshold, the detection results
116 have done nms in the detector, only applied when users want to
117 change the nms IoU threshold. Default: None.
118 """
119 true_positives = np.zeros_like(gt_labels)
120 for det_label, det_bboxes in enumerate(result):
121 if nms_iou_thr:
122 det_bboxes, _ = nms(
123 det_bboxes[:, :4],
124 det_bboxes[:, -1],
125 nms_iou_thr,
126 score_threshold=score_thr)
127 ious = bbox_overlaps(det_bboxes[:, :4], gt_bboxes)
128 for i, det_bbox in enumerate(det_bboxes):
129 score = det_bbox[4]
130 det_match = 0
131 if score >= score_thr:
132 for j, gt_label in enumerate(gt_labels):
133 if ious[i, j] >= tp_iou_thr:
134 det_match += 1
135 if gt_label == det_label:
136 true_positives[j] += 1 # TP
137 confusion_matrix[gt_label, det_label] += 1
138 if det_match == 0: # BG FP
139 confusion_matrix[-1, det_label] += 1
140 for num_tp, gt_label in zip(true_positives, gt_labels):
141 if num_tp == 0: # FN
142 confusion_matrix[gt_label, -1] += 1
143
144
145 def plot_confusion_matrix(confusion_matrix,
146 labels,
147 save_dir=None,
148 show=True,
149 title='Normalized Confusion Matrix',
150 color_theme='plasma'):
151 """Draw confusion matrix with matplotlib.
152
153 Args:
154 confusion_matrix (ndarray): The confusion matrix.
155 labels (list[str]): List of class names.
156 save_dir (str|optional): If set, save the confusion matrix plot to the
157 given path. Default: None.
158 show (bool): Whether to show the plot. Default: True.
159 title (str): Title of the plot. Default: `Normalized Confusion Matrix`.
160 color_theme (str): Theme of the matrix color map. Default: `plasma`.
161 """
162 # normalize the confusion matrix
163 per_label_sums = confusion_matrix.sum(axis=1)[:, np.newaxis]
164 confusion_matrix = \
165 confusion_matrix.astype(np.float32) / per_label_sums * 100
166
167 num_classes = len(labels)
168 fig, ax = plt.subplots(
169 figsize=(0.5 * num_classes, 0.5 * num_classes * 0.8), dpi=180)
170 cmap = plt.get_cmap(color_theme)
171 im = ax.imshow(confusion_matrix, cmap=cmap)
172 plt.colorbar(mappable=im, ax=ax)
173
174 title_font = {'weight': 'bold', 'size': 12}
175 ax.set_title(title, fontdict=title_font)
176 label_font = {'size': 10}
177 plt.ylabel('Ground Truth Label', fontdict=label_font)
178 plt.xlabel('Prediction Label', fontdict=label_font)
179
180 # draw locator
181 xmajor_locator = MultipleLocator(1)
182 xminor_locator = MultipleLocator(0.5)
183 ax.xaxis.set_major_locator(xmajor_locator)
184 ax.xaxis.set_minor_locator(xminor_locator)
185 ymajor_locator = MultipleLocator(1)
186 yminor_locator = MultipleLocator(0.5)
187 ax.yaxis.set_major_locator(ymajor_locator)
188 ax.yaxis.set_minor_locator(yminor_locator)
189
190 # draw grid
191 ax.grid(True, which='minor', linestyle='-')
192
193 # draw label
194 ax.set_xticks(np.arange(num_classes))
195 ax.set_yticks(np.arange(num_classes))
196 ax.set_xticklabels(labels)
197 ax.set_yticklabels(labels)
198
199 ax.tick_params(
200 axis='x', bottom=False, top=True, labelbottom=False, labeltop=True)
201 plt.setp(
202 ax.get_xticklabels(), rotation=45, ha='left', rotation_mode='anchor')
203
204 # draw confution matrix value
205 for i in range(num_classes):
206 for j in range(num_classes):
207 ax.text(
208 j,
209 i,
210 '{}%'.format(int(confusion_matrix[i, j])),
211 ha='center',
212 va='center',
213 color='w',
214 size=7)
215
216 ax.set_ylim(len(confusion_matrix) - 0.5, -0.5) # matplotlib>3.1.1
217
218 fig.tight_layout()
219 if save_dir is not None:
220 plt.savefig(
221 os.path.join(save_dir, 'confusion_matrix.png'), format='png')
222 if show:
223 plt.show()
224
225
226 def main():
227 args = parse_args()
228
229 cfg = Config.fromfile(args.config)
230 if args.cfg_options is not None:
231 cfg.merge_from_dict(args.cfg_options)
232
233 results = mmcv.load(args.prediction_path)
234 assert isinstance(results, list)
235 if isinstance(results[0], list):
236 pass
237 elif isinstance(results[0], tuple):
238 results = [result[0] for result in results]
239 else:
240 raise TypeError('invalid type of prediction results')
241
242 if isinstance(cfg.data.test, dict):
243 cfg.data.test.test_mode = True
244 elif isinstance(cfg.data.test, list):
245 for ds_cfg in cfg.data.test:
246 ds_cfg.test_mode = True
247 dataset = build_dataset(cfg.data.test)
248
249 confusion_matrix = calculate_confusion_matrix(dataset, results,
250 args.score_thr,
251 args.nms_iou_thr,
252 args.tp_iou_thr)
253 plot_confusion_matrix(
254 confusion_matrix,
255 dataset.CLASSES + ('background', ),
256 save_dir=args.save_dir,
257 show=args.show)
258
259
260 if __name__ == '__main__':
261 main()
262
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/analysis_tools/confusion_matrix.py b/tools/analysis_tools/confusion_matrix.py
--- a/tools/analysis_tools/confusion_matrix.py
+++ b/tools/analysis_tools/confusion_matrix.py
@@ -207,7 +207,10 @@
ax.text(
j,
i,
- '{}%'.format(int(confusion_matrix[i, j])),
+ '{}%'.format(
+ int(confusion_matrix[
+ i,
+ j]) if not np.isnan(confusion_matrix[i, j]) else -1),
ha='center',
va='center',
color='w',
| {"golden_diff": "diff --git a/tools/analysis_tools/confusion_matrix.py b/tools/analysis_tools/confusion_matrix.py\n--- a/tools/analysis_tools/confusion_matrix.py\n+++ b/tools/analysis_tools/confusion_matrix.py\n@@ -207,7 +207,10 @@\n ax.text(\n j,\n i,\n- '{}%'.format(int(confusion_matrix[i, j])),\n+ '{}%'.format(\n+ int(confusion_matrix[\n+ i,\n+ j]) if not np.isnan(confusion_matrix[i, j]) else -1),\n ha='center',\n va='center',\n color='w',\n", "issue": "Confusion matrix error\nWhile trying to generate the confusion matrix with this command:\r\n```\r\npython tools/analysis_tools/confusion_matrix.py ./work_dirs/perception-types--D06-01-2022--T09-23-45/perception-types.py results.pkl ./temp --show\r\n```\r\nI ran into this error:\r\n```\r\nTraceback (most recent call last):\r\n File \"tools/analysis_tools/confusion_matrix.py\", line 261, in <module>\r\n main()\r\n File \"tools/analysis_tools/confusion_matrix.py\", line 257, in main\r\n show=args.show)\r\n File \"tools/analysis_tools/confusion_matrix.py\", line 210, in plot_confusion_matrix\r\n '{}%'.format(int(confusion_matrix[i, j])),\r\nValueError: cannot convert float NaN to integer\r\n```\r\nWould appreciate any help or suggestions! Thanks\r\n\n", "before_files": [{"content": "import argparse\nimport os\n\nimport matplotlib.pyplot as plt\nimport mmcv\nimport numpy as np\nfrom matplotlib.ticker import MultipleLocator\nfrom mmcv import Config, DictAction\nfrom mmcv.ops import nms\n\nfrom mmdet.core.evaluation.bbox_overlaps import bbox_overlaps\nfrom mmdet.datasets import build_dataset\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(\n description='Generate confusion matrix from detection results')\n parser.add_argument('config', help='test config file path')\n parser.add_argument(\n 'prediction_path', help='prediction path where test .pkl result')\n parser.add_argument(\n 'save_dir', help='directory where confusion matrix will be saved')\n parser.add_argument(\n '--show', action='store_true', help='show confusion matrix')\n parser.add_argument(\n '--color-theme',\n default='plasma',\n help='theme of the matrix color map')\n parser.add_argument(\n '--score-thr',\n type=float,\n default=0.3,\n help='score threshold to filter detection bboxes')\n parser.add_argument(\n '--tp-iou-thr',\n type=float,\n default=0.5,\n help='IoU threshold to be considered as matched')\n parser.add_argument(\n '--nms-iou-thr',\n type=float,\n default=None,\n help='nms IoU threshold, only applied when users want to change the'\n 'nms IoU threshold.')\n parser.add_argument(\n '--cfg-options',\n nargs='+',\n action=DictAction,\n help='override some settings in the used config, the key-value pair '\n 'in xxx=yyy format will be merged into config file. If the value to '\n 'be overwritten is a list, it should be like key=\"[a,b]\" or key=a,b '\n 'It also allows nested list/tuple values, e.g. key=\"[(a,b),(c,d)]\" '\n 'Note that the quotation marks are necessary and that no white space '\n 'is allowed.')\n args = parser.parse_args()\n return args\n\n\ndef calculate_confusion_matrix(dataset,\n results,\n score_thr=0,\n nms_iou_thr=None,\n tp_iou_thr=0.5):\n \"\"\"Calculate the confusion matrix.\n\n Args:\n dataset (Dataset): Test or val dataset.\n results (list[ndarray]): A list of detection results in each image.\n score_thr (float|optional): Score threshold to filter bboxes.\n Default: 0.\n nms_iou_thr (float|optional): nms IoU threshold, the detection results\n have done nms in the detector, only applied when users want to\n change the nms IoU threshold. 
Default: None.\n tp_iou_thr (float|optional): IoU threshold to be considered as matched.\n Default: 0.5.\n \"\"\"\n num_classes = len(dataset.CLASSES)\n confusion_matrix = np.zeros(shape=[num_classes + 1, num_classes + 1])\n assert len(dataset) == len(results)\n prog_bar = mmcv.ProgressBar(len(results))\n for idx, per_img_res in enumerate(results):\n if isinstance(per_img_res, tuple):\n res_bboxes, _ = per_img_res\n else:\n res_bboxes = per_img_res\n ann = dataset.get_ann_info(idx)\n gt_bboxes = ann['bboxes']\n labels = ann['labels']\n analyze_per_img_dets(confusion_matrix, gt_bboxes, labels, res_bboxes,\n score_thr, tp_iou_thr, nms_iou_thr)\n prog_bar.update()\n return confusion_matrix\n\n\ndef analyze_per_img_dets(confusion_matrix,\n gt_bboxes,\n gt_labels,\n result,\n score_thr=0,\n tp_iou_thr=0.5,\n nms_iou_thr=None):\n \"\"\"Analyze detection results on each image.\n\n Args:\n confusion_matrix (ndarray): The confusion matrix,\n has shape (num_classes + 1, num_classes + 1).\n gt_bboxes (ndarray): Ground truth bboxes, has shape (num_gt, 4).\n gt_labels (ndarray): Ground truth labels, has shape (num_gt).\n result (ndarray): Detection results, has shape\n (num_classes, num_bboxes, 5).\n score_thr (float): Score threshold to filter bboxes.\n Default: 0.\n tp_iou_thr (float): IoU threshold to be considered as matched.\n Default: 0.5.\n nms_iou_thr (float|optional): nms IoU threshold, the detection results\n have done nms in the detector, only applied when users want to\n change the nms IoU threshold. Default: None.\n \"\"\"\n true_positives = np.zeros_like(gt_labels)\n for det_label, det_bboxes in enumerate(result):\n if nms_iou_thr:\n det_bboxes, _ = nms(\n det_bboxes[:, :4],\n det_bboxes[:, -1],\n nms_iou_thr,\n score_threshold=score_thr)\n ious = bbox_overlaps(det_bboxes[:, :4], gt_bboxes)\n for i, det_bbox in enumerate(det_bboxes):\n score = det_bbox[4]\n det_match = 0\n if score >= score_thr:\n for j, gt_label in enumerate(gt_labels):\n if ious[i, j] >= tp_iou_thr:\n det_match += 1\n if gt_label == det_label:\n true_positives[j] += 1 # TP\n confusion_matrix[gt_label, det_label] += 1\n if det_match == 0: # BG FP\n confusion_matrix[-1, det_label] += 1\n for num_tp, gt_label in zip(true_positives, gt_labels):\n if num_tp == 0: # FN\n confusion_matrix[gt_label, -1] += 1\n\n\ndef plot_confusion_matrix(confusion_matrix,\n labels,\n save_dir=None,\n show=True,\n title='Normalized Confusion Matrix',\n color_theme='plasma'):\n \"\"\"Draw confusion matrix with matplotlib.\n\n Args:\n confusion_matrix (ndarray): The confusion matrix.\n labels (list[str]): List of class names.\n save_dir (str|optional): If set, save the confusion matrix plot to the\n given path. Default: None.\n show (bool): Whether to show the plot. Default: True.\n title (str): Title of the plot. Default: `Normalized Confusion Matrix`.\n color_theme (str): Theme of the matrix color map. 
Default: `plasma`.\n \"\"\"\n # normalize the confusion matrix\n per_label_sums = confusion_matrix.sum(axis=1)[:, np.newaxis]\n confusion_matrix = \\\n confusion_matrix.astype(np.float32) / per_label_sums * 100\n\n num_classes = len(labels)\n fig, ax = plt.subplots(\n figsize=(0.5 * num_classes, 0.5 * num_classes * 0.8), dpi=180)\n cmap = plt.get_cmap(color_theme)\n im = ax.imshow(confusion_matrix, cmap=cmap)\n plt.colorbar(mappable=im, ax=ax)\n\n title_font = {'weight': 'bold', 'size': 12}\n ax.set_title(title, fontdict=title_font)\n label_font = {'size': 10}\n plt.ylabel('Ground Truth Label', fontdict=label_font)\n plt.xlabel('Prediction Label', fontdict=label_font)\n\n # draw locator\n xmajor_locator = MultipleLocator(1)\n xminor_locator = MultipleLocator(0.5)\n ax.xaxis.set_major_locator(xmajor_locator)\n ax.xaxis.set_minor_locator(xminor_locator)\n ymajor_locator = MultipleLocator(1)\n yminor_locator = MultipleLocator(0.5)\n ax.yaxis.set_major_locator(ymajor_locator)\n ax.yaxis.set_minor_locator(yminor_locator)\n\n # draw grid\n ax.grid(True, which='minor', linestyle='-')\n\n # draw label\n ax.set_xticks(np.arange(num_classes))\n ax.set_yticks(np.arange(num_classes))\n ax.set_xticklabels(labels)\n ax.set_yticklabels(labels)\n\n ax.tick_params(\n axis='x', bottom=False, top=True, labelbottom=False, labeltop=True)\n plt.setp(\n ax.get_xticklabels(), rotation=45, ha='left', rotation_mode='anchor')\n\n # draw confution matrix value\n for i in range(num_classes):\n for j in range(num_classes):\n ax.text(\n j,\n i,\n '{}%'.format(int(confusion_matrix[i, j])),\n ha='center',\n va='center',\n color='w',\n size=7)\n\n ax.set_ylim(len(confusion_matrix) - 0.5, -0.5) # matplotlib>3.1.1\n\n fig.tight_layout()\n if save_dir is not None:\n plt.savefig(\n os.path.join(save_dir, 'confusion_matrix.png'), format='png')\n if show:\n plt.show()\n\n\ndef main():\n args = parse_args()\n\n cfg = Config.fromfile(args.config)\n if args.cfg_options is not None:\n cfg.merge_from_dict(args.cfg_options)\n\n results = mmcv.load(args.prediction_path)\n assert isinstance(results, list)\n if isinstance(results[0], list):\n pass\n elif isinstance(results[0], tuple):\n results = [result[0] for result in results]\n else:\n raise TypeError('invalid type of prediction results')\n\n if isinstance(cfg.data.test, dict):\n cfg.data.test.test_mode = True\n elif isinstance(cfg.data.test, list):\n for ds_cfg in cfg.data.test:\n ds_cfg.test_mode = True\n dataset = build_dataset(cfg.data.test)\n\n confusion_matrix = calculate_confusion_matrix(dataset, results,\n args.score_thr,\n args.nms_iou_thr,\n args.tp_iou_thr)\n plot_confusion_matrix(\n confusion_matrix,\n dataset.CLASSES + ('background', ),\n save_dir=args.save_dir,\n show=args.show)\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/analysis_tools/confusion_matrix.py"}], "after_files": [{"content": "import argparse\nimport os\n\nimport matplotlib.pyplot as plt\nimport mmcv\nimport numpy as np\nfrom matplotlib.ticker import MultipleLocator\nfrom mmcv import Config, DictAction\nfrom mmcv.ops import nms\n\nfrom mmdet.core.evaluation.bbox_overlaps import bbox_overlaps\nfrom mmdet.datasets import build_dataset\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(\n description='Generate confusion matrix from detection results')\n parser.add_argument('config', help='test config file path')\n parser.add_argument(\n 'prediction_path', help='prediction path where test .pkl result')\n parser.add_argument(\n 'save_dir', help='directory where confusion matrix will be 
saved')\n parser.add_argument(\n '--show', action='store_true', help='show confusion matrix')\n parser.add_argument(\n '--color-theme',\n default='plasma',\n help='theme of the matrix color map')\n parser.add_argument(\n '--score-thr',\n type=float,\n default=0.3,\n help='score threshold to filter detection bboxes')\n parser.add_argument(\n '--tp-iou-thr',\n type=float,\n default=0.5,\n help='IoU threshold to be considered as matched')\n parser.add_argument(\n '--nms-iou-thr',\n type=float,\n default=None,\n help='nms IoU threshold, only applied when users want to change the'\n 'nms IoU threshold.')\n parser.add_argument(\n '--cfg-options',\n nargs='+',\n action=DictAction,\n help='override some settings in the used config, the key-value pair '\n 'in xxx=yyy format will be merged into config file. If the value to '\n 'be overwritten is a list, it should be like key=\"[a,b]\" or key=a,b '\n 'It also allows nested list/tuple values, e.g. key=\"[(a,b),(c,d)]\" '\n 'Note that the quotation marks are necessary and that no white space '\n 'is allowed.')\n args = parser.parse_args()\n return args\n\n\ndef calculate_confusion_matrix(dataset,\n results,\n score_thr=0,\n nms_iou_thr=None,\n tp_iou_thr=0.5):\n \"\"\"Calculate the confusion matrix.\n\n Args:\n dataset (Dataset): Test or val dataset.\n results (list[ndarray]): A list of detection results in each image.\n score_thr (float|optional): Score threshold to filter bboxes.\n Default: 0.\n nms_iou_thr (float|optional): nms IoU threshold, the detection results\n have done nms in the detector, only applied when users want to\n change the nms IoU threshold. Default: None.\n tp_iou_thr (float|optional): IoU threshold to be considered as matched.\n Default: 0.5.\n \"\"\"\n num_classes = len(dataset.CLASSES)\n confusion_matrix = np.zeros(shape=[num_classes + 1, num_classes + 1])\n assert len(dataset) == len(results)\n prog_bar = mmcv.ProgressBar(len(results))\n for idx, per_img_res in enumerate(results):\n if isinstance(per_img_res, tuple):\n res_bboxes, _ = per_img_res\n else:\n res_bboxes = per_img_res\n ann = dataset.get_ann_info(idx)\n gt_bboxes = ann['bboxes']\n labels = ann['labels']\n analyze_per_img_dets(confusion_matrix, gt_bboxes, labels, res_bboxes,\n score_thr, tp_iou_thr, nms_iou_thr)\n prog_bar.update()\n return confusion_matrix\n\n\ndef analyze_per_img_dets(confusion_matrix,\n gt_bboxes,\n gt_labels,\n result,\n score_thr=0,\n tp_iou_thr=0.5,\n nms_iou_thr=None):\n \"\"\"Analyze detection results on each image.\n\n Args:\n confusion_matrix (ndarray): The confusion matrix,\n has shape (num_classes + 1, num_classes + 1).\n gt_bboxes (ndarray): Ground truth bboxes, has shape (num_gt, 4).\n gt_labels (ndarray): Ground truth labels, has shape (num_gt).\n result (ndarray): Detection results, has shape\n (num_classes, num_bboxes, 5).\n score_thr (float): Score threshold to filter bboxes.\n Default: 0.\n tp_iou_thr (float): IoU threshold to be considered as matched.\n Default: 0.5.\n nms_iou_thr (float|optional): nms IoU threshold, the detection results\n have done nms in the detector, only applied when users want to\n change the nms IoU threshold. 
Default: None.\n \"\"\"\n true_positives = np.zeros_like(gt_labels)\n for det_label, det_bboxes in enumerate(result):\n if nms_iou_thr:\n det_bboxes, _ = nms(\n det_bboxes[:, :4],\n det_bboxes[:, -1],\n nms_iou_thr,\n score_threshold=score_thr)\n ious = bbox_overlaps(det_bboxes[:, :4], gt_bboxes)\n for i, det_bbox in enumerate(det_bboxes):\n score = det_bbox[4]\n det_match = 0\n if score >= score_thr:\n for j, gt_label in enumerate(gt_labels):\n if ious[i, j] >= tp_iou_thr:\n det_match += 1\n if gt_label == det_label:\n true_positives[j] += 1 # TP\n confusion_matrix[gt_label, det_label] += 1\n if det_match == 0: # BG FP\n confusion_matrix[-1, det_label] += 1\n for num_tp, gt_label in zip(true_positives, gt_labels):\n if num_tp == 0: # FN\n confusion_matrix[gt_label, -1] += 1\n\n\ndef plot_confusion_matrix(confusion_matrix,\n labels,\n save_dir=None,\n show=True,\n title='Normalized Confusion Matrix',\n color_theme='plasma'):\n \"\"\"Draw confusion matrix with matplotlib.\n\n Args:\n confusion_matrix (ndarray): The confusion matrix.\n labels (list[str]): List of class names.\n save_dir (str|optional): If set, save the confusion matrix plot to the\n given path. Default: None.\n show (bool): Whether to show the plot. Default: True.\n title (str): Title of the plot. Default: `Normalized Confusion Matrix`.\n color_theme (str): Theme of the matrix color map. Default: `plasma`.\n \"\"\"\n # normalize the confusion matrix\n per_label_sums = confusion_matrix.sum(axis=1)[:, np.newaxis]\n confusion_matrix = \\\n confusion_matrix.astype(np.float32) / per_label_sums * 100\n\n num_classes = len(labels)\n fig, ax = plt.subplots(\n figsize=(0.5 * num_classes, 0.5 * num_classes * 0.8), dpi=180)\n cmap = plt.get_cmap(color_theme)\n im = ax.imshow(confusion_matrix, cmap=cmap)\n plt.colorbar(mappable=im, ax=ax)\n\n title_font = {'weight': 'bold', 'size': 12}\n ax.set_title(title, fontdict=title_font)\n label_font = {'size': 10}\n plt.ylabel('Ground Truth Label', fontdict=label_font)\n plt.xlabel('Prediction Label', fontdict=label_font)\n\n # draw locator\n xmajor_locator = MultipleLocator(1)\n xminor_locator = MultipleLocator(0.5)\n ax.xaxis.set_major_locator(xmajor_locator)\n ax.xaxis.set_minor_locator(xminor_locator)\n ymajor_locator = MultipleLocator(1)\n yminor_locator = MultipleLocator(0.5)\n ax.yaxis.set_major_locator(ymajor_locator)\n ax.yaxis.set_minor_locator(yminor_locator)\n\n # draw grid\n ax.grid(True, which='minor', linestyle='-')\n\n # draw label\n ax.set_xticks(np.arange(num_classes))\n ax.set_yticks(np.arange(num_classes))\n ax.set_xticklabels(labels)\n ax.set_yticklabels(labels)\n\n ax.tick_params(\n axis='x', bottom=False, top=True, labelbottom=False, labeltop=True)\n plt.setp(\n ax.get_xticklabels(), rotation=45, ha='left', rotation_mode='anchor')\n\n # draw confution matrix value\n for i in range(num_classes):\n for j in range(num_classes):\n ax.text(\n j,\n i,\n '{}%'.format(\n int(confusion_matrix[\n i,\n j]) if not np.isnan(confusion_matrix[i, j]) else -1),\n ha='center',\n va='center',\n color='w',\n size=7)\n\n ax.set_ylim(len(confusion_matrix) - 0.5, -0.5) # matplotlib>3.1.1\n\n fig.tight_layout()\n if save_dir is not None:\n plt.savefig(\n os.path.join(save_dir, 'confusion_matrix.png'), format='png')\n if show:\n plt.show()\n\n\ndef main():\n args = parse_args()\n\n cfg = Config.fromfile(args.config)\n if args.cfg_options is not None:\n cfg.merge_from_dict(args.cfg_options)\n\n results = mmcv.load(args.prediction_path)\n assert isinstance(results, list)\n if isinstance(results[0], 
list):\n pass\n elif isinstance(results[0], tuple):\n results = [result[0] for result in results]\n else:\n raise TypeError('invalid type of prediction results')\n\n if isinstance(cfg.data.test, dict):\n cfg.data.test.test_mode = True\n elif isinstance(cfg.data.test, list):\n for ds_cfg in cfg.data.test:\n ds_cfg.test_mode = True\n dataset = build_dataset(cfg.data.test)\n\n confusion_matrix = calculate_confusion_matrix(dataset, results,\n args.score_thr,\n args.nms_iou_thr,\n args.tp_iou_thr)\n plot_confusion_matrix(\n confusion_matrix,\n dataset.CLASSES + ('background', ),\n save_dir=args.save_dir,\n show=args.show)\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/analysis_tools/confusion_matrix.py"}]} | 3,352 | 135 |
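As context for the record above: the crash comes from row-normalizing the confusion matrix, where a class with no ground-truth boxes has a row sum of 0, so 0/0 produces NaN and `int(NaN)` raises the reported ValueError. The sketch below is a standalone illustration, not repository code; the 3×3 matrix and the print loop are made up, and only the normalization and the NaN fallback mirror the patched logic.

```python
# Standalone illustration of the NaN crash and the guarded formatting.
# The matrix below is made up: class 1 has no ground-truth boxes, so its
# row sums to zero and row-normalization yields 0/0 = NaN.
import numpy as np

confusion_matrix = np.array([[8.0, 1.0, 1.0],
                             [0.0, 0.0, 0.0],
                             [2.0, 0.0, 5.0]])

per_label_sums = confusion_matrix.sum(axis=1)[:, np.newaxis]
with np.errstate(invalid="ignore"):  # silence the 0/0 RuntimeWarning for the demo
    normalized = confusion_matrix.astype(np.float32) / per_label_sums * 100

for i in range(normalized.shape[0]):
    for j in range(normalized.shape[1]):
        # int(np.nan) raises "cannot convert float NaN to integer" -- the crash
        # in the issue; the patch falls back to -1 for NaN cells instead.
        cell = int(normalized[i, j]) if not np.isnan(normalized[i, j]) else -1
        print('{}%'.format(cell), end=' ')
    print()
```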
gh_patches_debug_13358 | rasdani/github-patches | git_diff | mindsdb__mindsdb-1311 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reject predict(POST) API call if JSON payload is not properly wrapped by WHEN
**Is your feature request related to a problem? Please describe.**
The MindsDB Predict API returns predictions even when the JSON payload does not properly follow the specification
(https://apidocs.mindsdb.com/#acaf5684-c1bb-4df7-bae0-3a673ac1dd11).
For instance, the payload should follow this rule, but the API also accepts a payload without 'when':
```
--data-raw '{
"when": {
"number_of_rooms": 2,
"sqft": 1700
}
}'
```
```
--data-raw '{
"number_of_rooms": 2,
"sqft": 1700
}'
```
In this case, the API cannot recognize the input variables in the JSON request and thus returns them as 'missing' in the response text. I am not sure whether the prediction itself is still reliable in this case, but we might as well block such malformed API requests.
**Describe the solution you'd like**
Return an 'error' message if the 'predict when' specification is not met.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mindsdb/api/http/namespaces/predictor.py`
Content:
```
1 import os
2 import time
3
4 from dateutil.parser import parse as parse_datetime
5 from flask import request
6 from flask_restx import Resource, abort
7 from flask import current_app as ca
8
9 from mindsdb.utilities.log import log
10 from mindsdb.api.http.utils import http_error
11 from mindsdb.api.http.namespaces.configs.predictors import ns_conf
12 from mindsdb.api.http.namespaces.entitites.predictor_metadata import (
13 predictor_metadata,
14 predictor_query_params,
15 upload_predictor_params,
16 put_predictor_params
17 )
18 from mindsdb.api.http.namespaces.entitites.predictor_status import predictor_status
19
20
21 @ns_conf.route('/')
22 class PredictorList(Resource):
23 @ns_conf.doc('list_predictors')
24 @ns_conf.marshal_list_with(predictor_status, skip_none=True)
25 def get(self):
26 '''List all predictors'''
27 return request.native_interface.get_models()
28
29
30 @ns_conf.route('/custom/<name>')
31 @ns_conf.param('name', 'The predictor identifier')
32 @ns_conf.response(404, 'predictor not found')
33 class CustomPredictor(Resource):
34 @ns_conf.doc('put_custom_predictor')
35 def put(self, name):
36 try:
37 trained_status = request.json['trained_status']
38 except Exception:
39 trained_status = 'untrained'
40
41 predictor_file = request.files['file']
42 fpath = os.path.join(ca.config_obj.paths['tmp'], name + '.zip')
43 with open(fpath, 'wb') as f:
44 f.write(predictor_file.read())
45
46 request.custom_models.load_model(fpath, name, trained_status)
47
48 return f'Uploaded custom model {name}'
49
50
51 @ns_conf.route('/<name>')
52 @ns_conf.param('name', 'The predictor identifier')
53 @ns_conf.response(404, 'predictor not found')
54 class Predictor(Resource):
55 @ns_conf.doc('get_predictor')
56 @ns_conf.marshal_with(predictor_metadata, skip_none=True)
57 def get(self, name):
58 try:
59 model = request.native_interface.get_model_data(name, db_fix=False)
60 except Exception as e:
61 abort(404, "")
62
63 for k in ['train_end_at', 'updated_at', 'created_at']:
64 if k in model and model[k] is not None:
65 model[k] = parse_datetime(model[k])
66
67 return model
68
69 @ns_conf.doc('delete_predictor')
70 def delete(self, name):
71 '''Remove predictor'''
72 request.native_interface.delete_model(name)
73
74 return '', 200
75
76 @ns_conf.doc('put_predictor', params=put_predictor_params)
77 def put(self, name):
78 '''Learning new predictor'''
79 data = request.json
80 to_predict = data.get('to_predict')
81
82 try:
83 kwargs = data.get('kwargs')
84 except Exception:
85 kwargs = None
86
87 if type(kwargs) != type({}):
88 kwargs = {}
89
90 if 'equal_accuracy_for_all_output_categories' not in kwargs:
91 kwargs['equal_accuracy_for_all_output_categories'] = True
92
93 if 'advanced_args' not in kwargs:
94 kwargs['advanced_args'] = {}
95
96 if 'use_selfaware_model' not in kwargs['advanced_args']:
97 kwargs['advanced_args']['use_selfaware_model'] = False
98
99 try:
100 retrain = data.get('retrain')
101 if retrain in ('true', 'True'):
102 retrain = True
103 else:
104 retrain = False
105 except Exception:
106 retrain = None
107
108 ds_name = data.get('data_source_name') if data.get('data_source_name') is not None else data.get('from_data')
109 from_data = request.default_store.get_datasource_obj(ds_name, raw=True)
110
111 if from_data is None:
112 return {'message': f'Can not find datasource: {ds_name}'}, 400
113
114 if retrain is True:
115 original_name = name
116 name = name + '_retrained'
117
118 model_names = [x['name'] for x in request.native_interface.get_models()]
119 if name in model_names:
120 return http_error(
121 409,
122 f"Predictor '{name}' already exists",
123 f"Predictor with name '{name}' already exists. Each predictor must have unique name."
124 )
125
126 request.native_interface.learn(name, from_data, to_predict, request.default_store.get_datasource(ds_name)['id'], kwargs=kwargs)
127 for i in range(20):
128 try:
129 # Dirty hack, we should use a messaging queue between the predictor process and this bit of the code
130 request.native_interface.get_model_data(name)
131 break
132 except Exception:
133 time.sleep(1)
134
135 if retrain is True:
136 try:
137 request.native_interface.delete_model(original_name)
138 request.native_interface.rename_model(name, original_name)
139 except Exception:
140 pass
141
142 return '', 200
143
144
145 @ns_conf.route('/<name>/learn')
146 @ns_conf.param('name', 'The predictor identifier')
147 class PredictorLearn(Resource):
148 def post(self, name):
149 data = request.json
150 to_predict = data.get('to_predict')
151 kwargs = data.get('kwargs', None)
152
153 if not isinstance(kwargs, dict):
154 kwargs = {}
155
156 if 'advanced_args' not in kwargs:
157 kwargs['advanced_args'] = {}
158
159 ds_name = data.get('data_source_name') if data.get('data_source_name') is not None else data.get('from_data')
160 from_data = request.default_store.get_datasource_obj(ds_name, raw=True)
161
162 request.custom_models.learn(name, from_data, to_predict, request.default_store.get_datasource(ds_name)['id'], kwargs)
163
164 return '', 200
165
166
167 @ns_conf.route('/<name>/update')
168 @ns_conf.param('name', 'Update predictor')
169 class PredictorPredict(Resource):
170 @ns_conf.doc('Update predictor')
171 def get(self, name):
172 msg = request.native_interface.update_model(name)
173 return {
174 'message': msg
175 }
176
177
178 @ns_conf.route('/<name>/predict')
179 @ns_conf.param('name', 'The predictor identifier')
180 class PredictorPredict2(Resource):
181 @ns_conf.doc('post_predictor_predict', params=predictor_query_params)
182 def post(self, name):
183 '''Queries predictor'''
184 data = request.json
185 when = data.get('when', {})
186 format_flag = data.get('format_flag', 'explain')
187 kwargs = data.get('kwargs', {})
188
189 if when is None:
190 return 'No data provided for the predictions', 500
191
192 results = request.native_interface.predict(name, format_flag, when_data=when, **kwargs)
193
194 return results
195
196
197 @ns_conf.route('/<name>/predict_datasource')
198 @ns_conf.param('name', 'The predictor identifier')
199 class PredictorPredictFromDataSource(Resource):
200 @ns_conf.doc('post_predictor_predict', params=predictor_query_params)
201 def post(self, name):
202 data = request.json
203 format_flag = data.get('format_flag', 'explain')
204 kwargs = data.get('kwargs', {})
205
206 use_raw = False
207
208 from_data = request.default_store.get_datasource_obj(data.get('data_source_name'), raw=use_raw)
209 if from_data is None:
210 abort(400, 'No valid datasource given')
211
212 results = request.native_interface.predict(name, format_flag, when_data=from_data, **kwargs)
213 return results
214
215
216 @ns_conf.route('/<name>/rename')
217 @ns_conf.param('name', 'The predictor identifier')
218 class PredictorDownload(Resource):
219 @ns_conf.doc('get_predictor_download')
220 def get(self, name):
221 '''Export predictor to file'''
222 try:
223 new_name = request.args.get('new_name')
224 request.native_interface.rename_model(name, new_name)
225 except Exception as e:
226 return str(e), 400
227
228 return f'Renamed model to {new_name}', 200
229
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mindsdb/api/http/namespaces/predictor.py b/mindsdb/api/http/namespaces/predictor.py
--- a/mindsdb/api/http/namespaces/predictor.py
+++ b/mindsdb/api/http/namespaces/predictor.py
@@ -182,12 +182,12 @@
def post(self, name):
'''Queries predictor'''
data = request.json
- when = data.get('when', {})
+ when = data.get('when')
format_flag = data.get('format_flag', 'explain')
kwargs = data.get('kwargs', {})
- if when is None:
- return 'No data provided for the predictions', 500
+ if isinstance(when, dict) is False or len(when) == 0:
+ return 'No data provided for the predictions', 400
results = request.native_interface.predict(name, format_flag, when_data=when, **kwargs)
| {"golden_diff": "diff --git a/mindsdb/api/http/namespaces/predictor.py b/mindsdb/api/http/namespaces/predictor.py\n--- a/mindsdb/api/http/namespaces/predictor.py\n+++ b/mindsdb/api/http/namespaces/predictor.py\n@@ -182,12 +182,12 @@\n def post(self, name):\n '''Queries predictor'''\n data = request.json\n- when = data.get('when', {})\n+ when = data.get('when')\n format_flag = data.get('format_flag', 'explain')\n kwargs = data.get('kwargs', {})\n \n- if when is None:\n- return 'No data provided for the predictions', 500\n+ if isinstance(when, dict) is False or len(when) == 0:\n+ return 'No data provided for the predictions', 400\n \n results = request.native_interface.predict(name, format_flag, when_data=when, **kwargs)\n", "issue": "Reject predict(POST) API call if JSON payload is not properly wrapped by WHEN \n**Is your feature request related to a problem? Please describe.**\r\nThe MindsDB Predict API returns predictions even when the JSON payload is not properly following the specification.\r\n(https://apidocs.mindsdb.com/#acaf5684-c1bb-4df7-bae0-3a673ac1dd11) . \r\nFor instance, the payload should follow this rule but the API also accepts payload without 'when' \r\n```\r\n--data-raw '{\r\n\t\"when\": {\r\n\t\t\"number_of_rooms\": 2,\r\n\t\t\"sqft\": 1700\r\n\t}\r\n}'\r\n```\r\n```\r\n--data-raw '{\r\n\t\t\"number_of_rooms\": 2,\r\n\t\t\"sqft\": 1700\r\n}'\r\n```\r\nIn this case, the API cannot recognize input variables in the JSON request, thus return them as 'missing' in the response text. I am not sure if the prediction itself is not reliable in this case, but we might as well block such wrong API requests. \r\n\r\n**Describe the solution you'd like**\r\nReturn 'error' message if 'predict when' specification is not met \r\n\r\n\n", "before_files": [{"content": "import os\nimport time\n\nfrom dateutil.parser import parse as parse_datetime\nfrom flask import request\nfrom flask_restx import Resource, abort\nfrom flask import current_app as ca\n\nfrom mindsdb.utilities.log import log\nfrom mindsdb.api.http.utils import http_error\nfrom mindsdb.api.http.namespaces.configs.predictors import ns_conf\nfrom mindsdb.api.http.namespaces.entitites.predictor_metadata import (\n predictor_metadata,\n predictor_query_params,\n upload_predictor_params,\n put_predictor_params\n)\nfrom mindsdb.api.http.namespaces.entitites.predictor_status import predictor_status\n\n\n@ns_conf.route('/')\nclass PredictorList(Resource):\n @ns_conf.doc('list_predictors')\n @ns_conf.marshal_list_with(predictor_status, skip_none=True)\n def get(self):\n '''List all predictors'''\n return request.native_interface.get_models()\n\n\n@ns_conf.route('/custom/<name>')\n@ns_conf.param('name', 'The predictor identifier')\n@ns_conf.response(404, 'predictor not found')\nclass CustomPredictor(Resource):\n @ns_conf.doc('put_custom_predictor')\n def put(self, name):\n try:\n trained_status = request.json['trained_status']\n except Exception:\n trained_status = 'untrained'\n\n predictor_file = request.files['file']\n fpath = os.path.join(ca.config_obj.paths['tmp'], name + '.zip')\n with open(fpath, 'wb') as f:\n f.write(predictor_file.read())\n\n request.custom_models.load_model(fpath, name, trained_status)\n\n return f'Uploaded custom model {name}'\n\n\n@ns_conf.route('/<name>')\n@ns_conf.param('name', 'The predictor identifier')\n@ns_conf.response(404, 'predictor not found')\nclass Predictor(Resource):\n @ns_conf.doc('get_predictor')\n @ns_conf.marshal_with(predictor_metadata, skip_none=True)\n def get(self, name):\n try:\n model 
= request.native_interface.get_model_data(name, db_fix=False)\n except Exception as e:\n abort(404, \"\")\n\n for k in ['train_end_at', 'updated_at', 'created_at']:\n if k in model and model[k] is not None:\n model[k] = parse_datetime(model[k])\n\n return model\n\n @ns_conf.doc('delete_predictor')\n def delete(self, name):\n '''Remove predictor'''\n request.native_interface.delete_model(name)\n\n return '', 200\n\n @ns_conf.doc('put_predictor', params=put_predictor_params)\n def put(self, name):\n '''Learning new predictor'''\n data = request.json\n to_predict = data.get('to_predict')\n\n try:\n kwargs = data.get('kwargs')\n except Exception:\n kwargs = None\n\n if type(kwargs) != type({}):\n kwargs = {}\n\n if 'equal_accuracy_for_all_output_categories' not in kwargs:\n kwargs['equal_accuracy_for_all_output_categories'] = True\n\n if 'advanced_args' not in kwargs:\n kwargs['advanced_args'] = {}\n\n if 'use_selfaware_model' not in kwargs['advanced_args']:\n kwargs['advanced_args']['use_selfaware_model'] = False\n\n try:\n retrain = data.get('retrain')\n if retrain in ('true', 'True'):\n retrain = True\n else:\n retrain = False\n except Exception:\n retrain = None\n\n ds_name = data.get('data_source_name') if data.get('data_source_name') is not None else data.get('from_data')\n from_data = request.default_store.get_datasource_obj(ds_name, raw=True)\n\n if from_data is None:\n return {'message': f'Can not find datasource: {ds_name}'}, 400\n\n if retrain is True:\n original_name = name\n name = name + '_retrained'\n\n model_names = [x['name'] for x in request.native_interface.get_models()]\n if name in model_names:\n return http_error(\n 409,\n f\"Predictor '{name}' already exists\",\n f\"Predictor with name '{name}' already exists. Each predictor must have unique name.\"\n )\n\n request.native_interface.learn(name, from_data, to_predict, request.default_store.get_datasource(ds_name)['id'], kwargs=kwargs)\n for i in range(20):\n try:\n # Dirty hack, we should use a messaging queue between the predictor process and this bit of the code\n request.native_interface.get_model_data(name)\n break\n except Exception:\n time.sleep(1)\n\n if retrain is True:\n try:\n request.native_interface.delete_model(original_name)\n request.native_interface.rename_model(name, original_name)\n except Exception:\n pass\n\n return '', 200\n\n\n@ns_conf.route('/<name>/learn')\n@ns_conf.param('name', 'The predictor identifier')\nclass PredictorLearn(Resource):\n def post(self, name):\n data = request.json\n to_predict = data.get('to_predict')\n kwargs = data.get('kwargs', None)\n\n if not isinstance(kwargs, dict):\n kwargs = {}\n\n if 'advanced_args' not in kwargs:\n kwargs['advanced_args'] = {}\n\n ds_name = data.get('data_source_name') if data.get('data_source_name') is not None else data.get('from_data')\n from_data = request.default_store.get_datasource_obj(ds_name, raw=True)\n\n request.custom_models.learn(name, from_data, to_predict, request.default_store.get_datasource(ds_name)['id'], kwargs)\n\n return '', 200\n\n\n@ns_conf.route('/<name>/update')\n@ns_conf.param('name', 'Update predictor')\nclass PredictorPredict(Resource):\n @ns_conf.doc('Update predictor')\n def get(self, name):\n msg = request.native_interface.update_model(name)\n return {\n 'message': msg\n }\n\n\n@ns_conf.route('/<name>/predict')\n@ns_conf.param('name', 'The predictor identifier')\nclass PredictorPredict2(Resource):\n @ns_conf.doc('post_predictor_predict', params=predictor_query_params)\n def post(self, name):\n '''Queries predictor'''\n 
data = request.json\n when = data.get('when', {})\n format_flag = data.get('format_flag', 'explain')\n kwargs = data.get('kwargs', {})\n\n if when is None:\n return 'No data provided for the predictions', 500\n\n results = request.native_interface.predict(name, format_flag, when_data=when, **kwargs)\n\n return results\n\n\n@ns_conf.route('/<name>/predict_datasource')\n@ns_conf.param('name', 'The predictor identifier')\nclass PredictorPredictFromDataSource(Resource):\n @ns_conf.doc('post_predictor_predict', params=predictor_query_params)\n def post(self, name):\n data = request.json\n format_flag = data.get('format_flag', 'explain')\n kwargs = data.get('kwargs', {})\n\n use_raw = False\n\n from_data = request.default_store.get_datasource_obj(data.get('data_source_name'), raw=use_raw)\n if from_data is None:\n abort(400, 'No valid datasource given')\n\n results = request.native_interface.predict(name, format_flag, when_data=from_data, **kwargs)\n return results\n\n\n@ns_conf.route('/<name>/rename')\n@ns_conf.param('name', 'The predictor identifier')\nclass PredictorDownload(Resource):\n @ns_conf.doc('get_predictor_download')\n def get(self, name):\n '''Export predictor to file'''\n try:\n new_name = request.args.get('new_name')\n request.native_interface.rename_model(name, new_name)\n except Exception as e:\n return str(e), 400\n\n return f'Renamed model to {new_name}', 200\n", "path": "mindsdb/api/http/namespaces/predictor.py"}], "after_files": [{"content": "import os\nimport time\n\nfrom dateutil.parser import parse as parse_datetime\nfrom flask import request\nfrom flask_restx import Resource, abort\nfrom flask import current_app as ca\n\nfrom mindsdb.utilities.log import log\nfrom mindsdb.api.http.utils import http_error\nfrom mindsdb.api.http.namespaces.configs.predictors import ns_conf\nfrom mindsdb.api.http.namespaces.entitites.predictor_metadata import (\n predictor_metadata,\n predictor_query_params,\n upload_predictor_params,\n put_predictor_params\n)\nfrom mindsdb.api.http.namespaces.entitites.predictor_status import predictor_status\n\n\n@ns_conf.route('/')\nclass PredictorList(Resource):\n @ns_conf.doc('list_predictors')\n @ns_conf.marshal_list_with(predictor_status, skip_none=True)\n def get(self):\n '''List all predictors'''\n return request.native_interface.get_models()\n\n\n@ns_conf.route('/custom/<name>')\n@ns_conf.param('name', 'The predictor identifier')\n@ns_conf.response(404, 'predictor not found')\nclass CustomPredictor(Resource):\n @ns_conf.doc('put_custom_predictor')\n def put(self, name):\n try:\n trained_status = request.json['trained_status']\n except Exception:\n trained_status = 'untrained'\n\n predictor_file = request.files['file']\n fpath = os.path.join(ca.config_obj.paths['tmp'], name + '.zip')\n with open(fpath, 'wb') as f:\n f.write(predictor_file.read())\n\n request.custom_models.load_model(fpath, name, trained_status)\n\n return f'Uploaded custom model {name}'\n\n\n@ns_conf.route('/<name>')\n@ns_conf.param('name', 'The predictor identifier')\n@ns_conf.response(404, 'predictor not found')\nclass Predictor(Resource):\n @ns_conf.doc('get_predictor')\n @ns_conf.marshal_with(predictor_metadata, skip_none=True)\n def get(self, name):\n try:\n model = request.native_interface.get_model_data(name, db_fix=False)\n except Exception as e:\n abort(404, \"\")\n\n for k in ['train_end_at', 'updated_at', 'created_at']:\n if k in model and model[k] is not None:\n model[k] = parse_datetime(model[k])\n\n return model\n\n @ns_conf.doc('delete_predictor')\n def delete(self, 
name):\n '''Remove predictor'''\n request.native_interface.delete_model(name)\n\n return '', 200\n\n @ns_conf.doc('put_predictor', params=put_predictor_params)\n def put(self, name):\n '''Learning new predictor'''\n data = request.json\n to_predict = data.get('to_predict')\n\n try:\n kwargs = data.get('kwargs')\n except Exception:\n kwargs = None\n\n if type(kwargs) != type({}):\n kwargs = {}\n\n if 'equal_accuracy_for_all_output_categories' not in kwargs:\n kwargs['equal_accuracy_for_all_output_categories'] = True\n\n if 'advanced_args' not in kwargs:\n kwargs['advanced_args'] = {}\n\n if 'use_selfaware_model' not in kwargs['advanced_args']:\n kwargs['advanced_args']['use_selfaware_model'] = False\n\n try:\n retrain = data.get('retrain')\n if retrain in ('true', 'True'):\n retrain = True\n else:\n retrain = False\n except Exception:\n retrain = None\n\n ds_name = data.get('data_source_name') if data.get('data_source_name') is not None else data.get('from_data')\n from_data = request.default_store.get_datasource_obj(ds_name, raw=True)\n\n if from_data is None:\n return {'message': f'Can not find datasource: {ds_name}'}, 400\n\n if retrain is True:\n original_name = name\n name = name + '_retrained'\n\n model_names = [x['name'] for x in request.native_interface.get_models()]\n if name in model_names:\n return http_error(\n 409,\n f\"Predictor '{name}' already exists\",\n f\"Predictor with name '{name}' already exists. Each predictor must have unique name.\"\n )\n\n request.native_interface.learn(name, from_data, to_predict, request.default_store.get_datasource(ds_name)['id'], kwargs=kwargs)\n for i in range(20):\n try:\n # Dirty hack, we should use a messaging queue between the predictor process and this bit of the code\n request.native_interface.get_model_data(name)\n break\n except Exception:\n time.sleep(1)\n\n if retrain is True:\n try:\n request.native_interface.delete_model(original_name)\n request.native_interface.rename_model(name, original_name)\n except Exception:\n pass\n\n return '', 200\n\n\n@ns_conf.route('/<name>/learn')\n@ns_conf.param('name', 'The predictor identifier')\nclass PredictorLearn(Resource):\n def post(self, name):\n data = request.json\n to_predict = data.get('to_predict')\n kwargs = data.get('kwargs', None)\n\n if not isinstance(kwargs, dict):\n kwargs = {}\n\n if 'advanced_args' not in kwargs:\n kwargs['advanced_args'] = {}\n\n ds_name = data.get('data_source_name') if data.get('data_source_name') is not None else data.get('from_data')\n from_data = request.default_store.get_datasource_obj(ds_name, raw=True)\n\n request.custom_models.learn(name, from_data, to_predict, request.default_store.get_datasource(ds_name)['id'], kwargs)\n\n return '', 200\n\n\n@ns_conf.route('/<name>/update')\n@ns_conf.param('name', 'Update predictor')\nclass PredictorPredict(Resource):\n @ns_conf.doc('Update predictor')\n def get(self, name):\n msg = request.native_interface.update_model(name)\n return {\n 'message': msg\n }\n\n\n@ns_conf.route('/<name>/predict')\n@ns_conf.param('name', 'The predictor identifier')\nclass PredictorPredict2(Resource):\n @ns_conf.doc('post_predictor_predict', params=predictor_query_params)\n def post(self, name):\n '''Queries predictor'''\n data = request.json\n when = data.get('when')\n format_flag = data.get('format_flag', 'explain')\n kwargs = data.get('kwargs', {})\n\n if isinstance(when, dict) is False or len(when) == 0:\n return 'No data provided for the predictions', 400\n\n results = request.native_interface.predict(name, format_flag, 
when_data=when, **kwargs)\n\n return results\n\n\n@ns_conf.route('/<name>/predict_datasource')\n@ns_conf.param('name', 'The predictor identifier')\nclass PredictorPredictFromDataSource(Resource):\n @ns_conf.doc('post_predictor_predict', params=predictor_query_params)\n def post(self, name):\n data = request.json\n format_flag = data.get('format_flag', 'explain')\n kwargs = data.get('kwargs', {})\n\n use_raw = False\n\n from_data = request.default_store.get_datasource_obj(data.get('data_source_name'), raw=use_raw)\n if from_data is None:\n abort(400, 'No valid datasource given')\n\n results = request.native_interface.predict(name, format_flag, when_data=from_data, **kwargs)\n return results\n\n\n@ns_conf.route('/<name>/rename')\n@ns_conf.param('name', 'The predictor identifier')\nclass PredictorDownload(Resource):\n @ns_conf.doc('get_predictor_download')\n def get(self, name):\n '''Export predictor to file'''\n try:\n new_name = request.args.get('new_name')\n request.native_interface.rename_model(name, new_name)\n except Exception as e:\n return str(e), 400\n\n return f'Renamed model to {new_name}', 200\n", "path": "mindsdb/api/http/namespaces/predictor.py"}]} | 2,821 | 215 |
gh_patches_debug_28517 | rasdani/github-patches | git_diff | Parsl__parsl-686 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SSHChannel fails with host-based authentication
Systems using host-based authentication (without a key or a password) fail with:
```
paramiko.ssh_exception.SSHException: No authentication methods available
```
Reported by @jmoon1506
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsl/channels/ssh/ssh.py`
Content:
```
1 import errno
2 import logging
3 import os
4
5 import paramiko
6 from parsl.channels.base import Channel
7 from parsl.channels.errors import *
8 from parsl.utils import RepresentationMixin
9
10 logger = logging.getLogger(__name__)
11
12
13 class SSHChannel(Channel, RepresentationMixin):
14 ''' SSH persistent channel. This enables remote execution on sites
15 accessible via ssh. It is assumed that the user has setup host keys
16 so as to ssh to the remote host. Which goes to say that the following
17 test on the commandline should work :
18
19 >>> ssh <username>@<hostname>
20
21 '''
22
23 def __init__(self, hostname, username=None, password=None, script_dir=None, envs=None, **kwargs):
24 ''' Initialize a persistent connection to the remote system.
25 We should know at this point whether ssh connectivity is possible
26
27 Args:
28 - hostname (String) : Hostname
29
30 KWargs:
31 - username (string) : Username on remote system
32 - password (string) : Password for remote system
33 - script_dir (string) : Full path to a script dir where
34 generated scripts could be sent to.
35 - envs (dict) : A dictionary of environment variables to be set when executing commands
36
37 Raises:
38 '''
39
40 self.hostname = hostname
41 self.username = username
42 self.password = password
43 self.kwargs = kwargs
44 self.script_dir = script_dir
45
46 self.ssh_client = paramiko.SSHClient()
47 self.ssh_client.load_system_host_keys()
48 self.ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
49
50 self.envs = {}
51 if envs is not None:
52 self.envs = envs
53
54 try:
55 self.ssh_client.connect(
56 hostname,
57 username=username,
58 password=password,
59 allow_agent=True
60 )
61 t = self.ssh_client.get_transport()
62 self.sftp_client = paramiko.SFTPClient.from_transport(t)
63
64 except paramiko.BadHostKeyException as e:
65 raise BadHostKeyException(e, self.hostname)
66
67 except paramiko.AuthenticationException as e:
68 raise AuthException(e, self.hostname)
69
70 except paramiko.SSHException as e:
71 raise SSHException(e, self.hostname)
72
73 except Exception as e:
74 raise SSHException(e, self.hostname)
75
76 def prepend_envs(self, cmd, env={}):
77 env.update(self.envs)
78
79 if len(env.keys()) > 0:
80 env_vars = ' '.join(['{}={}'.format(key, value) for key, value in env.items()])
81 return 'env {0} {1}'.format(env_vars, cmd)
82 return cmd
83
84 def execute_wait(self, cmd, walltime=2, envs={}):
85 ''' Synchronously execute a commandline string on the shell.
86
87 Args:
88 - cmd (string) : Commandline string to execute
89 - walltime (int) : walltime in seconds, this is not really used now.
90
91 Kwargs:
92 - envs (dict) : Dictionary of env variables
93
94 Returns:
95 - retcode : Return code from the execution, -1 on fail
96 - stdout : stdout string
97 - stderr : stderr string
98
99 Raises:
100 None.
101 '''
102
103 # Execute the command
104 stdin, stdout, stderr = self.ssh_client.exec_command(
105 self.prepend_envs(cmd, envs), bufsize=-1, timeout=walltime
106 )
107 # Block on exit status from the command
108 exit_status = stdout.channel.recv_exit_status()
109 return exit_status, stdout.read().decode("utf-8"), stderr.read().decode("utf-8")
110
111 def execute_no_wait(self, cmd, walltime=2, envs={}):
112 ''' Execute asynchronousely without waiting for exitcode
113
114 Args:
115 - cmd (string): Commandline string to be executed on the remote side
116 - walltime (int): timeout to exec_command
117
118 KWargs:
119 - envs (dict): A dictionary of env variables
120
121 Returns:
122 - None, stdout (readable stream), stderr (readable stream)
123
124 Raises:
125 - ChannelExecFailed (reason)
126 '''
127
128 # Execute the command
129 stdin, stdout, stderr = self.ssh_client.exec_command(
130 self.prepend_envs(cmd, envs), bufsize=-1, timeout=walltime
131 )
132 # Block on exit status from the command
133 return None, stdout, stderr
134
135 def push_file(self, local_source, remote_dir):
136 ''' Transport a local file to a directory on a remote machine
137
138 Args:
139 - local_source (string): Path
140 - remote_dir (string): Remote path
141
142 Returns:
143 - str: Path to copied file on remote machine
144
145 Raises:
146 - BadScriptPath : if script path on the remote side is bad
147 - BadPermsScriptPath : You do not have perms to make the channel script dir
148 - FileCopyException : FileCopy failed.
149
150 '''
151 remote_dest = remote_dir + '/' + os.path.basename(local_source)
152
153 try:
154 self.makedirs(remote_dir, exist_ok=True)
155 except IOError as e:
156 logger.exception("Pushing {0} to {1} failed".format(local_source, remote_dir))
157 if e.errno == 2:
158 raise BadScriptPath(e, self.hostname)
159 elif e.errno == 13:
160 raise BadPermsScriptPath(e, self.hostname)
161 else:
162 logger.exception("File push failed due to SFTP client failure")
163 raise FileCopyException(e, self.hostname)
164 try:
165 self.sftp_client.put(local_source, remote_dest, confirm=True)
166 # Set perm because some systems require the script to be executable
167 self.sftp_client.chmod(remote_dest, 0o777)
168 except Exception as e:
169 logger.exception("File push from local source {} to remote destination {} failed".format(
170 local_source, remote_dest))
171 raise FileCopyException(e, self.hostname)
172
173 return remote_dest
174
175 def pull_file(self, remote_source, local_dir):
176 ''' Transport file on the remote side to a local directory
177
178 Args:
179 - remote_source (string): remote_source
180 - local_dir (string): Local directory to copy to
181
182
183 Returns:
184 - str: Local path to file
185
186 Raises:
187 - FileExists : Name collision at local directory.
188 - FileCopyException : FileCopy failed.
189 '''
190
191 local_dest = local_dir + '/' + os.path.basename(remote_source)
192
193 try:
194 os.makedirs(local_dir)
195 except OSError as e:
196 if e.errno != errno.EEXIST:
197 logger.exception("Failed to create script_dir: {0}".format(script_dir))
198 raise BadScriptPath(e, self.hostname)
199
200 # Easier to check this than to waste time trying to pull file and
201 # realize there's a problem.
202 if os.path.exists(local_dest):
203 logger.exception("Remote file copy will overwrite a local file:{0}".format(local_dest))
204 raise FileExists(None, self.hostname, filename=local_dest)
205
206 try:
207 self.sftp_client.get(remote_source, local_dest)
208 except Exception as e:
209 logger.exception("File pull failed")
210 raise FileCopyException(e, self.hostname)
211
212 return local_dest
213
214 def close(self):
215 return self.ssh_client.close()
216
217 def isdir(self, path):
218 """Return true if the path refers to an existing directory.
219
220 Parameters
221 ----------
222 path : str
223 Path of directory on the remote side to check.
224 """
225 result = True
226 try:
227 self.sftp_client.lstat(path)
228 except FileNotFoundError:
229 result = False
230
231 return result
232
233 def makedirs(self, path, mode=511, exist_ok=False):
234 """Create a directory on the remote side.
235
236 If intermediate directories do not exist, they will be created.
237
238 Parameters
239 ----------
240 path : str
241 Path of directory on the remote side to create.
242 mode : int
243 Permissions (posix-style) for the newly-created directory.
244 exist_ok : bool
245 If False, raise an OSError if the target directory already exists.
246 """
247 if exist_ok is False and self.isdir(path):
248 raise OSError('Target directory {} already exists'.format(path))
249
250 self.execute_wait('mkdir -p {}'.format(path))
251 self.sftp_client.chmod(path, mode)
252
253 def abspath(self, path):
254 """Return the absolute path on the remote side.
255
256 Parameters
257 ----------
258 path : str
259 Path for which the absolute path will be returned.
260 """
261 return self.sftp_client.normalize(path)
262
263 @property
264 def script_dir(self):
265 return self._script_dir
266
267 @script_dir.setter
268 def script_dir(self, value):
269 self._script_dir = value
270
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parsl/channels/ssh/ssh.py b/parsl/channels/ssh/ssh.py
--- a/parsl/channels/ssh/ssh.py
+++ b/parsl/channels/ssh/ssh.py
@@ -10,6 +10,12 @@
logger = logging.getLogger(__name__)
+class HostAuthSSHClient(paramiko.SSHClient):
+ def _auth(self, username, *args):
+ self._transport.auth_none(username)
+ return
+
+
class SSHChannel(Channel, RepresentationMixin):
''' SSH persistent channel. This enables remote execution on sites
accessible via ssh. It is assumed that the user has setup host keys
@@ -20,7 +26,7 @@
'''
- def __init__(self, hostname, username=None, password=None, script_dir=None, envs=None, **kwargs):
+ def __init__(self, hostname, username=None, password=None, script_dir=None, envs=None, host_auth=False, **kwargs):
''' Initialize a persistent connection to the remote system.
We should know at this point whether ssh connectivity is possible
@@ -42,8 +48,12 @@
self.password = password
self.kwargs = kwargs
self.script_dir = script_dir
+ self.host_auth = host_auth
- self.ssh_client = paramiko.SSHClient()
+ if host_auth:
+ self.ssh_client = HostAuthSSHClient()
+ else:
+ self.ssh_client = paramiko.SSHClient()
self.ssh_client.load_system_host_keys()
self.ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
| {"golden_diff": "diff --git a/parsl/channels/ssh/ssh.py b/parsl/channels/ssh/ssh.py\n--- a/parsl/channels/ssh/ssh.py\n+++ b/parsl/channels/ssh/ssh.py\n@@ -10,6 +10,12 @@\n logger = logging.getLogger(__name__)\n \n \n+class HostAuthSSHClient(paramiko.SSHClient):\n+ def _auth(self, username, *args):\n+ self._transport.auth_none(username)\n+ return\n+\n+\n class SSHChannel(Channel, RepresentationMixin):\n ''' SSH persistent channel. This enables remote execution on sites\n accessible via ssh. It is assumed that the user has setup host keys\n@@ -20,7 +26,7 @@\n \n '''\n \n- def __init__(self, hostname, username=None, password=None, script_dir=None, envs=None, **kwargs):\n+ def __init__(self, hostname, username=None, password=None, script_dir=None, envs=None, host_auth=False, **kwargs):\n ''' Initialize a persistent connection to the remote system.\n We should know at this point whether ssh connectivity is possible\n \n@@ -42,8 +48,12 @@\n self.password = password\n self.kwargs = kwargs\n self.script_dir = script_dir\n+ self.host_auth = host_auth\n \n- self.ssh_client = paramiko.SSHClient()\n+ if host_auth:\n+ self.ssh_client = HostAuthSSHClient()\n+ else:\n+ self.ssh_client = paramiko.SSHClient()\n self.ssh_client.load_system_host_keys()\n self.ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n", "issue": "SSHChannel fails with host-based authentication\nSystems using host-based authentication (without a key or a password) fail with:\r\n```\r\nparamiko.ssh_exception.SSHException: No authentication methods available\r\n```\r\n\r\nReported by @jmoon1506\n", "before_files": [{"content": "import errno\nimport logging\nimport os\n\nimport paramiko\nfrom parsl.channels.base import Channel\nfrom parsl.channels.errors import *\nfrom parsl.utils import RepresentationMixin\n\nlogger = logging.getLogger(__name__)\n\n\nclass SSHChannel(Channel, RepresentationMixin):\n ''' SSH persistent channel. This enables remote execution on sites\n accessible via ssh. It is assumed that the user has setup host keys\n so as to ssh to the remote host. 
Which goes to say that the following\n test on the commandline should work :\n\n >>> ssh <username>@<hostname>\n\n '''\n\n def __init__(self, hostname, username=None, password=None, script_dir=None, envs=None, **kwargs):\n ''' Initialize a persistent connection to the remote system.\n We should know at this point whether ssh connectivity is possible\n\n Args:\n - hostname (String) : Hostname\n\n KWargs:\n - username (string) : Username on remote system\n - password (string) : Password for remote system\n - script_dir (string) : Full path to a script dir where\n generated scripts could be sent to.\n - envs (dict) : A dictionary of environment variables to be set when executing commands\n\n Raises:\n '''\n\n self.hostname = hostname\n self.username = username\n self.password = password\n self.kwargs = kwargs\n self.script_dir = script_dir\n\n self.ssh_client = paramiko.SSHClient()\n self.ssh_client.load_system_host_keys()\n self.ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n\n self.envs = {}\n if envs is not None:\n self.envs = envs\n\n try:\n self.ssh_client.connect(\n hostname,\n username=username,\n password=password,\n allow_agent=True\n )\n t = self.ssh_client.get_transport()\n self.sftp_client = paramiko.SFTPClient.from_transport(t)\n\n except paramiko.BadHostKeyException as e:\n raise BadHostKeyException(e, self.hostname)\n\n except paramiko.AuthenticationException as e:\n raise AuthException(e, self.hostname)\n\n except paramiko.SSHException as e:\n raise SSHException(e, self.hostname)\n\n except Exception as e:\n raise SSHException(e, self.hostname)\n\n def prepend_envs(self, cmd, env={}):\n env.update(self.envs)\n\n if len(env.keys()) > 0:\n env_vars = ' '.join(['{}={}'.format(key, value) for key, value in env.items()])\n return 'env {0} {1}'.format(env_vars, cmd)\n return cmd\n\n def execute_wait(self, cmd, walltime=2, envs={}):\n ''' Synchronously execute a commandline string on the shell.\n\n Args:\n - cmd (string) : Commandline string to execute\n - walltime (int) : walltime in seconds, this is not really used now.\n\n Kwargs:\n - envs (dict) : Dictionary of env variables\n\n Returns:\n - retcode : Return code from the execution, -1 on fail\n - stdout : stdout string\n - stderr : stderr string\n\n Raises:\n None.\n '''\n\n # Execute the command\n stdin, stdout, stderr = self.ssh_client.exec_command(\n self.prepend_envs(cmd, envs), bufsize=-1, timeout=walltime\n )\n # Block on exit status from the command\n exit_status = stdout.channel.recv_exit_status()\n return exit_status, stdout.read().decode(\"utf-8\"), stderr.read().decode(\"utf-8\")\n\n def execute_no_wait(self, cmd, walltime=2, envs={}):\n ''' Execute asynchronousely without waiting for exitcode\n\n Args:\n - cmd (string): Commandline string to be executed on the remote side\n - walltime (int): timeout to exec_command\n\n KWargs:\n - envs (dict): A dictionary of env variables\n\n Returns:\n - None, stdout (readable stream), stderr (readable stream)\n\n Raises:\n - ChannelExecFailed (reason)\n '''\n\n # Execute the command\n stdin, stdout, stderr = self.ssh_client.exec_command(\n self.prepend_envs(cmd, envs), bufsize=-1, timeout=walltime\n )\n # Block on exit status from the command\n return None, stdout, stderr\n\n def push_file(self, local_source, remote_dir):\n ''' Transport a local file to a directory on a remote machine\n\n Args:\n - local_source (string): Path\n - remote_dir (string): Remote path\n\n Returns:\n - str: Path to copied file on remote machine\n\n Raises:\n - BadScriptPath : if script 
path on the remote side is bad\n - BadPermsScriptPath : You do not have perms to make the channel script dir\n - FileCopyException : FileCopy failed.\n\n '''\n remote_dest = remote_dir + '/' + os.path.basename(local_source)\n\n try:\n self.makedirs(remote_dir, exist_ok=True)\n except IOError as e:\n logger.exception(\"Pushing {0} to {1} failed\".format(local_source, remote_dir))\n if e.errno == 2:\n raise BadScriptPath(e, self.hostname)\n elif e.errno == 13:\n raise BadPermsScriptPath(e, self.hostname)\n else:\n logger.exception(\"File push failed due to SFTP client failure\")\n raise FileCopyException(e, self.hostname)\n try:\n self.sftp_client.put(local_source, remote_dest, confirm=True)\n # Set perm because some systems require the script to be executable\n self.sftp_client.chmod(remote_dest, 0o777)\n except Exception as e:\n logger.exception(\"File push from local source {} to remote destination {} failed\".format(\n local_source, remote_dest))\n raise FileCopyException(e, self.hostname)\n\n return remote_dest\n\n def pull_file(self, remote_source, local_dir):\n ''' Transport file on the remote side to a local directory\n\n Args:\n - remote_source (string): remote_source\n - local_dir (string): Local directory to copy to\n\n\n Returns:\n - str: Local path to file\n\n Raises:\n - FileExists : Name collision at local directory.\n - FileCopyException : FileCopy failed.\n '''\n\n local_dest = local_dir + '/' + os.path.basename(remote_source)\n\n try:\n os.makedirs(local_dir)\n except OSError as e:\n if e.errno != errno.EEXIST:\n logger.exception(\"Failed to create script_dir: {0}\".format(script_dir))\n raise BadScriptPath(e, self.hostname)\n\n # Easier to check this than to waste time trying to pull file and\n # realize there's a problem.\n if os.path.exists(local_dest):\n logger.exception(\"Remote file copy will overwrite a local file:{0}\".format(local_dest))\n raise FileExists(None, self.hostname, filename=local_dest)\n\n try:\n self.sftp_client.get(remote_source, local_dest)\n except Exception as e:\n logger.exception(\"File pull failed\")\n raise FileCopyException(e, self.hostname)\n\n return local_dest\n\n def close(self):\n return self.ssh_client.close()\n\n def isdir(self, path):\n \"\"\"Return true if the path refers to an existing directory.\n\n Parameters\n ----------\n path : str\n Path of directory on the remote side to check.\n \"\"\"\n result = True\n try:\n self.sftp_client.lstat(path)\n except FileNotFoundError:\n result = False\n\n return result\n\n def makedirs(self, path, mode=511, exist_ok=False):\n \"\"\"Create a directory on the remote side.\n\n If intermediate directories do not exist, they will be created.\n\n Parameters\n ----------\n path : str\n Path of directory on the remote side to create.\n mode : int\n Permissions (posix-style) for the newly-created directory.\n exist_ok : bool\n If False, raise an OSError if the target directory already exists.\n \"\"\"\n if exist_ok is False and self.isdir(path):\n raise OSError('Target directory {} already exists'.format(path))\n\n self.execute_wait('mkdir -p {}'.format(path))\n self.sftp_client.chmod(path, mode)\n\n def abspath(self, path):\n \"\"\"Return the absolute path on the remote side.\n\n Parameters\n ----------\n path : str\n Path for which the absolute path will be returned.\n \"\"\"\n return self.sftp_client.normalize(path)\n\n @property\n def script_dir(self):\n return self._script_dir\n\n @script_dir.setter\n def script_dir(self, value):\n self._script_dir = value\n", "path": "parsl/channels/ssh/ssh.py"}], 
"after_files": [{"content": "import errno\nimport logging\nimport os\n\nimport paramiko\nfrom parsl.channels.base import Channel\nfrom parsl.channels.errors import *\nfrom parsl.utils import RepresentationMixin\n\nlogger = logging.getLogger(__name__)\n\n\nclass HostAuthSSHClient(paramiko.SSHClient):\n def _auth(self, username, *args):\n self._transport.auth_none(username)\n return\n\n\nclass SSHChannel(Channel, RepresentationMixin):\n ''' SSH persistent channel. This enables remote execution on sites\n accessible via ssh. It is assumed that the user has setup host keys\n so as to ssh to the remote host. Which goes to say that the following\n test on the commandline should work :\n\n >>> ssh <username>@<hostname>\n\n '''\n\n def __init__(self, hostname, username=None, password=None, script_dir=None, envs=None, host_auth=False, **kwargs):\n ''' Initialize a persistent connection to the remote system.\n We should know at this point whether ssh connectivity is possible\n\n Args:\n - hostname (String) : Hostname\n\n KWargs:\n - username (string) : Username on remote system\n - password (string) : Password for remote system\n - script_dir (string) : Full path to a script dir where\n generated scripts could be sent to.\n - envs (dict) : A dictionary of environment variables to be set when executing commands\n\n Raises:\n '''\n\n self.hostname = hostname\n self.username = username\n self.password = password\n self.kwargs = kwargs\n self.script_dir = script_dir\n self.host_auth = host_auth\n\n if host_auth:\n self.ssh_client = HostAuthSSHClient()\n else:\n self.ssh_client = paramiko.SSHClient()\n self.ssh_client.load_system_host_keys()\n self.ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n\n self.envs = {}\n if envs is not None:\n self.envs = envs\n\n try:\n self.ssh_client.connect(\n hostname,\n username=username,\n password=password,\n allow_agent=True\n )\n t = self.ssh_client.get_transport()\n self.sftp_client = paramiko.SFTPClient.from_transport(t)\n\n except paramiko.BadHostKeyException as e:\n raise BadHostKeyException(e, self.hostname)\n\n except paramiko.AuthenticationException as e:\n raise AuthException(e, self.hostname)\n\n except paramiko.SSHException as e:\n raise SSHException(e, self.hostname)\n\n except Exception as e:\n raise SSHException(e, self.hostname)\n\n def prepend_envs(self, cmd, env={}):\n env.update(self.envs)\n\n if len(env.keys()) > 0:\n env_vars = ' '.join(['{}={}'.format(key, value) for key, value in env.items()])\n return 'env {0} {1}'.format(env_vars, cmd)\n return cmd\n\n def execute_wait(self, cmd, walltime=2, envs={}):\n ''' Synchronously execute a commandline string on the shell.\n\n Args:\n - cmd (string) : Commandline string to execute\n - walltime (int) : walltime in seconds, this is not really used now.\n\n Kwargs:\n - envs (dict) : Dictionary of env variables\n\n Returns:\n - retcode : Return code from the execution, -1 on fail\n - stdout : stdout string\n - stderr : stderr string\n\n Raises:\n None.\n '''\n\n # Execute the command\n stdin, stdout, stderr = self.ssh_client.exec_command(\n self.prepend_envs(cmd, envs), bufsize=-1, timeout=walltime\n )\n # Block on exit status from the command\n exit_status = stdout.channel.recv_exit_status()\n return exit_status, stdout.read().decode(\"utf-8\"), stderr.read().decode(\"utf-8\")\n\n def execute_no_wait(self, cmd, walltime=2, envs={}):\n ''' Execute asynchronousely without waiting for exitcode\n\n Args:\n - cmd (string): Commandline string to be executed on the remote side\n - walltime (int): 
timeout to exec_command\n\n KWargs:\n - envs (dict): A dictionary of env variables\n\n Returns:\n - None, stdout (readable stream), stderr (readable stream)\n\n Raises:\n - ChannelExecFailed (reason)\n '''\n\n # Execute the command\n stdin, stdout, stderr = self.ssh_client.exec_command(\n self.prepend_envs(cmd, envs), bufsize=-1, timeout=walltime\n )\n # Block on exit status from the command\n return None, stdout, stderr\n\n def push_file(self, local_source, remote_dir):\n ''' Transport a local file to a directory on a remote machine\n\n Args:\n - local_source (string): Path\n - remote_dir (string): Remote path\n\n Returns:\n - str: Path to copied file on remote machine\n\n Raises:\n - BadScriptPath : if script path on the remote side is bad\n - BadPermsScriptPath : You do not have perms to make the channel script dir\n - FileCopyException : FileCopy failed.\n\n '''\n remote_dest = remote_dir + '/' + os.path.basename(local_source)\n\n try:\n self.makedirs(remote_dir, exist_ok=True)\n except IOError as e:\n logger.exception(\"Pushing {0} to {1} failed\".format(local_source, remote_dir))\n if e.errno == 2:\n raise BadScriptPath(e, self.hostname)\n elif e.errno == 13:\n raise BadPermsScriptPath(e, self.hostname)\n else:\n logger.exception(\"File push failed due to SFTP client failure\")\n raise FileCopyException(e, self.hostname)\n try:\n self.sftp_client.put(local_source, remote_dest, confirm=True)\n # Set perm because some systems require the script to be executable\n self.sftp_client.chmod(remote_dest, 0o777)\n except Exception as e:\n logger.exception(\"File push from local source {} to remote destination {} failed\".format(\n local_source, remote_dest))\n raise FileCopyException(e, self.hostname)\n\n return remote_dest\n\n def pull_file(self, remote_source, local_dir):\n ''' Transport file on the remote side to a local directory\n\n Args:\n - remote_source (string): remote_source\n - local_dir (string): Local directory to copy to\n\n\n Returns:\n - str: Local path to file\n\n Raises:\n - FileExists : Name collision at local directory.\n - FileCopyException : FileCopy failed.\n '''\n\n local_dest = local_dir + '/' + os.path.basename(remote_source)\n\n try:\n os.makedirs(local_dir)\n except OSError as e:\n if e.errno != errno.EEXIST:\n logger.exception(\"Failed to create script_dir: {0}\".format(script_dir))\n raise BadScriptPath(e, self.hostname)\n\n # Easier to check this than to waste time trying to pull file and\n # realize there's a problem.\n if os.path.exists(local_dest):\n logger.exception(\"Remote file copy will overwrite a local file:{0}\".format(local_dest))\n raise FileExists(None, self.hostname, filename=local_dest)\n\n try:\n self.sftp_client.get(remote_source, local_dest)\n except Exception as e:\n logger.exception(\"File pull failed\")\n raise FileCopyException(e, self.hostname)\n\n return local_dest\n\n def close(self):\n return self.ssh_client.close()\n\n def isdir(self, path):\n \"\"\"Return true if the path refers to an existing directory.\n\n Parameters\n ----------\n path : str\n Path of directory on the remote side to check.\n \"\"\"\n result = True\n try:\n self.sftp_client.lstat(path)\n except FileNotFoundError:\n result = False\n\n return result\n\n def makedirs(self, path, mode=511, exist_ok=False):\n \"\"\"Create a directory on the remote side.\n\n If intermediate directories do not exist, they will be created.\n\n Parameters\n ----------\n path : str\n Path of directory on the remote side to create.\n mode : int\n Permissions (posix-style) for the newly-created 
directory.\n exist_ok : bool\n If False, raise an OSError if the target directory already exists.\n \"\"\"\n if exist_ok is False and self.isdir(path):\n raise OSError('Target directory {} already exists'.format(path))\n\n self.execute_wait('mkdir -p {}'.format(path))\n self.sftp_client.chmod(path, mode)\n\n def abspath(self, path):\n \"\"\"Return the absolute path on the remote side.\n\n Parameters\n ----------\n path : str\n Path for which the absolute path will be returned.\n \"\"\"\n return self.sftp_client.normalize(path)\n\n @property\n def script_dir(self):\n return self._script_dir\n\n @script_dir.setter\n def script_dir(self, value):\n self._script_dir = value\n", "path": "parsl/channels/ssh/ssh.py"}]} | 2,932 | 363 |
gh_patches_debug_4332 | rasdani/github-patches | git_diff | bids-standard__pybids-517 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
replace_entities() modifies entities
I guess it is by design, but replace_entities() modifies the input entities as it goes. I find any function that modifies the input values surprising, but it also means that previous path_patterns can affect the entities as they are iterated.
I think the function should return a new entities with the correct entities if this is useful. However, this failing, the function definitely shouldn't modify entities unless it actually returns something other than None.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bids/layout/writing.py`
Content:
```
1 '''
2 Contains helper functions that involve writing operations.
3 '''
4
5 import warnings
6 import os
7 import re
8 import sys
9 from ..utils import splitext, listify
10 from os.path import join, dirname, exists, islink, isabs, isdir
11
12
13 __all__ = ['replace_entities', 'build_path', 'write_contents_to_file']
14
15
16 def replace_entities(entities, pattern):
17 """
18 Replaces all entity names in a given pattern with the corresponding
19 values provided by entities.
20
21 Args:
22 entities (dict): A dictionary mapping entity names to entity values.
23 pattern (str): A path pattern that contains entity names denoted
24 by curly braces. Optional portions denoted by square braces.
25 For example: 'sub-{subject}/[var-{name}/]{id}.csv'
26 Accepted entity values, using regex matching, denoted within angle
27 brackets.
28 For example: 'sub-{subject<01|02>}/{task}.csv'
29
30 Returns:
31 A new string with the entity values inserted where entity names
32 were denoted in the provided pattern.
33 """
34 ents = re.findall(r'\{(.*?)\}', pattern)
35 new_path = pattern
36 for ent in ents:
37 match = re.search(r'([^|<]+)(<.*?>)?(\|.*)?', ent)
38 if match is None:
39 return None
40 name, valid, default = match.groups()
41 default = default[1:] if default is not None else default
42
43 if name in entities and valid is not None:
44 ent_val = str(entities[name])
45 if not re.match(valid[1:-1], ent_val):
46 if default is None:
47 return None
48 entities[name] = default
49
50 ent_val = entities.get(name, default)
51 if ent_val is None:
52 return None
53 new_path = new_path.replace('{%s}' % ent, str(ent_val))
54
55 return new_path
56
57
58 def build_path(entities, path_patterns, strict=False):
59 """
60 Constructs a path given a set of entities and a list of potential
61 filename patterns to use.
62
63 Args:
64 entities (dict): A dictionary mapping entity names to entity values.
65 path_patterns (str, list): One or more filename patterns to write
66 the file to. Entities should be represented by the name
67 surrounded by curly braces. Optional portions of the patterns
68 should be denoted by square brackets. Entities that require a
69 specific value for the pattern to match can pass them inside
70 carets. Default values can be assigned by specifying a string after
71 the pipe operator. E.g., (e.g., {type<image>|bold} would only match
72 the pattern if the entity 'type' was passed and its value is
73 "image", otherwise the default value "bold" will be used).
74 Example 1: 'sub-{subject}/[var-{name}/]{id}.csv'
75 Result 2: 'sub-01/var-SES/1045.csv'
76 strict (bool): If True, all passed entities must be matched inside a
77 pattern in order to be a valid match. If False, extra entities will
78 be ignored so long as all mandatory entities are found.
79
80 Returns:
81 A constructed path for this file based on the provided patterns.
82 """
83 path_patterns = listify(path_patterns)
84
85 # Loop over available patherns, return first one that matches all
86 for pattern in path_patterns:
87 # If strict, all entities must be contained in the pattern
88 if strict:
89 defined = re.findall(r'\{(.*?)(?:<[^>]+>)?\}', pattern)
90 if set(entities.keys()) - set(defined):
91 continue
92 # Iterate through the provided path patterns
93 new_path = pattern
94 optional_patterns = re.findall(r'\[(.*?)\]', pattern)
95 # First build from optional patterns if possible
96 for optional_pattern in optional_patterns:
97 optional_chunk = replace_entities(entities, optional_pattern) or ''
98 new_path = new_path.replace('[%s]' % optional_pattern,
99 optional_chunk)
100 # Replace remaining entities
101 new_path = replace_entities(entities, new_path)
102
103 if new_path:
104 return new_path
105
106 return None
107
108
109 def write_contents_to_file(path, contents=None, link_to=None,
110 content_mode='text', root=None, conflicts='fail'):
111 """
112 Uses provided filename patterns to write contents to a new path, given
113 a corresponding entity map.
114
115 Args:
116 path (str): Destination path of the desired contents.
117 contents (str): Raw text or binary encoded string of contents to write
118 to the new path.
119 link_to (str): Optional path with which to create a symbolic link to.
120 Used as an alternative to and takes priority over the contents
121 argument.
122 content_mode (str): Either 'text' or 'binary' to indicate the writing
123 mode for the new file. Only relevant if contents is provided.
124 root (str): Optional root directory that all patterns are relative
125 to. Defaults to current working directory.
126 conflicts (str): One of 'fail', 'skip', 'overwrite', or 'append'
127 that defines the desired action when the output path already
128 exists. 'fail' raises an exception; 'skip' does nothing;
129 'overwrite' overwrites the existing file; 'append' adds a suffix
130 to each file copy, starting with 1. Default is 'fail'.
131 """
132
133 if root is None and not isabs(path):
134 root = os.getcwd()
135
136 if root:
137 path = join(root, path)
138
139 if exists(path) or islink(path):
140 if conflicts == 'fail':
141 msg = 'A file at path {} already exists.'
142 raise ValueError(msg.format(path))
143 elif conflicts == 'skip':
144 msg = 'A file at path {} already exists, skipping writing file.'
145 warnings.warn(msg.format(path))
146 return
147 elif conflicts == 'overwrite':
148 if isdir(path):
149 warnings.warn('New path is a directory, not going to '
150 'overwrite it, skipping instead.')
151 return
152 os.remove(path)
153 elif conflicts == 'append':
154 i = 1
155 while i < sys.maxsize:
156 path_splits = splitext(path)
157 path_splits[0] = path_splits[0] + '_%d' % i
158 appended_filename = os.extsep.join(path_splits)
159 if not exists(appended_filename) and \
160 not islink(appended_filename):
161 path = appended_filename
162 break
163 i += 1
164 else:
165 raise ValueError('Did not provide a valid conflicts parameter')
166
167 if not exists(dirname(path)):
168 os.makedirs(dirname(path))
169
170 if link_to:
171 os.symlink(link_to, path)
172 elif contents:
173 mode = 'wb' if content_mode == 'binary' else 'w'
174 with open(path, mode) as f:
175 f.write(contents)
176 else:
177 raise ValueError('One of contents or link_to must be provided.')
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bids/layout/writing.py b/bids/layout/writing.py
--- a/bids/layout/writing.py
+++ b/bids/layout/writing.py
@@ -31,6 +31,7 @@
A new string with the entity values inserted where entity names
were denoted in the provided pattern.
"""
+ entities = entities.copy() # make a local copy, since dicts are mutable
ents = re.findall(r'\{(.*?)\}', pattern)
new_path = pattern
for ent in ents:
| {"golden_diff": "diff --git a/bids/layout/writing.py b/bids/layout/writing.py\n--- a/bids/layout/writing.py\n+++ b/bids/layout/writing.py\n@@ -31,6 +31,7 @@\n A new string with the entity values inserted where entity names\n were denoted in the provided pattern.\n \"\"\"\n+ entities = entities.copy() # make a local copy, since dicts are mutable\n ents = re.findall(r'\\{(.*?)\\}', pattern)\n new_path = pattern\n for ent in ents:\n", "issue": "replace_entities() modifies entities\nI guess it is by design, but replace_entities() modifies the input entities as it goes. I find any function that modifies the input values surprising, but it also means that previous path_patterns can affect the entities as they are iterated.\r\n\r\nI think the function should return a new entities with the correct entities if this is useful. However, this failing, the function definitely shouldn't modify entities unless it actually returns something other than None.\n", "before_files": [{"content": "'''\nContains helper functions that involve writing operations.\n'''\n\nimport warnings\nimport os\nimport re\nimport sys\nfrom ..utils import splitext, listify\nfrom os.path import join, dirname, exists, islink, isabs, isdir\n\n\n__all__ = ['replace_entities', 'build_path', 'write_contents_to_file']\n\n\ndef replace_entities(entities, pattern):\n \"\"\"\n Replaces all entity names in a given pattern with the corresponding\n values provided by entities.\n\n Args:\n entities (dict): A dictionary mapping entity names to entity values.\n pattern (str): A path pattern that contains entity names denoted\n by curly braces. Optional portions denoted by square braces.\n For example: 'sub-{subject}/[var-{name}/]{id}.csv'\n Accepted entity values, using regex matching, denoted within angle\n brackets.\n For example: 'sub-{subject<01|02>}/{task}.csv'\n\n Returns:\n A new string with the entity values inserted where entity names\n were denoted in the provided pattern.\n \"\"\"\n ents = re.findall(r'\\{(.*?)\\}', pattern)\n new_path = pattern\n for ent in ents:\n match = re.search(r'([^|<]+)(<.*?>)?(\\|.*)?', ent)\n if match is None:\n return None\n name, valid, default = match.groups()\n default = default[1:] if default is not None else default\n\n if name in entities and valid is not None:\n ent_val = str(entities[name])\n if not re.match(valid[1:-1], ent_val):\n if default is None:\n return None\n entities[name] = default\n\n ent_val = entities.get(name, default)\n if ent_val is None:\n return None\n new_path = new_path.replace('{%s}' % ent, str(ent_val))\n\n return new_path\n\n\ndef build_path(entities, path_patterns, strict=False):\n \"\"\"\n Constructs a path given a set of entities and a list of potential\n filename patterns to use.\n\n Args:\n entities (dict): A dictionary mapping entity names to entity values.\n path_patterns (str, list): One or more filename patterns to write\n the file to. Entities should be represented by the name\n surrounded by curly braces. Optional portions of the patterns\n should be denoted by square brackets. Entities that require a\n specific value for the pattern to match can pass them inside\n carets. Default values can be assigned by specifying a string after\n the pipe operator. 
E.g., (e.g., {type<image>|bold} would only match\n the pattern if the entity 'type' was passed and its value is\n \"image\", otherwise the default value \"bold\" will be used).\n Example 1: 'sub-{subject}/[var-{name}/]{id}.csv'\n Result 2: 'sub-01/var-SES/1045.csv'\n strict (bool): If True, all passed entities must be matched inside a\n pattern in order to be a valid match. If False, extra entities will\n be ignored so long as all mandatory entities are found.\n\n Returns:\n A constructed path for this file based on the provided patterns.\n \"\"\"\n path_patterns = listify(path_patterns)\n\n # Loop over available patherns, return first one that matches all\n for pattern in path_patterns:\n # If strict, all entities must be contained in the pattern\n if strict:\n defined = re.findall(r'\\{(.*?)(?:<[^>]+>)?\\}', pattern)\n if set(entities.keys()) - set(defined):\n continue\n # Iterate through the provided path patterns\n new_path = pattern\n optional_patterns = re.findall(r'\\[(.*?)\\]', pattern)\n # First build from optional patterns if possible\n for optional_pattern in optional_patterns:\n optional_chunk = replace_entities(entities, optional_pattern) or ''\n new_path = new_path.replace('[%s]' % optional_pattern,\n optional_chunk)\n # Replace remaining entities\n new_path = replace_entities(entities, new_path)\n\n if new_path:\n return new_path\n\n return None\n\n\ndef write_contents_to_file(path, contents=None, link_to=None,\n content_mode='text', root=None, conflicts='fail'):\n \"\"\"\n Uses provided filename patterns to write contents to a new path, given\n a corresponding entity map.\n\n Args:\n path (str): Destination path of the desired contents.\n contents (str): Raw text or binary encoded string of contents to write\n to the new path.\n link_to (str): Optional path with which to create a symbolic link to.\n Used as an alternative to and takes priority over the contents\n argument.\n content_mode (str): Either 'text' or 'binary' to indicate the writing\n mode for the new file. Only relevant if contents is provided.\n root (str): Optional root directory that all patterns are relative\n to. Defaults to current working directory.\n conflicts (str): One of 'fail', 'skip', 'overwrite', or 'append'\n that defines the desired action when the output path already\n exists. 'fail' raises an exception; 'skip' does nothing;\n 'overwrite' overwrites the existing file; 'append' adds a suffix\n to each file copy, starting with 1. 
Default is 'fail'.\n \"\"\"\n\n if root is None and not isabs(path):\n root = os.getcwd()\n\n if root:\n path = join(root, path)\n\n if exists(path) or islink(path):\n if conflicts == 'fail':\n msg = 'A file at path {} already exists.'\n raise ValueError(msg.format(path))\n elif conflicts == 'skip':\n msg = 'A file at path {} already exists, skipping writing file.'\n warnings.warn(msg.format(path))\n return\n elif conflicts == 'overwrite':\n if isdir(path):\n warnings.warn('New path is a directory, not going to '\n 'overwrite it, skipping instead.')\n return\n os.remove(path)\n elif conflicts == 'append':\n i = 1\n while i < sys.maxsize:\n path_splits = splitext(path)\n path_splits[0] = path_splits[0] + '_%d' % i\n appended_filename = os.extsep.join(path_splits)\n if not exists(appended_filename) and \\\n not islink(appended_filename):\n path = appended_filename\n break\n i += 1\n else:\n raise ValueError('Did not provide a valid conflicts parameter')\n\n if not exists(dirname(path)):\n os.makedirs(dirname(path))\n\n if link_to:\n os.symlink(link_to, path)\n elif contents:\n mode = 'wb' if content_mode == 'binary' else 'w'\n with open(path, mode) as f:\n f.write(contents)\n else:\n raise ValueError('One of contents or link_to must be provided.')\n", "path": "bids/layout/writing.py"}], "after_files": [{"content": "'''\nContains helper functions that involve writing operations.\n'''\n\nimport warnings\nimport os\nimport re\nimport sys\nfrom ..utils import splitext, listify\nfrom os.path import join, dirname, exists, islink, isabs, isdir\n\n\n__all__ = ['replace_entities', 'build_path', 'write_contents_to_file']\n\n\ndef replace_entities(entities, pattern):\n \"\"\"\n Replaces all entity names in a given pattern with the corresponding\n values provided by entities.\n\n Args:\n entities (dict): A dictionary mapping entity names to entity values.\n pattern (str): A path pattern that contains entity names denoted\n by curly braces. Optional portions denoted by square braces.\n For example: 'sub-{subject}/[var-{name}/]{id}.csv'\n Accepted entity values, using regex matching, denoted within angle\n brackets.\n For example: 'sub-{subject<01|02>}/{task}.csv'\n\n Returns:\n A new string with the entity values inserted where entity names\n were denoted in the provided pattern.\n \"\"\"\n entities = entities.copy() # make a local copy, since dicts are mutable\n ents = re.findall(r'\\{(.*?)\\}', pattern)\n new_path = pattern\n for ent in ents:\n match = re.search(r'([^|<]+)(<.*?>)?(\\|.*)?', ent)\n if match is None:\n return None\n name, valid, default = match.groups()\n default = default[1:] if default is not None else default\n\n if name in entities and valid is not None:\n ent_val = str(entities[name])\n if not re.match(valid[1:-1], ent_val):\n if default is None:\n return None\n entities[name] = default\n\n ent_val = entities.get(name, default)\n if ent_val is None:\n return None\n new_path = new_path.replace('{%s}' % ent, str(ent_val))\n\n return new_path\n\n\ndef build_path(entities, path_patterns, strict=False):\n \"\"\"\n Constructs a path given a set of entities and a list of potential\n filename patterns to use.\n\n Args:\n entities (dict): A dictionary mapping entity names to entity values.\n path_patterns (str, list): One or more filename patterns to write\n the file to. Entities should be represented by the name\n surrounded by curly braces. Optional portions of the patterns\n should be denoted by square brackets. 
Entities that require a\n specific value for the pattern to match can pass them inside\n carets. Default values can be assigned by specifying a string after\n the pipe operator. E.g., (e.g., {type<image>|bold} would only match\n the pattern if the entity 'type' was passed and its value is\n \"image\", otherwise the default value \"bold\" will be used).\n Example 1: 'sub-{subject}/[var-{name}/]{id}.csv'\n Result 2: 'sub-01/var-SES/1045.csv'\n strict (bool): If True, all passed entities must be matched inside a\n pattern in order to be a valid match. If False, extra entities will\n be ignored so long as all mandatory entities are found.\n\n Returns:\n A constructed path for this file based on the provided patterns.\n \"\"\"\n path_patterns = listify(path_patterns)\n\n # Loop over available patherns, return first one that matches all\n for pattern in path_patterns:\n # If strict, all entities must be contained in the pattern\n if strict:\n defined = re.findall(r'\\{(.*?)(?:<[^>]+>)?\\}', pattern)\n if set(entities.keys()) - set(defined):\n continue\n # Iterate through the provided path patterns\n new_path = pattern\n optional_patterns = re.findall(r'\\[(.*?)\\]', pattern)\n # First build from optional patterns if possible\n for optional_pattern in optional_patterns:\n optional_chunk = replace_entities(entities, optional_pattern) or ''\n new_path = new_path.replace('[%s]' % optional_pattern,\n optional_chunk)\n # Replace remaining entities\n new_path = replace_entities(entities, new_path)\n\n if new_path:\n return new_path\n\n return None\n\n\ndef write_contents_to_file(path, contents=None, link_to=None,\n content_mode='text', root=None, conflicts='fail'):\n \"\"\"\n Uses provided filename patterns to write contents to a new path, given\n a corresponding entity map.\n\n Args:\n path (str): Destination path of the desired contents.\n contents (str): Raw text or binary encoded string of contents to write\n to the new path.\n link_to (str): Optional path with which to create a symbolic link to.\n Used as an alternative to and takes priority over the contents\n argument.\n content_mode (str): Either 'text' or 'binary' to indicate the writing\n mode for the new file. Only relevant if contents is provided.\n root (str): Optional root directory that all patterns are relative\n to. Defaults to current working directory.\n conflicts (str): One of 'fail', 'skip', 'overwrite', or 'append'\n that defines the desired action when the output path already\n exists. 'fail' raises an exception; 'skip' does nothing;\n 'overwrite' overwrites the existing file; 'append' adds a suffix\n to each file copy, starting with 1. 
Default is 'fail'.\n \"\"\"\n\n if root is None and not isabs(path):\n root = os.getcwd()\n\n if root:\n path = join(root, path)\n\n if exists(path) or islink(path):\n if conflicts == 'fail':\n msg = 'A file at path {} already exists.'\n raise ValueError(msg.format(path))\n elif conflicts == 'skip':\n msg = 'A file at path {} already exists, skipping writing file.'\n warnings.warn(msg.format(path))\n return\n elif conflicts == 'overwrite':\n if isdir(path):\n warnings.warn('New path is a directory, not going to '\n 'overwrite it, skipping instead.')\n return\n os.remove(path)\n elif conflicts == 'append':\n i = 1\n while i < sys.maxsize:\n path_splits = splitext(path)\n path_splits[0] = path_splits[0] + '_%d' % i\n appended_filename = os.extsep.join(path_splits)\n if not exists(appended_filename) and \\\n not islink(appended_filename):\n path = appended_filename\n break\n i += 1\n else:\n raise ValueError('Did not provide a valid conflicts parameter')\n\n if not exists(dirname(path)):\n os.makedirs(dirname(path))\n\n if link_to:\n os.symlink(link_to, path)\n elif contents:\n mode = 'wb' if content_mode == 'binary' else 'w'\n with open(path, mode) as f:\n f.write(contents)\n else:\n raise ValueError('One of contents or link_to must be provided.')\n", "path": "bids/layout/writing.py"}]} | 2,282 | 116 |
gh_patches_debug_17236 | rasdani/github-patches | git_diff | pyca__cryptography-3638 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update release automation for new wheel builder
Once #3636 is merged we need to update the release automation to trigger the new wheel builder and download the artifacts.
--- END ISSUE ---
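As background for the change requested above, the sketch below shows one way to trigger a token-authenticated Jenkins job and poll it for completion using `requests`. The job URL, the `token`/`cause` parameters and the JSON fields are illustrative assumptions, not the project's actual wheel-builder configuration.

```python
# Hypothetical sketch only: the job URL and parameter names are placeholders.
import time

import requests

JENKINS_URL = "https://ci.example.org/job/wheel-builder"  # assumed URL


def trigger_and_wait(token, cause, poll_interval=10):
    session = requests.Session()
    # Remote build trigger via a job token, instead of HTTP basic auth.
    response = session.get(
        "{0}/build".format(JENKINS_URL),
        params={"token": token, "cause": cause},
    )
    response.raise_for_status()
    while True:
        status = session.get(
            "{0}/lastBuild/api/json/".format(JENKINS_URL),
            headers={"Accept": "application/json"},
        )
        status.raise_for_status()
        if not status.json()["building"]:
            return status.json()["result"]
        time.sleep(poll_interval)
```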
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `release.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import getpass
8 import io
9 import os
10 import subprocess
11 import time
12
13 import click
14
15 from clint.textui.progress import Bar as ProgressBar
16
17 import requests
18
19
20 JENKINS_URL = "https://jenkins.cryptography.io/job/cryptography-wheel-builder"
21
22
23 def run(*args, **kwargs):
24 kwargs.setdefault("stderr", subprocess.STDOUT)
25 subprocess.check_output(list(args), **kwargs)
26
27
28 def wait_for_build_completed(session):
29 # Wait 20 seconds before actually checking if the build is complete, to
30 # ensure that it had time to really start.
31 time.sleep(20)
32 while True:
33 response = session.get(
34 "{0}/lastBuild/api/json/".format(JENKINS_URL),
35 headers={
36 "Accept": "application/json",
37 }
38 )
39 response.raise_for_status()
40 if not response.json()["building"]:
41 assert response.json()["result"] == "SUCCESS"
42 break
43 time.sleep(0.1)
44
45
46 def download_artifacts(session):
47 response = session.get(
48 "{0}/lastBuild/api/json/".format(JENKINS_URL),
49 headers={
50 "Accept": "application/json"
51 }
52 )
53 response.raise_for_status()
54 assert not response.json()["building"]
55 assert response.json()["result"] == "SUCCESS"
56
57 paths = []
58
59 last_build_number = response.json()["number"]
60 for run in response.json()["runs"]:
61 if run["number"] != last_build_number:
62 print(
63 "Skipping {0} as it is not from the latest build ({1})".format(
64 run["url"], last_build_number
65 )
66 )
67 continue
68
69 response = session.get(
70 run["url"] + "api/json/",
71 headers={
72 "Accept": "application/json",
73 }
74 )
75 response.raise_for_status()
76 for artifact in response.json()["artifacts"]:
77 response = session.get(
78 "{0}artifact/{1}".format(run["url"], artifact["relativePath"]),
79 stream=True
80 )
81 assert response.headers["content-length"]
82 print("Downloading {0}".format(artifact["fileName"]))
83 bar = ProgressBar(
84 expected_size=int(response.headers["content-length"]),
85 filled_char="="
86 )
87 content = io.BytesIO()
88 for data in response.iter_content(chunk_size=8192):
89 content.write(data)
90 bar.show(content.tell())
91 assert bar.expected_size == content.tell()
92 bar.done()
93 out_path = os.path.join(
94 os.path.dirname(__file__),
95 "dist",
96 artifact["fileName"],
97 )
98 with open(out_path, "wb") as f:
99 f.write(content.getvalue())
100 paths.append(out_path)
101 return paths
102
103
104 @click.command()
105 @click.argument("version")
106 def release(version):
107 """
108 ``version`` should be a string like '0.4' or '1.0'.
109 """
110 run("git", "tag", "-s", version, "-m", "{0} release".format(version))
111 run("git", "push", "--tags")
112
113 run("python", "setup.py", "sdist")
114 run("python", "setup.py", "sdist", "bdist_wheel", cwd="vectors/")
115
116 run(
117 "twine", "upload", "-s", "dist/cryptography-{0}*".format(version),
118 "vectors/dist/cryptography_vectors-{0}*".format(version), shell=True
119 )
120
121 session = requests.Session()
122
123 # This tells the CDN to delete the cached response for the URL. We do this
124 # so that the Jenkins builders will see the new sdist immediately when they
125 # go to build the wheels.
126 response = session.request(
127 "PURGE", "https://pypi.python.org/simple/cryptography/"
128 )
129 response.raise_for_status()
130
131 username = getpass.getpass("Input the GitHub/Jenkins username: ")
132 token = getpass.getpass("Input the Jenkins token: ")
133 response = session.post(
134 "{0}/build".format(JENKINS_URL),
135 auth=requests.auth.HTTPBasicAuth(
136 username, token
137 ),
138 params={
139 "cause": "Building wheels for {0}".format(version)
140 }
141 )
142 response.raise_for_status()
143 wait_for_build_completed(session)
144 paths = download_artifacts(session)
145 run("twine", "upload", " ".join(paths))
146
147
148 if __name__ == "__main__":
149 release()
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/release.py b/release.py
--- a/release.py
+++ b/release.py
@@ -17,7 +17,10 @@
import requests
-JENKINS_URL = "https://jenkins.cryptography.io/job/cryptography-wheel-builder"
+JENKINS_URL = (
+ "https://ci.cryptography.io/job/cryptography-support-jobs/"
+ "job/wheel-builder"
+)
def run(*args, **kwargs):
@@ -128,14 +131,11 @@
)
response.raise_for_status()
- username = getpass.getpass("Input the GitHub/Jenkins username: ")
token = getpass.getpass("Input the Jenkins token: ")
- response = session.post(
+ response = session.get(
"{0}/build".format(JENKINS_URL),
- auth=requests.auth.HTTPBasicAuth(
- username, token
- ),
params={
+ "token": token,
"cause": "Building wheels for {0}".format(version)
}
)
| {"golden_diff": "diff --git a/release.py b/release.py\n--- a/release.py\n+++ b/release.py\n@@ -17,7 +17,10 @@\n import requests\n \n \n-JENKINS_URL = \"https://jenkins.cryptography.io/job/cryptography-wheel-builder\"\n+JENKINS_URL = (\n+ \"https://ci.cryptography.io/job/cryptography-support-jobs/\"\n+ \"job/wheel-builder\"\n+)\n \n \n def run(*args, **kwargs):\n@@ -128,14 +131,11 @@\n )\n response.raise_for_status()\n \n- username = getpass.getpass(\"Input the GitHub/Jenkins username: \")\n token = getpass.getpass(\"Input the Jenkins token: \")\n- response = session.post(\n+ response = session.get(\n \"{0}/build\".format(JENKINS_URL),\n- auth=requests.auth.HTTPBasicAuth(\n- username, token\n- ),\n params={\n+ \"token\": token,\n \"cause\": \"Building wheels for {0}\".format(version)\n }\n )\n", "issue": "Update release automation for new wheel builder\nOnce #3636 is merged we need to update the release automation to trigger the new wheel builder and download the artifacts.\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport getpass\nimport io\nimport os\nimport subprocess\nimport time\n\nimport click\n\nfrom clint.textui.progress import Bar as ProgressBar\n\nimport requests\n\n\nJENKINS_URL = \"https://jenkins.cryptography.io/job/cryptography-wheel-builder\"\n\n\ndef run(*args, **kwargs):\n kwargs.setdefault(\"stderr\", subprocess.STDOUT)\n subprocess.check_output(list(args), **kwargs)\n\n\ndef wait_for_build_completed(session):\n # Wait 20 seconds before actually checking if the build is complete, to\n # ensure that it had time to really start.\n time.sleep(20)\n while True:\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\",\n }\n )\n response.raise_for_status()\n if not response.json()[\"building\"]:\n assert response.json()[\"result\"] == \"SUCCESS\"\n break\n time.sleep(0.1)\n\n\ndef download_artifacts(session):\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\"\n }\n )\n response.raise_for_status()\n assert not response.json()[\"building\"]\n assert response.json()[\"result\"] == \"SUCCESS\"\n\n paths = []\n\n last_build_number = response.json()[\"number\"]\n for run in response.json()[\"runs\"]:\n if run[\"number\"] != last_build_number:\n print(\n \"Skipping {0} as it is not from the latest build ({1})\".format(\n run[\"url\"], last_build_number\n )\n )\n continue\n\n response = session.get(\n run[\"url\"] + \"api/json/\",\n headers={\n \"Accept\": \"application/json\",\n }\n )\n response.raise_for_status()\n for artifact in response.json()[\"artifacts\"]:\n response = session.get(\n \"{0}artifact/{1}\".format(run[\"url\"], artifact[\"relativePath\"]),\n stream=True\n )\n assert response.headers[\"content-length\"]\n print(\"Downloading {0}\".format(artifact[\"fileName\"]))\n bar = ProgressBar(\n expected_size=int(response.headers[\"content-length\"]),\n filled_char=\"=\"\n )\n content = io.BytesIO()\n for data in response.iter_content(chunk_size=8192):\n content.write(data)\n bar.show(content.tell())\n assert bar.expected_size == content.tell()\n bar.done()\n out_path = os.path.join(\n os.path.dirname(__file__),\n \"dist\",\n artifact[\"fileName\"],\n )\n with open(out_path, \"wb\") as f:\n 
f.write(content.getvalue())\n paths.append(out_path)\n return paths\n\n\[email protected]()\[email protected](\"version\")\ndef release(version):\n \"\"\"\n ``version`` should be a string like '0.4' or '1.0'.\n \"\"\"\n run(\"git\", \"tag\", \"-s\", version, \"-m\", \"{0} release\".format(version))\n run(\"git\", \"push\", \"--tags\")\n\n run(\"python\", \"setup.py\", \"sdist\")\n run(\"python\", \"setup.py\", \"sdist\", \"bdist_wheel\", cwd=\"vectors/\")\n\n run(\n \"twine\", \"upload\", \"-s\", \"dist/cryptography-{0}*\".format(version),\n \"vectors/dist/cryptography_vectors-{0}*\".format(version), shell=True\n )\n\n session = requests.Session()\n\n # This tells the CDN to delete the cached response for the URL. We do this\n # so that the Jenkins builders will see the new sdist immediately when they\n # go to build the wheels.\n response = session.request(\n \"PURGE\", \"https://pypi.python.org/simple/cryptography/\"\n )\n response.raise_for_status()\n\n username = getpass.getpass(\"Input the GitHub/Jenkins username: \")\n token = getpass.getpass(\"Input the Jenkins token: \")\n response = session.post(\n \"{0}/build\".format(JENKINS_URL),\n auth=requests.auth.HTTPBasicAuth(\n username, token\n ),\n params={\n \"cause\": \"Building wheels for {0}\".format(version)\n }\n )\n response.raise_for_status()\n wait_for_build_completed(session)\n paths = download_artifacts(session)\n run(\"twine\", \"upload\", \" \".join(paths))\n\n\nif __name__ == \"__main__\":\n release()\n", "path": "release.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport getpass\nimport io\nimport os\nimport subprocess\nimport time\n\nimport click\n\nfrom clint.textui.progress import Bar as ProgressBar\n\nimport requests\n\n\nJENKINS_URL = (\n \"https://ci.cryptography.io/job/cryptography-support-jobs/\"\n \"job/wheel-builder\"\n)\n\n\ndef run(*args, **kwargs):\n kwargs.setdefault(\"stderr\", subprocess.STDOUT)\n subprocess.check_output(list(args), **kwargs)\n\n\ndef wait_for_build_completed(session):\n # Wait 20 seconds before actually checking if the build is complete, to\n # ensure that it had time to really start.\n time.sleep(20)\n while True:\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\",\n }\n )\n response.raise_for_status()\n if not response.json()[\"building\"]:\n assert response.json()[\"result\"] == \"SUCCESS\"\n break\n time.sleep(0.1)\n\n\ndef download_artifacts(session):\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\"\n }\n )\n response.raise_for_status()\n assert not response.json()[\"building\"]\n assert response.json()[\"result\"] == \"SUCCESS\"\n\n paths = []\n\n last_build_number = response.json()[\"number\"]\n for run in response.json()[\"runs\"]:\n if run[\"number\"] != last_build_number:\n print(\n \"Skipping {0} as it is not from the latest build ({1})\".format(\n run[\"url\"], last_build_number\n )\n )\n continue\n\n response = session.get(\n run[\"url\"] + \"api/json/\",\n headers={\n \"Accept\": \"application/json\",\n }\n )\n response.raise_for_status()\n for artifact in response.json()[\"artifacts\"]:\n response = session.get(\n \"{0}artifact/{1}\".format(run[\"url\"], artifact[\"relativePath\"]),\n 
stream=True\n )\n assert response.headers[\"content-length\"]\n print(\"Downloading {0}\".format(artifact[\"fileName\"]))\n bar = ProgressBar(\n expected_size=int(response.headers[\"content-length\"]),\n filled_char=\"=\"\n )\n content = io.BytesIO()\n for data in response.iter_content(chunk_size=8192):\n content.write(data)\n bar.show(content.tell())\n assert bar.expected_size == content.tell()\n bar.done()\n out_path = os.path.join(\n os.path.dirname(__file__),\n \"dist\",\n artifact[\"fileName\"],\n )\n with open(out_path, \"wb\") as f:\n f.write(content.getvalue())\n paths.append(out_path)\n return paths\n\n\[email protected]()\[email protected](\"version\")\ndef release(version):\n \"\"\"\n ``version`` should be a string like '0.4' or '1.0'.\n \"\"\"\n run(\"git\", \"tag\", \"-s\", version, \"-m\", \"{0} release\".format(version))\n run(\"git\", \"push\", \"--tags\")\n\n run(\"python\", \"setup.py\", \"sdist\")\n run(\"python\", \"setup.py\", \"sdist\", \"bdist_wheel\", cwd=\"vectors/\")\n\n run(\n \"twine\", \"upload\", \"-s\", \"dist/cryptography-{0}*\".format(version),\n \"vectors/dist/cryptography_vectors-{0}*\".format(version), shell=True\n )\n\n session = requests.Session()\n\n # This tells the CDN to delete the cached response for the URL. We do this\n # so that the Jenkins builders will see the new sdist immediately when they\n # go to build the wheels.\n response = session.request(\n \"PURGE\", \"https://pypi.python.org/simple/cryptography/\"\n )\n response.raise_for_status()\n\n token = getpass.getpass(\"Input the Jenkins token: \")\n response = session.get(\n \"{0}/build\".format(JENKINS_URL),\n params={\n \"token\": token,\n \"cause\": \"Building wheels for {0}\".format(version)\n }\n )\n response.raise_for_status()\n wait_for_build_completed(session)\n paths = download_artifacts(session)\n run(\"twine\", \"upload\", \" \".join(paths))\n\n\nif __name__ == \"__main__\":\n release()\n", "path": "release.py"}]} | 1,645 | 230 |
gh_patches_debug_24985 | rasdani/github-patches | git_diff | comic__grand-challenge.org-2348 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Archive Serializers use `id` rather than `pk`
Some of our serializers use `id` rather than `pk`; for consistency we should only use one, and that should be `pk`. Check the other serializers and see if this occurs elsewhere.
--- END ISSUE ---
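As a quick way to act on the last sentence of the issue, the snippet below is an illustrative scan (not project code) for serializers that still expose `"id"` in their `Meta.fields`; the `app` directory name is an assumption about the repository layout.

```python
# Illustrative helper, not part of the code base: report Meta.fields tuples
# that expose "id" so they can be switched to "pk".
import pathlib
import re

FIELDS_WITH_ID = re.compile(r"""fields\s*=\s*\([^)]*["']id["']""")

for path in pathlib.Path("app").rglob("serializers.py"):  # assumed layout
    text = path.read_text()
    for match in FIELDS_WITH_ID.finditer(text):
        line_number = text.count("\n", 0, match.start()) + 1
        print("{0}:{1}: {2}".format(path, line_number, match.group(0)))
```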
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/archives/serializers.py`
Content:
```
1 from django.db.transaction import on_commit
2 from guardian.shortcuts import get_objects_for_user
3 from rest_framework import serializers
4 from rest_framework.fields import ReadOnlyField, URLField
5 from rest_framework.relations import HyperlinkedRelatedField
6
7 from grandchallenge.archives.models import Archive, ArchiveItem
8 from grandchallenge.archives.tasks import (
9 start_archive_item_update_tasks,
10 update_archive_item_update_kwargs,
11 )
12 from grandchallenge.components.serializers import (
13 ComponentInterfaceValuePostSerializer,
14 ComponentInterfaceValueSerializer,
15 )
16 from grandchallenge.hanging_protocols.serializers import (
17 HangingProtocolSerializer,
18 )
19
20
21 class ArchiveItemSerializer(serializers.ModelSerializer):
22 archive = HyperlinkedRelatedField(
23 read_only=True, view_name="api:archive-detail"
24 )
25 values = ComponentInterfaceValueSerializer(many=True)
26
27 class Meta:
28 model = ArchiveItem
29 fields = ("id", "archive", "values")
30
31
32 class ArchiveSerializer(serializers.ModelSerializer):
33 algorithms = HyperlinkedRelatedField(
34 read_only=True, many=True, view_name="api:algorithm-detail"
35 )
36 logo = URLField(source="logo.x20.url", read_only=True)
37 url = URLField(source="get_absolute_url", read_only=True)
38 # Include the read only name for legacy clients
39 name = ReadOnlyField()
40 hanging_protocol = HangingProtocolSerializer()
41
42 class Meta:
43 model = Archive
44 fields = (
45 "id",
46 "name",
47 "title",
48 "algorithms",
49 "logo",
50 "description",
51 "api_url",
52 "url",
53 "hanging_protocol",
54 "view_content",
55 )
56
57
58 class ArchiveItemPostSerializer(ArchiveItemSerializer):
59 archive = HyperlinkedRelatedField(
60 queryset=Archive.objects.none(),
61 view_name="api:archive-detail",
62 write_only=True,
63 )
64
65 def __init__(self, *args, **kwargs):
66 super().__init__(*args, **kwargs)
67 self.fields["values"] = ComponentInterfaceValuePostSerializer(
68 many=True, context=self.context
69 )
70
71 if "request" in self.context:
72 user = self.context["request"].user
73
74 self.fields["archive"].queryset = get_objects_for_user(
75 user, "archives.use_archive", accept_global_perms=False
76 )
77
78 def update(self, instance, validated_data):
79 civs = validated_data.pop("values")
80
81 civ_pks_to_remove = set()
82 civ_pks_to_add = set()
83 upload_pks = {}
84
85 for civ in civs:
86 interface = civ.pop("interface", None)
87 upload_session = civ.pop("upload_session", None)
88 value = civ.pop("value", None)
89 image = civ.pop("image", None)
90 user_upload = civ.pop("user_upload", None)
91
92 update_archive_item_update_kwargs(
93 instance=instance,
94 interface=interface,
95 value=value,
96 image=image,
97 user_upload=user_upload,
98 upload_session=upload_session,
99 civ_pks_to_add=civ_pks_to_add,
100 civ_pks_to_remove=civ_pks_to_remove,
101 upload_pks=upload_pks,
102 )
103
104 on_commit(
105 start_archive_item_update_tasks.signature(
106 kwargs={
107 "archive_item_pk": instance.pk,
108 "civ_pks_to_add": list(civ_pks_to_add),
109 "civ_pks_to_remove": list(civ_pks_to_remove),
110 "upload_pks": upload_pks,
111 }
112 ).apply_async
113 )
114
115 return instance
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/grandchallenge/archives/serializers.py b/app/grandchallenge/archives/serializers.py
--- a/app/grandchallenge/archives/serializers.py
+++ b/app/grandchallenge/archives/serializers.py
@@ -11,7 +11,7 @@
)
from grandchallenge.components.serializers import (
ComponentInterfaceValuePostSerializer,
- ComponentInterfaceValueSerializer,
+ HyperlinkedComponentInterfaceValueSerializer,
)
from grandchallenge.hanging_protocols.serializers import (
HangingProtocolSerializer,
@@ -22,11 +22,11 @@
archive = HyperlinkedRelatedField(
read_only=True, view_name="api:archive-detail"
)
- values = ComponentInterfaceValueSerializer(many=True)
+ values = HyperlinkedComponentInterfaceValueSerializer(many=True)
class Meta:
model = ArchiveItem
- fields = ("id", "archive", "values")
+ fields = ("pk", "archive", "values")
class ArchiveSerializer(serializers.ModelSerializer):
@@ -42,7 +42,7 @@
class Meta:
model = Archive
fields = (
- "id",
+ "pk",
"name",
"title",
"algorithms",
| {"golden_diff": "diff --git a/app/grandchallenge/archives/serializers.py b/app/grandchallenge/archives/serializers.py\n--- a/app/grandchallenge/archives/serializers.py\n+++ b/app/grandchallenge/archives/serializers.py\n@@ -11,7 +11,7 @@\n )\n from grandchallenge.components.serializers import (\n ComponentInterfaceValuePostSerializer,\n- ComponentInterfaceValueSerializer,\n+ HyperlinkedComponentInterfaceValueSerializer,\n )\n from grandchallenge.hanging_protocols.serializers import (\n HangingProtocolSerializer,\n@@ -22,11 +22,11 @@\n archive = HyperlinkedRelatedField(\n read_only=True, view_name=\"api:archive-detail\"\n )\n- values = ComponentInterfaceValueSerializer(many=True)\n+ values = HyperlinkedComponentInterfaceValueSerializer(many=True)\n \n class Meta:\n model = ArchiveItem\n- fields = (\"id\", \"archive\", \"values\")\n+ fields = (\"pk\", \"archive\", \"values\")\n \n \n class ArchiveSerializer(serializers.ModelSerializer):\n@@ -42,7 +42,7 @@\n class Meta:\n model = Archive\n fields = (\n- \"id\",\n+ \"pk\",\n \"name\",\n \"title\",\n \"algorithms\",\n", "issue": "Archive Serializers use `id` rather than `pk`\nSome of our serializers use `id` rather than `pk`, for consistency we should only use one and that should be `pk`. Check the other serializers and see if this occurs elsewhere.\n", "before_files": [{"content": "from django.db.transaction import on_commit\nfrom guardian.shortcuts import get_objects_for_user\nfrom rest_framework import serializers\nfrom rest_framework.fields import ReadOnlyField, URLField\nfrom rest_framework.relations import HyperlinkedRelatedField\n\nfrom grandchallenge.archives.models import Archive, ArchiveItem\nfrom grandchallenge.archives.tasks import (\n start_archive_item_update_tasks,\n update_archive_item_update_kwargs,\n)\nfrom grandchallenge.components.serializers import (\n ComponentInterfaceValuePostSerializer,\n ComponentInterfaceValueSerializer,\n)\nfrom grandchallenge.hanging_protocols.serializers import (\n HangingProtocolSerializer,\n)\n\n\nclass ArchiveItemSerializer(serializers.ModelSerializer):\n archive = HyperlinkedRelatedField(\n read_only=True, view_name=\"api:archive-detail\"\n )\n values = ComponentInterfaceValueSerializer(many=True)\n\n class Meta:\n model = ArchiveItem\n fields = (\"id\", \"archive\", \"values\")\n\n\nclass ArchiveSerializer(serializers.ModelSerializer):\n algorithms = HyperlinkedRelatedField(\n read_only=True, many=True, view_name=\"api:algorithm-detail\"\n )\n logo = URLField(source=\"logo.x20.url\", read_only=True)\n url = URLField(source=\"get_absolute_url\", read_only=True)\n # Include the read only name for legacy clients\n name = ReadOnlyField()\n hanging_protocol = HangingProtocolSerializer()\n\n class Meta:\n model = Archive\n fields = (\n \"id\",\n \"name\",\n \"title\",\n \"algorithms\",\n \"logo\",\n \"description\",\n \"api_url\",\n \"url\",\n \"hanging_protocol\",\n \"view_content\",\n )\n\n\nclass ArchiveItemPostSerializer(ArchiveItemSerializer):\n archive = HyperlinkedRelatedField(\n queryset=Archive.objects.none(),\n view_name=\"api:archive-detail\",\n write_only=True,\n )\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields[\"values\"] = ComponentInterfaceValuePostSerializer(\n many=True, context=self.context\n )\n\n if \"request\" in self.context:\n user = self.context[\"request\"].user\n\n self.fields[\"archive\"].queryset = get_objects_for_user(\n user, \"archives.use_archive\", accept_global_perms=False\n )\n\n def update(self, instance, 
validated_data):\n civs = validated_data.pop(\"values\")\n\n civ_pks_to_remove = set()\n civ_pks_to_add = set()\n upload_pks = {}\n\n for civ in civs:\n interface = civ.pop(\"interface\", None)\n upload_session = civ.pop(\"upload_session\", None)\n value = civ.pop(\"value\", None)\n image = civ.pop(\"image\", None)\n user_upload = civ.pop(\"user_upload\", None)\n\n update_archive_item_update_kwargs(\n instance=instance,\n interface=interface,\n value=value,\n image=image,\n user_upload=user_upload,\n upload_session=upload_session,\n civ_pks_to_add=civ_pks_to_add,\n civ_pks_to_remove=civ_pks_to_remove,\n upload_pks=upload_pks,\n )\n\n on_commit(\n start_archive_item_update_tasks.signature(\n kwargs={\n \"archive_item_pk\": instance.pk,\n \"civ_pks_to_add\": list(civ_pks_to_add),\n \"civ_pks_to_remove\": list(civ_pks_to_remove),\n \"upload_pks\": upload_pks,\n }\n ).apply_async\n )\n\n return instance\n", "path": "app/grandchallenge/archives/serializers.py"}], "after_files": [{"content": "from django.db.transaction import on_commit\nfrom guardian.shortcuts import get_objects_for_user\nfrom rest_framework import serializers\nfrom rest_framework.fields import ReadOnlyField, URLField\nfrom rest_framework.relations import HyperlinkedRelatedField\n\nfrom grandchallenge.archives.models import Archive, ArchiveItem\nfrom grandchallenge.archives.tasks import (\n start_archive_item_update_tasks,\n update_archive_item_update_kwargs,\n)\nfrom grandchallenge.components.serializers import (\n ComponentInterfaceValuePostSerializer,\n HyperlinkedComponentInterfaceValueSerializer,\n)\nfrom grandchallenge.hanging_protocols.serializers import (\n HangingProtocolSerializer,\n)\n\n\nclass ArchiveItemSerializer(serializers.ModelSerializer):\n archive = HyperlinkedRelatedField(\n read_only=True, view_name=\"api:archive-detail\"\n )\n values = HyperlinkedComponentInterfaceValueSerializer(many=True)\n\n class Meta:\n model = ArchiveItem\n fields = (\"pk\", \"archive\", \"values\")\n\n\nclass ArchiveSerializer(serializers.ModelSerializer):\n algorithms = HyperlinkedRelatedField(\n read_only=True, many=True, view_name=\"api:algorithm-detail\"\n )\n logo = URLField(source=\"logo.x20.url\", read_only=True)\n url = URLField(source=\"get_absolute_url\", read_only=True)\n # Include the read only name for legacy clients\n name = ReadOnlyField()\n hanging_protocol = HangingProtocolSerializer()\n\n class Meta:\n model = Archive\n fields = (\n \"pk\",\n \"name\",\n \"title\",\n \"algorithms\",\n \"logo\",\n \"description\",\n \"api_url\",\n \"url\",\n \"hanging_protocol\",\n \"view_content\",\n )\n\n\nclass ArchiveItemPostSerializer(ArchiveItemSerializer):\n archive = HyperlinkedRelatedField(\n queryset=Archive.objects.none(),\n view_name=\"api:archive-detail\",\n write_only=True,\n )\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields[\"values\"] = ComponentInterfaceValuePostSerializer(\n many=True, context=self.context\n )\n\n if \"request\" in self.context:\n user = self.context[\"request\"].user\n\n self.fields[\"archive\"].queryset = get_objects_for_user(\n user, \"archives.use_archive\", accept_global_perms=False\n )\n\n def update(self, instance, validated_data):\n civs = validated_data.pop(\"values\")\n\n civ_pks_to_remove = set()\n civ_pks_to_add = set()\n upload_pks = {}\n\n for civ in civs:\n interface = civ.pop(\"interface\", None)\n upload_session = civ.pop(\"upload_session\", None)\n value = civ.pop(\"value\", None)\n image = civ.pop(\"image\", None)\n user_upload = 
civ.pop(\"user_upload\", None)\n\n update_archive_item_update_kwargs(\n instance=instance,\n interface=interface,\n value=value,\n image=image,\n user_upload=user_upload,\n upload_session=upload_session,\n civ_pks_to_add=civ_pks_to_add,\n civ_pks_to_remove=civ_pks_to_remove,\n upload_pks=upload_pks,\n )\n\n on_commit(\n start_archive_item_update_tasks.signature(\n kwargs={\n \"archive_item_pk\": instance.pk,\n \"civ_pks_to_add\": list(civ_pks_to_add),\n \"civ_pks_to_remove\": list(civ_pks_to_remove),\n \"upload_pks\": upload_pks,\n }\n ).apply_async\n )\n\n return instance\n", "path": "app/grandchallenge/archives/serializers.py"}]} | 1,299 | 268 |
gh_patches_debug_32080 | rasdani/github-patches | git_diff | ManageIQ__integration_tests-4789 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Freeze.py screws up test running
The virtualenv that is left in the requirements/ dir seems to interfere with normal operations, so I always need to delete it; perhaps we need some ignore somewhere or need to place it elsewhere
```
../default/lib/python2.7/site-packages/py/_path/common.py:367: in visit
for x in Visitor(fil, rec, ignore, bf, sort).gen(self):
../default/lib/python2.7/site-packages/py/_path/common.py:416: in gen
for p in self.gen(subdir):
../default/lib/python2.7/site-packages/py/_path/common.py:416: in gen
for p in self.gen(subdir):
../default/lib/python2.7/site-packages/py/_path/common.py:416: in gen
for p in self.gen(subdir):
../default/lib/python2.7/site-packages/py/_path/common.py:416: in gen
for p in self.gen(subdir):
../default/lib/python2.7/site-packages/py/_path/common.py:416: in gen
for p in self.gen(subdir):
../default/lib/python2.7/site-packages/py/_path/common.py:416: in gen
for p in self.gen(subdir):
../default/lib/python2.7/site-packages/py/_path/common.py:416: in gen
for p in self.gen(subdir):
../default/lib/python2.7/site-packages/py/_path/common.py:406: in gen
if p.check(dir=1) and (rec is None or rec(p))])
../default/lib/python2.7/site-packages/_pytest/main.py:682: in _recurse
ihook = self.gethookproxy(path)
../default/lib/python2.7/site-packages/_pytest/main.py:587: in gethookproxy
my_conftestmodules = pm._getconftestmodules(fspath)
../default/lib/python2.7/site-packages/_pytest/config.py:339: in _getconftestmodules
mod = self._importconftest(conftestpath)
../default/lib/python2.7/site-packages/_pytest/config.py:375: in _importconftest
self.consider_conftest(mod)
../default/lib/python2.7/site-packages/_pytest/config.py:398: in consider_conftest
if self.register(conftestmodule, name=conftestmodule.__file__):
../default/lib/python2.7/site-packages/_pytest/config.py:250: in register
ret = super(PytestPluginManager, self).register(plugin, name)
../default/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:371: in register
hook._maybe_apply_history(hookimpl)
../default/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:768: in _maybe_apply_history
res = self._hookexec(self, [method], kwargs)
../default/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:339: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
../default/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:334: in <lambda>
_MultiCall(methods, kwargs, hook.spec_opts).execute()
../default/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:614: in execute
res = hook_impl.function(*args)
requirements/temporary_venv/lib/python2.7/site-packages/tests/contrib/appengine/conftest.py:45: in pytest_configure
if config.getoption('gae_sdk') is not None:
../default/lib/python2.7/site-packages/_pytest/config.py:1195: in getoption
raise ValueError("no option named %r" % (name,))
E ValueError: no option named 'gae_sdk'
```
--- END ISSUE ---
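One possible shape of a fix, sketched here for illustration only (the paths and the template file name are assumptions), is to build the throwaway virtualenv in a temporary directory outside the repository and always remove it, so pytest collection never walks its site-packages. Another option could be listing the directory in pytest's `norecursedirs` setting.

```python
# Illustrative sketch only: build the venv under a temp dir and clean it up.
import os
import shutil
import subprocess
import sys
import tempfile

venv = tempfile.mkdtemp(suffix="-freeze-venv")
try:
    subprocess.check_call([sys.executable, "-m", "virtualenv", venv])
    pip = os.path.join(venv, "bin/pip")
    subprocess.check_call([pip, "install", "-U", "-r", "requirements/template.txt"])
    subprocess.check_call([pip, "freeze"])
finally:
    shutil.rmtree(venv)
```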
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `requirements/freeze.py`
Content:
```
1 #!/usr/bin/env python
2 """
3 outputs the frozen packages
4 """
5 import sys
6 import os
7 import argparse
8 import subprocess
9 parser = argparse.ArgumentParser(description=__doc__.strip())
10 parser.add_argument('--venv', default='requirements/temporary_venv')
11 parser.add_argument(
12 "--template", default="requirements/template.txt",)
13 parser.add_argument(
14 "--out", default=sys.stdout, type=argparse.FileType('w'),
15 help='the file where packages should be written to')
16
17
18 def main(args):
19 if not os.path.isdir(args.venv):
20 subprocess.check_call([
21 sys.executable, '-m', 'virtualenv', args.venv
22 ])
23 subprocess.check_call([
24 os.path.join(args.venv, 'bin/pip'),
25 'install', '-U', '-r', args.template])
26
27 subprocess.check_call([
28 os.path.join(args.venv, 'bin/pip'), 'freeze'
29 ], stdout=args.out)
30
31
32 if __name__ == '__main__':
33 main(parser.parse_args())
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/requirements/freeze.py b/requirements/freeze.py
--- a/requirements/freeze.py
+++ b/requirements/freeze.py
@@ -2,31 +2,52 @@
"""
outputs the frozen packages
"""
+from __future__ import print_function
import sys
import os
import argparse
import subprocess
+import tempfile
+import shutil
parser = argparse.ArgumentParser(description=__doc__.strip())
-parser.add_argument('--venv', default='requirements/temporary_venv')
+parser.add_argument('--venv', default=None)
+parser.add_argument('--keep-venv', action='store_true')
parser.add_argument(
"--template", default="requirements/template.txt",)
parser.add_argument(
- "--out", default=sys.stdout, type=argparse.FileType('w'),
+ "--out", default=None,
help='the file where packages should be written to')
def main(args):
- if not os.path.isdir(args.venv):
+ if args.venv is None:
+ args.venv = tempfile.mkdtemp(suffix='-miq-QE-rebuild-venv')
+
+ try:
+ if not os.path.isdir(os.path.join(args.venv, 'bin')):
+ subprocess.check_call([
+ sys.executable, '-m', 'virtualenv', args.venv
+ ])
subprocess.check_call([
- sys.executable, '-m', 'virtualenv', args.venv
- ])
- subprocess.check_call([
- os.path.join(args.venv, 'bin/pip'),
- 'install', '-U', '-r', args.template])
+ os.path.join(args.venv, 'bin/pip'),
+ 'install', '-U', '-r', args.template])
+
+ if args.out is None:
+ subprocess.check_call([
+ os.path.join(args.venv, 'bin/pip'), 'freeze'
+ ], stdout=sys.stdout)
+ else:
+ with open(args.out) as out:
+ subprocess.check_call([
+ os.path.join(args.venv, 'bin/pip'), 'freeze'
+ ], stdout=out)
- subprocess.check_call([
- os.path.join(args.venv, 'bin/pip'), 'freeze'
- ], stdout=args.out)
+ subprocess.check_call([
+ os.path.join(args.venv, 'bin/pip'), 'freeze'
+ ], stdout=args.out)
+ finally:
+ if not args.keep_venv:
+ shutil.rmtree(args.venv)
if __name__ == '__main__':
| {"golden_diff": "diff --git a/requirements/freeze.py b/requirements/freeze.py\n--- a/requirements/freeze.py\n+++ b/requirements/freeze.py\n@@ -2,31 +2,52 @@\n \"\"\"\n outputs the frozen packages\n \"\"\"\n+from __future__ import print_function\n import sys\n import os\n import argparse\n import subprocess\n+import tempfile\n+import shutil\n parser = argparse.ArgumentParser(description=__doc__.strip())\n-parser.add_argument('--venv', default='requirements/temporary_venv')\n+parser.add_argument('--venv', default=None)\n+parser.add_argument('--keep-venv', action='store_true')\n parser.add_argument(\n \"--template\", default=\"requirements/template.txt\",)\n parser.add_argument(\n- \"--out\", default=sys.stdout, type=argparse.FileType('w'),\n+ \"--out\", default=None,\n help='the file where packages should be written to')\n \n \n def main(args):\n- if not os.path.isdir(args.venv):\n+ if args.venv is None:\n+ args.venv = tempfile.mkdtemp(suffix='-miq-QE-rebuild-venv')\n+\n+ try:\n+ if not os.path.isdir(os.path.join(args.venv, 'bin')):\n+ subprocess.check_call([\n+ sys.executable, '-m', 'virtualenv', args.venv\n+ ])\n subprocess.check_call([\n- sys.executable, '-m', 'virtualenv', args.venv\n- ])\n- subprocess.check_call([\n- os.path.join(args.venv, 'bin/pip'),\n- 'install', '-U', '-r', args.template])\n+ os.path.join(args.venv, 'bin/pip'),\n+ 'install', '-U', '-r', args.template])\n+\n+ if args.out is None:\n+ subprocess.check_call([\n+ os.path.join(args.venv, 'bin/pip'), 'freeze'\n+ ], stdout=sys.stdout)\n+ else:\n+ with open(args.out) as out:\n+ subprocess.check_call([\n+ os.path.join(args.venv, 'bin/pip'), 'freeze'\n+ ], stdout=out)\n \n- subprocess.check_call([\n- os.path.join(args.venv, 'bin/pip'), 'freeze'\n- ], stdout=args.out)\n+ subprocess.check_call([\n+ os.path.join(args.venv, 'bin/pip'), 'freeze'\n+ ], stdout=args.out)\n+ finally:\n+ if not args.keep_venv:\n+ shutil.rmtree(args.venv)\n \n \n if __name__ == '__main__':\n", "issue": "Freeze.py screws up test running\nThe virtualenv that is left in requirments/ dir seems to interfere with normal operations so I always need to delete it, perhaps we need some ignore somewhere or need to place it elsewhere\r\n\r\n```\r\n../default/lib/python2.7/site-packages/py/_path/common.py:367: in visit\r\n for x in Visitor(fil, rec, ignore, bf, sort).gen(self):\r\n../default/lib/python2.7/site-packages/py/_path/common.py:416: in gen\r\n for p in self.gen(subdir):\r\n../default/lib/python2.7/site-packages/py/_path/common.py:416: in gen\r\n for p in self.gen(subdir):\r\n../default/lib/python2.7/site-packages/py/_path/common.py:416: in gen\r\n for p in self.gen(subdir):\r\n../default/lib/python2.7/site-packages/py/_path/common.py:416: in gen\r\n for p in self.gen(subdir):\r\n../default/lib/python2.7/site-packages/py/_path/common.py:416: in gen\r\n for p in self.gen(subdir):\r\n../default/lib/python2.7/site-packages/py/_path/common.py:416: in gen\r\n for p in self.gen(subdir):\r\n../default/lib/python2.7/site-packages/py/_path/common.py:416: in gen\r\n for p in self.gen(subdir):\r\n../default/lib/python2.7/site-packages/py/_path/common.py:406: in gen\r\n if p.check(dir=1) and (rec is None or rec(p))])\r\n../default/lib/python2.7/site-packages/_pytest/main.py:682: in _recurse\r\n ihook = self.gethookproxy(path)\r\n../default/lib/python2.7/site-packages/_pytest/main.py:587: in gethookproxy\r\n my_conftestmodules = pm._getconftestmodules(fspath)\r\n../default/lib/python2.7/site-packages/_pytest/config.py:339: in _getconftestmodules\r\n mod = 
self._importconftest(conftestpath)\r\n../default/lib/python2.7/site-packages/_pytest/config.py:375: in _importconftest\r\n self.consider_conftest(mod)\r\n../default/lib/python2.7/site-packages/_pytest/config.py:398: in consider_conftest\r\n if self.register(conftestmodule, name=conftestmodule.__file__):\r\n../default/lib/python2.7/site-packages/_pytest/config.py:250: in register\r\n ret = super(PytestPluginManager, self).register(plugin, name)\r\n../default/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:371: in register\r\n hook._maybe_apply_history(hookimpl)\r\n../default/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:768: in _maybe_apply_history\r\n res = self._hookexec(self, [method], kwargs)\r\n../default/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:339: in _hookexec\r\n return self._inner_hookexec(hook, methods, kwargs)\r\n../default/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:334: in <lambda>\r\n _MultiCall(methods, kwargs, hook.spec_opts).execute()\r\n../default/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:614: in execute\r\n res = hook_impl.function(*args)\r\nrequirements/temporary_venv/lib/python2.7/site-packages/tests/contrib/appengine/conftest.py:45: in pytest_configure\r\n if config.getoption('gae_sdk') is not None:\r\n../default/lib/python2.7/site-packages/_pytest/config.py:1195: in getoption\r\n raise ValueError(\"no option named %r\" % (name,))\r\nE ValueError: no option named 'gae_sdk'\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"\noutputs the frozen packages\n\"\"\"\nimport sys\nimport os\nimport argparse\nimport subprocess\nparser = argparse.ArgumentParser(description=__doc__.strip())\nparser.add_argument('--venv', default='requirements/temporary_venv')\nparser.add_argument(\n \"--template\", default=\"requirements/template.txt\",)\nparser.add_argument(\n \"--out\", default=sys.stdout, type=argparse.FileType('w'),\n help='the file where packages should be written to')\n\n\ndef main(args):\n if not os.path.isdir(args.venv):\n subprocess.check_call([\n sys.executable, '-m', 'virtualenv', args.venv\n ])\n subprocess.check_call([\n os.path.join(args.venv, 'bin/pip'),\n 'install', '-U', '-r', args.template])\n\n subprocess.check_call([\n os.path.join(args.venv, 'bin/pip'), 'freeze'\n ], stdout=args.out)\n\n\nif __name__ == '__main__':\n main(parser.parse_args())\n", "path": "requirements/freeze.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"\noutputs the frozen packages\n\"\"\"\nfrom __future__ import print_function\nimport sys\nimport os\nimport argparse\nimport subprocess\nimport tempfile\nimport shutil\nparser = argparse.ArgumentParser(description=__doc__.strip())\nparser.add_argument('--venv', default=None)\nparser.add_argument('--keep-venv', action='store_true')\nparser.add_argument(\n \"--template\", default=\"requirements/template.txt\",)\nparser.add_argument(\n \"--out\", default=None,\n help='the file where packages should be written to')\n\n\ndef main(args):\n if args.venv is None:\n args.venv = tempfile.mkdtemp(suffix='-miq-QE-rebuild-venv')\n\n try:\n if not os.path.isdir(os.path.join(args.venv, 'bin')):\n subprocess.check_call([\n sys.executable, '-m', 'virtualenv', args.venv\n ])\n subprocess.check_call([\n os.path.join(args.venv, 'bin/pip'),\n 'install', '-U', '-r', args.template])\n\n if args.out is None:\n subprocess.check_call([\n os.path.join(args.venv, 'bin/pip'), 'freeze'\n ], stdout=sys.stdout)\n else:\n with 
open(args.out) as out:\n subprocess.check_call([\n os.path.join(args.venv, 'bin/pip'), 'freeze'\n ], stdout=out)\n\n subprocess.check_call([\n os.path.join(args.venv, 'bin/pip'), 'freeze'\n ], stdout=args.out)\n finally:\n if not args.keep_venv:\n shutil.rmtree(args.venv)\n\n\nif __name__ == '__main__':\n main(parser.parse_args())\n", "path": "requirements/freeze.py"}]} | 1,400 | 558 |
gh_patches_debug_26060 | rasdani/github-patches | git_diff | doccano__doccano-2099 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Broken: Importing and Exporting SequenceLabeling projects with relations
How to reproduce the behaviour
---------
<!-- Before submitting an issue, make sure to check the docs and closed issues and FAQ to see if any of the solutions work for you. https://github.com/doccano/doccano/wiki/Frequently-Asked-Questions -->
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
Your Environment
---------
<!-- Include details of your environment.-->
* Operating System: Docker
* Python Version Used: 3.8
* When you install doccano: 11/1/22
* How did you install doccano (Heroku button etc): docker-compose
I observed issues with the UI when interacting with relation labels. I am able to create a relation label between two span labels in the UI; however, the relation array gets exported empty when going through the Export Dataset -> JSONL(relation) path. Furthermore, issues occur when trying to import relations as well. The import dataset flow only takes one "Column Label" field. When that is set to label, all of the span label and relation label info is uploaded as metadata.

If the "Column Label" field is set to "entities" the span labels are imported and only the relation label data is uploaded as metadata.

The first goal would be that the export process exports in the format displayed when you select the JSONL(relation) option from Export Dataset.
ie.
```
{
"text": "Google was founded on September 4, 1998, by Larry Page and Sergey Brin.",
"entities": [
{
"id": 0,
"start_offset": 0,
"end_offset": 6,
"label": "ORG"
},
{
"id": 1,
"start_offset": 22,
"end_offset": 39,
"label": "DATE"
},
{
"id": 2,
"start_offset": 44,
"end_offset": 54,
"label": "PERSON"
},
{
"id": 3,
"start_offset": 59,
"end_offset": 70,
"label": "PERSON"
}
],
"relations": [
{
"id": 0,
"from_id": 0,
"to_id": 1,
"type": "foundedAt"
},
{
"id": 1,
"from_id": 0,
"to_id": 2,
"type": "foundedBy"
},
{
"id": 2,
"from_id": 0,
"to_id": 3,
"type": "foundedBy"
}
]
}
```
The second goal would be the ability to upload span labels and relation labels. Basically, Import Dataset should work with the Export Dataset -> JSONL(relation) results. I'll include a JSONL testing file for imports.
[relation_import_sample.zip](https://github.com/doccano/doccano/files/9913661/relation_import_sample.zip)
--- END ISSUE ---
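To make the expected round-trip concrete, the snippet below is an illustrative reader (not doccano code) for a JSONL(relation) file shaped like the sample above; it checks that every relation references entity ids defined on the same line, which is the invariant an importer would need to preserve. The filename is an assumption.

```python
# Illustrative check, not part of doccano: validate entity/relation ids per line.
import json

with open("relation_import_sample.jsonl") as handle:  # assumed filename
    for line_number, line in enumerate(handle, start=1):
        record = json.loads(line)
        entity_ids = {entity["id"] for entity in record.get("entities", [])}
        for relation in record.get("relations", []):
            if relation["from_id"] not in entity_ids or relation["to_id"] not in entity_ids:
                print("line {0}: relation {1} references an unknown entity".format(
                    line_number, relation["id"]))
```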
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/data_import/pipeline/label.py`
Content:
```
1 import abc
2 import uuid
3 from typing import Any, Optional
4
5 from pydantic import UUID4, BaseModel, ConstrainedStr, NonNegativeInt, root_validator
6
7 from .label_types import LabelTypes
8 from examples.models import Example
9 from label_types.models import CategoryType, LabelType, RelationType, SpanType
10 from labels.models import Category as CategoryModel
11 from labels.models import Label as LabelModel
12 from labels.models import Relation as RelationModel
13 from labels.models import Span as SpanModel
14 from labels.models import TextLabel as TextLabelModel
15 from projects.models import Project
16
17
18 class NonEmptyStr(ConstrainedStr):
19 min_length = 1
20
21
22 class Label(BaseModel, abc.ABC):
23 id: int = -1
24 uuid: UUID4
25 example_uuid: UUID4
26
27 def __init__(self, **data):
28 data["uuid"] = uuid.uuid4()
29 super().__init__(**data)
30
31 @abc.abstractmethod
32 def __lt__(self, other):
33 raise NotImplementedError()
34
35 @classmethod
36 def parse(cls, example_uuid: UUID4, obj: Any):
37 raise NotImplementedError()
38
39 @abc.abstractmethod
40 def create_type(self, project: Project) -> Optional[LabelType]:
41 raise NotImplementedError()
42
43 @abc.abstractmethod
44 def create(self, user, example: Example, types: LabelTypes, **kwargs) -> LabelModel:
45 raise NotImplementedError
46
47 def __hash__(self):
48 return hash(tuple(self.dict()))
49
50
51 class CategoryLabel(Label):
52 label: NonEmptyStr
53
54 def __lt__(self, other):
55 return self.label < other.label
56
57 @classmethod
58 def parse(cls, example_uuid: UUID4, obj: Any):
59 return cls(example_uuid=example_uuid, label=obj)
60
61 def create_type(self, project: Project) -> Optional[LabelType]:
62 return CategoryType(text=self.label, project=project)
63
64 def create(self, user, example: Example, types: LabelTypes, **kwargs):
65 return CategoryModel(uuid=self.uuid, user=user, example=example, label=types[self.label])
66
67
68 class SpanLabel(Label):
69 label: NonEmptyStr
70 start_offset: NonNegativeInt
71 end_offset: NonNegativeInt
72
73 def __lt__(self, other):
74 return self.start_offset < other.start_offset
75
76 @root_validator
77 def check_start_offset_is_less_than_end_offset(cls, values):
78 start_offset, end_offset = values.get("start_offset"), values.get("end_offset")
79 if start_offset >= end_offset:
80 raise ValueError("start_offset must be less than end_offset.")
81 return values
82
83 @classmethod
84 def parse(cls, example_uuid: UUID4, obj: Any):
85 if isinstance(obj, list) or isinstance(obj, tuple):
86 columns = ["start_offset", "end_offset", "label"]
87 obj = zip(columns, obj)
88 return cls(example_uuid=example_uuid, **dict(obj))
89 elif isinstance(obj, dict):
90 return cls(example_uuid=example_uuid, **obj)
91 raise ValueError("SpanLabel.parse()")
92
93 def create_type(self, project: Project) -> Optional[LabelType]:
94 return SpanType(text=self.label, project=project)
95
96 def create(self, user, example: Example, types: LabelTypes, **kwargs):
97 return SpanModel(
98 uuid=self.uuid,
99 user=user,
100 example=example,
101 start_offset=self.start_offset,
102 end_offset=self.end_offset,
103 label=types[self.label],
104 )
105
106
107 class TextLabel(Label):
108 text: NonEmptyStr
109
110 def __lt__(self, other):
111 return self.text < other.text
112
113 @classmethod
114 def parse(cls, example_uuid: UUID4, obj: Any):
115 return cls(example_uuid=example_uuid, text=obj)
116
117 def create_type(self, project: Project) -> Optional[LabelType]:
118 return None
119
120 def create(self, user, example: Example, types: LabelTypes, **kwargs):
121 return TextLabelModel(uuid=self.uuid, user=user, example=example, text=self.text)
122
123
124 class RelationLabel(Label):
125 from_id: int
126 to_id: int
127 type: NonEmptyStr
128
129 def __lt__(self, other):
130 return self.from_id < other.from_id
131
132 @classmethod
133 def parse(cls, example_uuid: UUID4, obj: Any):
134 return cls(example_uuid=example_uuid, **obj)
135
136 def create_type(self, project: Project) -> Optional[LabelType]:
137 return RelationType(text=self.type, project=project)
138
139 def create(self, user, example: Example, types: LabelTypes, **kwargs):
140 return RelationModel(
141 uuid=self.uuid,
142 user=user,
143 example=example,
144 type=types[self.type],
145 from_id=kwargs["id_to_span"][self.from_id],
146 to_id=kwargs["id_to_span"][self.to_id],
147 )
148
```
Path: `backend/data_import/pipeline/labels.py`
Content:
```
1 import abc
2 from itertools import groupby
3 from typing import Dict, List
4
5 from .examples import Examples
6 from .label import Label
7 from .label_types import LabelTypes
8 from labels.models import Category as CategoryModel
9 from labels.models import Label as LabelModel
10 from labels.models import Relation as RelationModel
11 from labels.models import Span as SpanModel
12 from labels.models import TextLabel as TextLabelModel
13 from projects.models import Project
14
15
16 class Labels(abc.ABC):
17 label_model = LabelModel
18
19 def __init__(self, labels: List[Label], types: LabelTypes):
20 self.labels = labels
21 self.types = types
22
23 def __len__(self) -> int:
24 return len(self.labels)
25
26 def clean(self, project: Project):
27 pass
28
29 def save_types(self, project: Project):
30 types = [label.create_type(project) for label in self.labels]
31 filtered_types = list(filter(None, types))
32 self.types.save(filtered_types)
33 self.types.update(project)
34
35 def save(self, user, examples: Examples, **kwargs):
36 labels = [
37 label.create(user, examples[label.example_uuid], self.types, **kwargs)
38 for label in self.labels
39 if label.example_uuid in examples
40 ]
41 self.label_model.objects.bulk_create(labels)
42
43
44 class Categories(Labels):
45 label_model = CategoryModel
46
47 def clean(self, project: Project):
48 exclusive = getattr(project, "single_class_classification", False)
49 if exclusive:
50 groups = groupby(self.labels, lambda label: label.example_uuid)
51 self.labels = [next(group) for _, group in groups]
52
53
54 class Spans(Labels):
55 label_model = SpanModel
56
57 def clean(self, project: Project):
58 allow_overlapping = getattr(project, "allow_overlapping", False)
59 if allow_overlapping:
60 return
61 spans = []
62 groups = groupby(self.labels, lambda label: label.example_uuid)
63 for _, group in groups:
64 labels = sorted(group)
65 last_offset = -1
66 for label in labels:
67 if getattr(label, "start_offset") >= last_offset:
68 last_offset = getattr(label, "end_offset")
69 spans.append(label)
70 self.labels = spans
71
72 @property
73 def id_to_span(self) -> Dict[int, SpanModel]:
74 span_uuids = [str(label.uuid) for label in self.labels]
75 spans = SpanModel.objects.filter(uuid__in=span_uuids)
76 uuid_to_span = {span.uuid: span for span in spans}
77 return {span.id: uuid_to_span[span.uuid] for span in self.labels}
78
79
80 class Texts(Labels):
81 label_model = TextLabelModel
82
83
84 class Relations(Labels):
85 label_model = RelationModel
86
87 def save(self, user, examples: Examples, **kwargs):
88 id_to_span = kwargs["spans"].id_to_span
89 super().save(user, examples, id_to_span=id_to_span)
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/data_import/pipeline/label.py b/backend/data_import/pipeline/label.py
--- a/backend/data_import/pipeline/label.py
+++ b/backend/data_import/pipeline/label.py
@@ -142,6 +142,6 @@
user=user,
example=example,
type=types[self.type],
- from_id=kwargs["id_to_span"][self.from_id],
- to_id=kwargs["id_to_span"][self.to_id],
+ from_id=kwargs["id_to_span"][(self.from_id, str(self.example_uuid))],
+ to_id=kwargs["id_to_span"][(self.to_id, str(self.example_uuid))],
)
diff --git a/backend/data_import/pipeline/labels.py b/backend/data_import/pipeline/labels.py
--- a/backend/data_import/pipeline/labels.py
+++ b/backend/data_import/pipeline/labels.py
@@ -1,6 +1,6 @@
import abc
from itertools import groupby
-from typing import Dict, List
+from typing import Dict, List, Tuple
from .examples import Examples
from .label import Label
@@ -70,11 +70,11 @@
self.labels = spans
@property
- def id_to_span(self) -> Dict[int, SpanModel]:
- span_uuids = [str(label.uuid) for label in self.labels]
- spans = SpanModel.objects.filter(uuid__in=span_uuids)
+ def id_to_span(self) -> Dict[Tuple[int, str], SpanModel]:
+ uuids = [str(span.uuid) for span in self.labels]
+ spans = SpanModel.objects.filter(uuid__in=uuids)
uuid_to_span = {span.uuid: span for span in spans}
- return {span.id: uuid_to_span[span.uuid] for span in self.labels}
+ return {(span.id, str(span.example_uuid)): uuid_to_span[span.uuid] for span in self.labels}
class Texts(Labels):
| {"golden_diff": "diff --git a/backend/data_import/pipeline/label.py b/backend/data_import/pipeline/label.py\n--- a/backend/data_import/pipeline/label.py\n+++ b/backend/data_import/pipeline/label.py\n@@ -142,6 +142,6 @@\n user=user,\n example=example,\n type=types[self.type],\n- from_id=kwargs[\"id_to_span\"][self.from_id],\n- to_id=kwargs[\"id_to_span\"][self.to_id],\n+ from_id=kwargs[\"id_to_span\"][(self.from_id, str(self.example_uuid))],\n+ to_id=kwargs[\"id_to_span\"][(self.to_id, str(self.example_uuid))],\n )\ndiff --git a/backend/data_import/pipeline/labels.py b/backend/data_import/pipeline/labels.py\n--- a/backend/data_import/pipeline/labels.py\n+++ b/backend/data_import/pipeline/labels.py\n@@ -1,6 +1,6 @@\n import abc\n from itertools import groupby\n-from typing import Dict, List\n+from typing import Dict, List, Tuple\n \n from .examples import Examples\n from .label import Label\n@@ -70,11 +70,11 @@\n self.labels = spans\n \n @property\n- def id_to_span(self) -> Dict[int, SpanModel]:\n- span_uuids = [str(label.uuid) for label in self.labels]\n- spans = SpanModel.objects.filter(uuid__in=span_uuids)\n+ def id_to_span(self) -> Dict[Tuple[int, str], SpanModel]:\n+ uuids = [str(span.uuid) for span in self.labels]\n+ spans = SpanModel.objects.filter(uuid__in=uuids)\n uuid_to_span = {span.uuid: span for span in spans}\n- return {span.id: uuid_to_span[span.uuid] for span in self.labels}\n+ return {(span.id, str(span.example_uuid)): uuid_to_span[span.uuid] for span in self.labels}\n \n \n class Texts(Labels):\n", "issue": "Broken: Importing and Exporting SequenceLabeling projects with relations\nHow to reproduce the behaviour\r\n---------\r\n<!-- Before submitting an issue, make sure to check the docs and closed issues and FAQ to see if any of the solutions work for you. https://github.com/doccano/doccano/wiki/Frequently-Asked-Questions -->\r\n\r\n<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->\r\n\r\nYour Environment\r\n---------\r\n<!-- Include details of your environment.-->\r\n* Operating System: Dockeer\r\n* Python Version Used: 3.8\r\n* When you install doccano: 11/1/22\r\n* How did you install doccano (Heroku button etc): docker-compose\r\n\r\nI observed issues with the UI and interacting with relation labels. I am able create a relation label between two span labels in the UI, however the relation array get exported empty when going through the Export Dataset -> JSONL(relation) path. Furthermore, issues occur when trying to import relations as well. The import dataset flow only takes one \"Column Label\" field. When that is set to label, all of the span label and relation label info are uploaded as metadata. \r\n\r\n\r\n\r\nIf the \"Column Label\" field is set to \"entities\" the span labels are imported and only the relation label data is uploaded as metadata. 
\r\n\r\n\r\n\r\nThe first goal would be that the export process, exports in the format displayed when you select the JSONL(relation) option from Export Dataset.\r\n\r\nie.\r\n\r\n```\r\n{\r\n \"text\": \"Google was founded on September 4, 1998, by Larry Page and Sergey Brin.\",\r\n \"entities\": [\r\n {\r\n \"id\": 0,\r\n \"start_offset\": 0,\r\n \"end_offset\": 6,\r\n \"label\": \"ORG\"\r\n },\r\n {\r\n \"id\": 1,\r\n \"start_offset\": 22,\r\n \"end_offset\": 39,\r\n \"label\": \"DATE\"\r\n },\r\n {\r\n \"id\": 2,\r\n \"start_offset\": 44,\r\n \"end_offset\": 54,\r\n \"label\": \"PERSON\"\r\n },\r\n {\r\n \"id\": 3,\r\n \"start_offset\": 59,\r\n \"end_offset\": 70,\r\n \"label\": \"PERSON\"\r\n }\r\n ],\r\n \"relations\": [\r\n {\r\n \"id\": 0,\r\n \"from_id\": 0,\r\n \"to_id\": 1,\r\n \"type\": \"foundedAt\"\r\n },\r\n {\r\n \"id\": 1,\r\n \"from_id\": 0,\r\n \"to_id\": 2,\r\n \"type\": \"foundedBy\"\r\n },\r\n {\r\n \"id\": 2,\r\n \"from_id\": 0,\r\n \"to_id\": 3,\r\n \"type\": \"foundedBy\"\r\n }\r\n ]\r\n}\r\n```\r\n\r\nThe second goal would be the ability to upload span labels and relation labels. Basically, Import Dataset should work with the Export Dataset -> JSONL(relation) results. I'll include a JSONL testing file for imports.\r\n\r\n[relation_import_sample.zip](https://github.com/doccano/doccano/files/9913661/relation_import_sample.zip)\n", "before_files": [{"content": "import abc\nimport uuid\nfrom typing import Any, Optional\n\nfrom pydantic import UUID4, BaseModel, ConstrainedStr, NonNegativeInt, root_validator\n\nfrom .label_types import LabelTypes\nfrom examples.models import Example\nfrom label_types.models import CategoryType, LabelType, RelationType, SpanType\nfrom labels.models import Category as CategoryModel\nfrom labels.models import Label as LabelModel\nfrom labels.models import Relation as RelationModel\nfrom labels.models import Span as SpanModel\nfrom labels.models import TextLabel as TextLabelModel\nfrom projects.models import Project\n\n\nclass NonEmptyStr(ConstrainedStr):\n min_length = 1\n\n\nclass Label(BaseModel, abc.ABC):\n id: int = -1\n uuid: UUID4\n example_uuid: UUID4\n\n def __init__(self, **data):\n data[\"uuid\"] = uuid.uuid4()\n super().__init__(**data)\n\n @abc.abstractmethod\n def __lt__(self, other):\n raise NotImplementedError()\n\n @classmethod\n def parse(cls, example_uuid: UUID4, obj: Any):\n raise NotImplementedError()\n\n @abc.abstractmethod\n def create_type(self, project: Project) -> Optional[LabelType]:\n raise NotImplementedError()\n\n @abc.abstractmethod\n def create(self, user, example: Example, types: LabelTypes, **kwargs) -> LabelModel:\n raise NotImplementedError\n\n def __hash__(self):\n return hash(tuple(self.dict()))\n\n\nclass CategoryLabel(Label):\n label: NonEmptyStr\n\n def __lt__(self, other):\n return self.label < other.label\n\n @classmethod\n def parse(cls, example_uuid: UUID4, obj: Any):\n return cls(example_uuid=example_uuid, label=obj)\n\n def create_type(self, project: Project) -> Optional[LabelType]:\n return CategoryType(text=self.label, project=project)\n\n def create(self, user, example: Example, types: LabelTypes, **kwargs):\n return CategoryModel(uuid=self.uuid, user=user, example=example, label=types[self.label])\n\n\nclass SpanLabel(Label):\n label: NonEmptyStr\n start_offset: NonNegativeInt\n end_offset: NonNegativeInt\n\n def __lt__(self, other):\n return self.start_offset < other.start_offset\n\n @root_validator\n def check_start_offset_is_less_than_end_offset(cls, values):\n start_offset, end_offset = 
values.get(\"start_offset\"), values.get(\"end_offset\")\n if start_offset >= end_offset:\n raise ValueError(\"start_offset must be less than end_offset.\")\n return values\n\n @classmethod\n def parse(cls, example_uuid: UUID4, obj: Any):\n if isinstance(obj, list) or isinstance(obj, tuple):\n columns = [\"start_offset\", \"end_offset\", \"label\"]\n obj = zip(columns, obj)\n return cls(example_uuid=example_uuid, **dict(obj))\n elif isinstance(obj, dict):\n return cls(example_uuid=example_uuid, **obj)\n raise ValueError(\"SpanLabel.parse()\")\n\n def create_type(self, project: Project) -> Optional[LabelType]:\n return SpanType(text=self.label, project=project)\n\n def create(self, user, example: Example, types: LabelTypes, **kwargs):\n return SpanModel(\n uuid=self.uuid,\n user=user,\n example=example,\n start_offset=self.start_offset,\n end_offset=self.end_offset,\n label=types[self.label],\n )\n\n\nclass TextLabel(Label):\n text: NonEmptyStr\n\n def __lt__(self, other):\n return self.text < other.text\n\n @classmethod\n def parse(cls, example_uuid: UUID4, obj: Any):\n return cls(example_uuid=example_uuid, text=obj)\n\n def create_type(self, project: Project) -> Optional[LabelType]:\n return None\n\n def create(self, user, example: Example, types: LabelTypes, **kwargs):\n return TextLabelModel(uuid=self.uuid, user=user, example=example, text=self.text)\n\n\nclass RelationLabel(Label):\n from_id: int\n to_id: int\n type: NonEmptyStr\n\n def __lt__(self, other):\n return self.from_id < other.from_id\n\n @classmethod\n def parse(cls, example_uuid: UUID4, obj: Any):\n return cls(example_uuid=example_uuid, **obj)\n\n def create_type(self, project: Project) -> Optional[LabelType]:\n return RelationType(text=self.type, project=project)\n\n def create(self, user, example: Example, types: LabelTypes, **kwargs):\n return RelationModel(\n uuid=self.uuid,\n user=user,\n example=example,\n type=types[self.type],\n from_id=kwargs[\"id_to_span\"][self.from_id],\n to_id=kwargs[\"id_to_span\"][self.to_id],\n )\n", "path": "backend/data_import/pipeline/label.py"}, {"content": "import abc\nfrom itertools import groupby\nfrom typing import Dict, List\n\nfrom .examples import Examples\nfrom .label import Label\nfrom .label_types import LabelTypes\nfrom labels.models import Category as CategoryModel\nfrom labels.models import Label as LabelModel\nfrom labels.models import Relation as RelationModel\nfrom labels.models import Span as SpanModel\nfrom labels.models import TextLabel as TextLabelModel\nfrom projects.models import Project\n\n\nclass Labels(abc.ABC):\n label_model = LabelModel\n\n def __init__(self, labels: List[Label], types: LabelTypes):\n self.labels = labels\n self.types = types\n\n def __len__(self) -> int:\n return len(self.labels)\n\n def clean(self, project: Project):\n pass\n\n def save_types(self, project: Project):\n types = [label.create_type(project) for label in self.labels]\n filtered_types = list(filter(None, types))\n self.types.save(filtered_types)\n self.types.update(project)\n\n def save(self, user, examples: Examples, **kwargs):\n labels = [\n label.create(user, examples[label.example_uuid], self.types, **kwargs)\n for label in self.labels\n if label.example_uuid in examples\n ]\n self.label_model.objects.bulk_create(labels)\n\n\nclass Categories(Labels):\n label_model = CategoryModel\n\n def clean(self, project: Project):\n exclusive = getattr(project, \"single_class_classification\", False)\n if exclusive:\n groups = groupby(self.labels, lambda label: label.example_uuid)\n 
self.labels = [next(group) for _, group in groups]\n\n\nclass Spans(Labels):\n label_model = SpanModel\n\n def clean(self, project: Project):\n allow_overlapping = getattr(project, \"allow_overlapping\", False)\n if allow_overlapping:\n return\n spans = []\n groups = groupby(self.labels, lambda label: label.example_uuid)\n for _, group in groups:\n labels = sorted(group)\n last_offset = -1\n for label in labels:\n if getattr(label, \"start_offset\") >= last_offset:\n last_offset = getattr(label, \"end_offset\")\n spans.append(label)\n self.labels = spans\n\n @property\n def id_to_span(self) -> Dict[int, SpanModel]:\n span_uuids = [str(label.uuid) for label in self.labels]\n spans = SpanModel.objects.filter(uuid__in=span_uuids)\n uuid_to_span = {span.uuid: span for span in spans}\n return {span.id: uuid_to_span[span.uuid] for span in self.labels}\n\n\nclass Texts(Labels):\n label_model = TextLabelModel\n\n\nclass Relations(Labels):\n label_model = RelationModel\n\n def save(self, user, examples: Examples, **kwargs):\n id_to_span = kwargs[\"spans\"].id_to_span\n super().save(user, examples, id_to_span=id_to_span)\n", "path": "backend/data_import/pipeline/labels.py"}], "after_files": [{"content": "import abc\nimport uuid\nfrom typing import Any, Optional\n\nfrom pydantic import UUID4, BaseModel, ConstrainedStr, NonNegativeInt, root_validator\n\nfrom .label_types import LabelTypes\nfrom examples.models import Example\nfrom label_types.models import CategoryType, LabelType, RelationType, SpanType\nfrom labels.models import Category as CategoryModel\nfrom labels.models import Label as LabelModel\nfrom labels.models import Relation as RelationModel\nfrom labels.models import Span as SpanModel\nfrom labels.models import TextLabel as TextLabelModel\nfrom projects.models import Project\n\n\nclass NonEmptyStr(ConstrainedStr):\n min_length = 1\n\n\nclass Label(BaseModel, abc.ABC):\n id: int = -1\n uuid: UUID4\n example_uuid: UUID4\n\n def __init__(self, **data):\n data[\"uuid\"] = uuid.uuid4()\n super().__init__(**data)\n\n @abc.abstractmethod\n def __lt__(self, other):\n raise NotImplementedError()\n\n @classmethod\n def parse(cls, example_uuid: UUID4, obj: Any):\n raise NotImplementedError()\n\n @abc.abstractmethod\n def create_type(self, project: Project) -> Optional[LabelType]:\n raise NotImplementedError()\n\n @abc.abstractmethod\n def create(self, user, example: Example, types: LabelTypes, **kwargs) -> LabelModel:\n raise NotImplementedError\n\n def __hash__(self):\n return hash(tuple(self.dict()))\n\n\nclass CategoryLabel(Label):\n label: NonEmptyStr\n\n def __lt__(self, other):\n return self.label < other.label\n\n @classmethod\n def parse(cls, example_uuid: UUID4, obj: Any):\n return cls(example_uuid=example_uuid, label=obj)\n\n def create_type(self, project: Project) -> Optional[LabelType]:\n return CategoryType(text=self.label, project=project)\n\n def create(self, user, example: Example, types: LabelTypes, **kwargs):\n return CategoryModel(uuid=self.uuid, user=user, example=example, label=types[self.label])\n\n\nclass SpanLabel(Label):\n label: NonEmptyStr\n start_offset: NonNegativeInt\n end_offset: NonNegativeInt\n\n def __lt__(self, other):\n return self.start_offset < other.start_offset\n\n @root_validator\n def check_start_offset_is_less_than_end_offset(cls, values):\n start_offset, end_offset = values.get(\"start_offset\"), values.get(\"end_offset\")\n if start_offset >= end_offset:\n raise ValueError(\"start_offset must be less than end_offset.\")\n return values\n\n @classmethod\n 
def parse(cls, example_uuid: UUID4, obj: Any):\n if isinstance(obj, list) or isinstance(obj, tuple):\n columns = [\"start_offset\", \"end_offset\", \"label\"]\n obj = zip(columns, obj)\n return cls(example_uuid=example_uuid, **dict(obj))\n elif isinstance(obj, dict):\n return cls(example_uuid=example_uuid, **obj)\n raise ValueError(\"SpanLabel.parse()\")\n\n def create_type(self, project: Project) -> Optional[LabelType]:\n return SpanType(text=self.label, project=project)\n\n def create(self, user, example: Example, types: LabelTypes, **kwargs):\n return SpanModel(\n uuid=self.uuid,\n user=user,\n example=example,\n start_offset=self.start_offset,\n end_offset=self.end_offset,\n label=types[self.label],\n )\n\n\nclass TextLabel(Label):\n text: NonEmptyStr\n\n def __lt__(self, other):\n return self.text < other.text\n\n @classmethod\n def parse(cls, example_uuid: UUID4, obj: Any):\n return cls(example_uuid=example_uuid, text=obj)\n\n def create_type(self, project: Project) -> Optional[LabelType]:\n return None\n\n def create(self, user, example: Example, types: LabelTypes, **kwargs):\n return TextLabelModel(uuid=self.uuid, user=user, example=example, text=self.text)\n\n\nclass RelationLabel(Label):\n from_id: int\n to_id: int\n type: NonEmptyStr\n\n def __lt__(self, other):\n return self.from_id < other.from_id\n\n @classmethod\n def parse(cls, example_uuid: UUID4, obj: Any):\n return cls(example_uuid=example_uuid, **obj)\n\n def create_type(self, project: Project) -> Optional[LabelType]:\n return RelationType(text=self.type, project=project)\n\n def create(self, user, example: Example, types: LabelTypes, **kwargs):\n return RelationModel(\n uuid=self.uuid,\n user=user,\n example=example,\n type=types[self.type],\n from_id=kwargs[\"id_to_span\"][(self.from_id, str(self.example_uuid))],\n to_id=kwargs[\"id_to_span\"][(self.to_id, str(self.example_uuid))],\n )\n", "path": "backend/data_import/pipeline/label.py"}, {"content": "import abc\nfrom itertools import groupby\nfrom typing import Dict, List, Tuple\n\nfrom .examples import Examples\nfrom .label import Label\nfrom .label_types import LabelTypes\nfrom labels.models import Category as CategoryModel\nfrom labels.models import Label as LabelModel\nfrom labels.models import Relation as RelationModel\nfrom labels.models import Span as SpanModel\nfrom labels.models import TextLabel as TextLabelModel\nfrom projects.models import Project\n\n\nclass Labels(abc.ABC):\n label_model = LabelModel\n\n def __init__(self, labels: List[Label], types: LabelTypes):\n self.labels = labels\n self.types = types\n\n def __len__(self) -> int:\n return len(self.labels)\n\n def clean(self, project: Project):\n pass\n\n def save_types(self, project: Project):\n types = [label.create_type(project) for label in self.labels]\n filtered_types = list(filter(None, types))\n self.types.save(filtered_types)\n self.types.update(project)\n\n def save(self, user, examples: Examples, **kwargs):\n labels = [\n label.create(user, examples[label.example_uuid], self.types, **kwargs)\n for label in self.labels\n if label.example_uuid in examples\n ]\n self.label_model.objects.bulk_create(labels)\n\n\nclass Categories(Labels):\n label_model = CategoryModel\n\n def clean(self, project: Project):\n exclusive = getattr(project, \"single_class_classification\", False)\n if exclusive:\n groups = groupby(self.labels, lambda label: label.example_uuid)\n self.labels = [next(group) for _, group in groups]\n\n\nclass Spans(Labels):\n label_model = SpanModel\n\n def clean(self, project: 
Project):\n allow_overlapping = getattr(project, \"allow_overlapping\", False)\n if allow_overlapping:\n return\n spans = []\n groups = groupby(self.labels, lambda label: label.example_uuid)\n for _, group in groups:\n labels = sorted(group)\n last_offset = -1\n for label in labels:\n if getattr(label, \"start_offset\") >= last_offset:\n last_offset = getattr(label, \"end_offset\")\n spans.append(label)\n self.labels = spans\n\n @property\n def id_to_span(self) -> Dict[Tuple[int, str], SpanModel]:\n uuids = [str(span.uuid) for span in self.labels]\n spans = SpanModel.objects.filter(uuid__in=uuids)\n uuid_to_span = {span.uuid: span for span in spans}\n return {(span.id, str(span.example_uuid)): uuid_to_span[span.uuid] for span in self.labels}\n\n\nclass Texts(Labels):\n label_model = TextLabelModel\n\n\nclass Relations(Labels):\n label_model = RelationModel\n\n def save(self, user, examples: Examples, **kwargs):\n id_to_span = kwargs[\"spans\"].id_to_span\n super().save(user, examples, id_to_span=id_to_span)\n", "path": "backend/data_import/pipeline/labels.py"}]} | 3,295 | 431 |
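The keying change in the diff above is easier to see with a toy example: span ids in an imported JSONL file restart from 0 for every document, so a map keyed by the id alone lets spans from different examples overwrite one another. A minimal sketch, with plain dicts standing in for the project's span models:

```python
# Two imported examples, each containing a span whose local id is 0.
spans = [
    {"id": 0, "example_uuid": "uuid-a", "label": "ORG"},
    {"id": 0, "example_uuid": "uuid-b", "label": "PERSON"},
]

# Old keying: the second entry silently replaces the first.
by_id = {s["id"]: s for s in spans}  # {0: {... "uuid-b" ...}}

# New keying: both survive, so a relation can be resolved by
# (from_id, example_uuid) without colliding across examples.
by_id_and_example = {(s["id"], s["example_uuid"]): s for s in spans}
```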
gh_patches_debug_36446 | rasdani/github-patches | git_diff | mkdocs__mkdocs-3282 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Disable color output in terminal when NO_COLOR is set
It would be nice to turn off terminal colors sometimes, but there currently is no option to do that.
https://no-color.org/ is the closest thing to a standard I could find, and they recommend disabling color if the `NO_COLOR` environment variable is set (to any value).
--- END ISSUE ---
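For reference, the convention the issue points to boils down to a very small check; the helper below is a hypothetical illustration (not MkDocs code) that suppresses color whenever `NO_COLOR` is present and otherwise only colors real terminals:

```python
import os
import sys

def want_color(stream=sys.stdout) -> bool:
    """Return True if colored output is appropriate for `stream`."""
    if "NO_COLOR" in os.environ:   # https://no-color.org/ -- any value disables color
        return False
    return stream.isatty()         # no color for pipes or redirected output
```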
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/__main__.py`
Content:
```
1 #!/usr/bin/env python
2
3 from __future__ import annotations
4
5 import logging
6 import os
7 import shutil
8 import sys
9 import textwrap
10 import traceback
11 import warnings
12
13 import click
14
15 from mkdocs import __version__, config, utils
16
17 if sys.platform.startswith("win"):
18 try:
19 import colorama
20 except ImportError:
21 pass
22 else:
23 colorama.init()
24
25 log = logging.getLogger(__name__)
26
27
28 def _showwarning(message, category, filename, lineno, file=None, line=None):
29 try:
30 # Last stack frames:
31 # * ...
32 # * Location of call to deprecated function <-- include this
33 # * Location of call to warn() <-- include this
34 # * (stdlib) Location of call to showwarning function
35 # * (this function) Location of call to extract_stack()
36 stack = [frame for frame in traceback.extract_stack() if frame.line][-4:-2]
37 # Make sure the actual affected file's name is still present (the case of syntax warning):
38 if not any(frame.filename == filename for frame in stack):
39 stack = stack[-1:] + [traceback.FrameSummary(filename, lineno, '')]
40
41 tb = ''.join(traceback.format_list(stack))
42 except Exception:
43 tb = f' File "{filename}", line {lineno}'
44
45 log.info(f'{category.__name__}: {message}\n{tb}')
46
47
48 def _enable_warnings():
49 from mkdocs.commands import build
50
51 build.log.addFilter(utils.DuplicateFilter())
52
53 warnings.simplefilter('module', DeprecationWarning)
54 warnings.showwarning = _showwarning
55
56
57 class ColorFormatter(logging.Formatter):
58 colors = {
59 'CRITICAL': 'red',
60 'ERROR': 'red',
61 'WARNING': 'yellow',
62 'DEBUG': 'blue',
63 }
64
65 text_wrapper = textwrap.TextWrapper(
66 width=shutil.get_terminal_size(fallback=(0, 0)).columns,
67 replace_whitespace=False,
68 break_long_words=False,
69 break_on_hyphens=False,
70 initial_indent=' ' * 12,
71 subsequent_indent=' ' * 12,
72 )
73
74 def format(self, record):
75 message = super().format(record)
76 prefix = f'{record.levelname:<8} - '
77 if record.levelname in self.colors:
78 prefix = click.style(prefix, fg=self.colors[record.levelname])
79 if self.text_wrapper.width:
80 # Only wrap text if a terminal width was detected
81 msg = '\n'.join(self.text_wrapper.fill(line) for line in message.splitlines())
82 # Prepend prefix after wrapping so that color codes don't affect length
83 return prefix + msg[12:]
84 return prefix + message
85
86
87 class State:
88 """Maintain logging level."""
89
90 def __init__(self, log_name='mkdocs', level=logging.INFO):
91 self.logger = logging.getLogger(log_name)
92 # Don't restrict level on logger; use handler
93 self.logger.setLevel(1)
94 self.logger.propagate = False
95
96 self.stream = logging.StreamHandler()
97 self.stream.setFormatter(ColorFormatter())
98 self.stream.setLevel(level)
99 self.stream.name = 'MkDocsStreamHandler'
100 self.logger.addHandler(self.stream)
101
102 def __del__(self):
103 self.logger.removeHandler(self.stream)
104
105
106 pass_state = click.make_pass_decorator(State, ensure=True)
107
108 clean_help = "Remove old files from the site_dir before building (the default)."
109 config_help = (
110 "Provide a specific MkDocs config. This can be a file name, or '-' to read from stdin."
111 )
112 dev_addr_help = "IP address and port to serve documentation locally (default: localhost:8000)"
113 strict_help = "Enable strict mode. This will cause MkDocs to abort the build on any warnings."
114 theme_help = "The theme to use when building your documentation."
115 theme_choices = sorted(utils.get_theme_names())
116 site_dir_help = "The directory to output the result of the documentation build."
117 use_directory_urls_help = "Use directory URLs when building pages (the default)."
118 reload_help = "Enable the live reloading in the development server (this is the default)"
119 no_reload_help = "Disable the live reloading in the development server."
120 serve_dirty_help = "Only re-build files that have changed."
121 serve_clean_help = (
122 "Build the site without any effects of `mkdocs serve` - pure `mkdocs build`, then serve."
123 )
124 commit_message_help = (
125 "A commit message to use when committing to the "
126 "GitHub Pages remote branch. Commit {sha} and MkDocs {version} are available as expansions"
127 )
128 remote_branch_help = (
129 "The remote branch to commit to for GitHub Pages. This "
130 "overrides the value specified in config"
131 )
132 remote_name_help = (
133 "The remote name to commit to for GitHub Pages. This overrides the value specified in config"
134 )
135 force_help = "Force the push to the repository."
136 no_history_help = "Replace the whole Git history with one new commit."
137 ignore_version_help = (
138 "Ignore check that build is not being deployed with an older version of MkDocs."
139 )
140 watch_theme_help = (
141 "Include the theme in list of files to watch for live reloading. "
142 "Ignored when live reload is not used."
143 )
144 shell_help = "Use the shell when invoking Git."
145 watch_help = "A directory or file to watch for live reloading. Can be supplied multiple times."
146 projects_file_help = (
147 "URL or local path of the registry file that declares all known MkDocs-related projects."
148 )
149
150
151 def add_options(*opts):
152 def inner(f):
153 for i in reversed(opts):
154 f = i(f)
155 return f
156
157 return inner
158
159
160 def verbose_option(f):
161 def callback(ctx, param, value):
162 state = ctx.ensure_object(State)
163 if value:
164 state.stream.setLevel(logging.DEBUG)
165
166 return click.option(
167 '-v',
168 '--verbose',
169 is_flag=True,
170 expose_value=False,
171 help='Enable verbose output',
172 callback=callback,
173 )(f)
174
175
176 def quiet_option(f):
177 def callback(ctx, param, value):
178 state = ctx.ensure_object(State)
179 if value:
180 state.stream.setLevel(logging.ERROR)
181
182 return click.option(
183 '-q',
184 '--quiet',
185 is_flag=True,
186 expose_value=False,
187 help='Silence warnings',
188 callback=callback,
189 )(f)
190
191
192 common_options = add_options(quiet_option, verbose_option)
193 common_config_options = add_options(
194 click.option('-f', '--config-file', type=click.File('rb'), help=config_help),
195 # Don't override config value if user did not specify --strict flag
196 # Conveniently, load_config drops None values
197 click.option('-s', '--strict/--no-strict', is_flag=True, default=None, help=strict_help),
198 click.option('-t', '--theme', type=click.Choice(theme_choices), help=theme_help),
199 # As with --strict, set the default to None so that this doesn't incorrectly
200 # override the config file
201 click.option(
202 '--use-directory-urls/--no-directory-urls',
203 is_flag=True,
204 default=None,
205 help=use_directory_urls_help,
206 ),
207 )
208
209 PYTHON_VERSION = f"{sys.version_info.major}.{sys.version_info.minor}"
210
211 PKG_DIR = os.path.dirname(os.path.abspath(__file__))
212
213
214 @click.group(context_settings=dict(help_option_names=['-h', '--help'], max_content_width=120))
215 @click.version_option(
216 __version__,
217 '-V',
218 '--version',
219 message=f'%(prog)s, version %(version)s from { PKG_DIR } (Python { PYTHON_VERSION })',
220 )
221 @common_options
222 def cli():
223 """
224 MkDocs - Project documentation with Markdown.
225 """
226
227
228 @cli.command(name="serve")
229 @click.option('-a', '--dev-addr', help=dev_addr_help, metavar='<IP:PORT>')
230 @click.option('--livereload', 'livereload', flag_value='livereload', default=True, hidden=True)
231 @click.option('--no-livereload', 'livereload', flag_value='no-livereload', help=no_reload_help)
232 @click.option('--dirtyreload', 'build_type', flag_value='dirty', hidden=True)
233 @click.option('--dirty', 'build_type', flag_value='dirty', help=serve_dirty_help)
234 @click.option('-c', '--clean', 'build_type', flag_value='clean', help=serve_clean_help)
235 @click.option('--watch-theme', help=watch_theme_help, is_flag=True)
236 @click.option(
237 '-w', '--watch', help=watch_help, type=click.Path(exists=True), multiple=True, default=[]
238 )
239 @common_config_options
240 @common_options
241 def serve_command(**kwargs):
242 """Run the builtin development server"""
243 from mkdocs.commands import serve
244
245 _enable_warnings()
246 serve.serve(**kwargs)
247
248
249 @cli.command(name="build")
250 @click.option('-c', '--clean/--dirty', is_flag=True, default=True, help=clean_help)
251 @common_config_options
252 @click.option('-d', '--site-dir', type=click.Path(), help=site_dir_help)
253 @common_options
254 def build_command(clean, **kwargs):
255 """Build the MkDocs documentation"""
256 from mkdocs.commands import build
257
258 _enable_warnings()
259 cfg = config.load_config(**kwargs)
260 cfg.plugins.on_startup(command='build', dirty=not clean)
261 try:
262 build.build(cfg, dirty=not clean)
263 finally:
264 cfg.plugins.on_shutdown()
265
266
267 @cli.command(name="gh-deploy")
268 @click.option('-c', '--clean/--dirty', is_flag=True, default=True, help=clean_help)
269 @click.option('-m', '--message', help=commit_message_help)
270 @click.option('-b', '--remote-branch', help=remote_branch_help)
271 @click.option('-r', '--remote-name', help=remote_name_help)
272 @click.option('--force', is_flag=True, help=force_help)
273 @click.option('--no-history', is_flag=True, help=no_history_help)
274 @click.option('--ignore-version', is_flag=True, help=ignore_version_help)
275 @click.option('--shell', is_flag=True, help=shell_help)
276 @common_config_options
277 @click.option('-d', '--site-dir', type=click.Path(), help=site_dir_help)
278 @common_options
279 def gh_deploy_command(
280 clean, message, remote_branch, remote_name, force, no_history, ignore_version, shell, **kwargs
281 ):
282 """Deploy your documentation to GitHub Pages"""
283 from mkdocs.commands import build, gh_deploy
284
285 _enable_warnings()
286 cfg = config.load_config(remote_branch=remote_branch, remote_name=remote_name, **kwargs)
287 cfg.plugins.on_startup(command='gh-deploy', dirty=not clean)
288 try:
289 build.build(cfg, dirty=not clean)
290 finally:
291 cfg.plugins.on_shutdown()
292 gh_deploy.gh_deploy(
293 cfg,
294 message=message,
295 force=force,
296 no_history=no_history,
297 ignore_version=ignore_version,
298 shell=shell,
299 )
300
301
302 @cli.command(name="get-deps")
303 @verbose_option
304 @click.option('-f', '--config-file', type=click.File('rb'), help=config_help)
305 @click.option(
306 '-p',
307 '--projects-file',
308 default='https://raw.githubusercontent.com/mkdocs/catalog/main/projects.yaml',
309 help=projects_file_help,
310 show_default=True,
311 )
312 def get_deps_command(config_file, projects_file):
313 """Show required PyPI packages inferred from plugins in mkdocs.yml"""
314 from mkdocs.commands import get_deps
315
316 warning_counter = utils.CountHandler()
317 warning_counter.setLevel(logging.WARNING)
318 logging.getLogger('mkdocs').addHandler(warning_counter)
319
320 get_deps.get_deps(projects_file_url=projects_file, config_file_path=config_file)
321
322 if warning_counter.get_counts():
323 sys.exit(1)
324
325
326 @cli.command(name="new")
327 @click.argument("project_directory")
328 @common_options
329 def new_command(project_directory):
330 """Create a new MkDocs project"""
331 from mkdocs.commands import new
332
333 new.new(project_directory)
334
335
336 if __name__ == '__main__': # pragma: no cover
337 cli()
338
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mkdocs/__main__.py b/mkdocs/__main__.py
--- a/mkdocs/__main__.py
+++ b/mkdocs/__main__.py
@@ -67,20 +67,20 @@
replace_whitespace=False,
break_long_words=False,
break_on_hyphens=False,
- initial_indent=' ' * 12,
- subsequent_indent=' ' * 12,
+ initial_indent=' ' * 11,
+ subsequent_indent=' ' * 11,
)
def format(self, record):
message = super().format(record)
- prefix = f'{record.levelname:<8} - '
+ prefix = f'{record.levelname:<8}- '
if record.levelname in self.colors:
prefix = click.style(prefix, fg=self.colors[record.levelname])
if self.text_wrapper.width:
# Only wrap text if a terminal width was detected
msg = '\n'.join(self.text_wrapper.fill(line) for line in message.splitlines())
# Prepend prefix after wrapping so that color codes don't affect length
- return prefix + msg[12:]
+ return prefix + msg[11:]
return prefix + message
@@ -189,6 +189,29 @@
)(f)
+def color_option(f):
+ def callback(ctx, param, value):
+ state = ctx.ensure_object(State)
+ if value is False or (
+ value is None
+ and (
+ not sys.stdout.isatty()
+ or os.environ.get('NO_COLOR')
+ or os.environ.get('TERM') == 'dumb'
+ )
+ ):
+ state.stream.setFormatter(logging.Formatter('%(levelname)-8s- %(message)s'))
+
+ return click.option(
+ '--color/--no-color',
+ is_flag=True,
+ default=None,
+ expose_value=False,
+ help="Force enable or disable color and wrapping for the output. Default is auto-detect.",
+ callback=callback,
+ )(f)
+
+
common_options = add_options(quiet_option, verbose_option)
common_config_options = add_options(
click.option('-f', '--config-file', type=click.File('rb'), help=config_help),
@@ -219,6 +242,7 @@
message=f'%(prog)s, version %(version)s from { PKG_DIR } (Python { PYTHON_VERSION })',
)
@common_options
+@color_option
def cli():
"""
MkDocs - Project documentation with Markdown.
| {"golden_diff": "diff --git a/mkdocs/__main__.py b/mkdocs/__main__.py\n--- a/mkdocs/__main__.py\n+++ b/mkdocs/__main__.py\n@@ -67,20 +67,20 @@\n replace_whitespace=False,\n break_long_words=False,\n break_on_hyphens=False,\n- initial_indent=' ' * 12,\n- subsequent_indent=' ' * 12,\n+ initial_indent=' ' * 11,\n+ subsequent_indent=' ' * 11,\n )\n \n def format(self, record):\n message = super().format(record)\n- prefix = f'{record.levelname:<8} - '\n+ prefix = f'{record.levelname:<8}- '\n if record.levelname in self.colors:\n prefix = click.style(prefix, fg=self.colors[record.levelname])\n if self.text_wrapper.width:\n # Only wrap text if a terminal width was detected\n msg = '\\n'.join(self.text_wrapper.fill(line) for line in message.splitlines())\n # Prepend prefix after wrapping so that color codes don't affect length\n- return prefix + msg[12:]\n+ return prefix + msg[11:]\n return prefix + message\n \n \n@@ -189,6 +189,29 @@\n )(f)\n \n \n+def color_option(f):\n+ def callback(ctx, param, value):\n+ state = ctx.ensure_object(State)\n+ if value is False or (\n+ value is None\n+ and (\n+ not sys.stdout.isatty()\n+ or os.environ.get('NO_COLOR')\n+ or os.environ.get('TERM') == 'dumb'\n+ )\n+ ):\n+ state.stream.setFormatter(logging.Formatter('%(levelname)-8s- %(message)s'))\n+\n+ return click.option(\n+ '--color/--no-color',\n+ is_flag=True,\n+ default=None,\n+ expose_value=False,\n+ help=\"Force enable or disable color and wrapping for the output. Default is auto-detect.\",\n+ callback=callback,\n+ )(f)\n+\n+\n common_options = add_options(quiet_option, verbose_option)\n common_config_options = add_options(\n click.option('-f', '--config-file', type=click.File('rb'), help=config_help),\n@@ -219,6 +242,7 @@\n message=f'%(prog)s, version %(version)s from { PKG_DIR } (Python { PYTHON_VERSION })',\n )\n @common_options\n+@color_option\n def cli():\n \"\"\"\n MkDocs - Project documentation with Markdown.\n", "issue": "Disable color output in terminal when NO_COLOR is set\nIt would be nice to turn off terminal colors sometimes, but there currently is no option to do that.\r\n\r\nhttps://no-color.org/ is the closest thing to a standard I could find, and they recommend disabling color if the `NO_COLOR` environment variable is set (to any value).\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom __future__ import annotations\n\nimport logging\nimport os\nimport shutil\nimport sys\nimport textwrap\nimport traceback\nimport warnings\n\nimport click\n\nfrom mkdocs import __version__, config, utils\n\nif sys.platform.startswith(\"win\"):\n try:\n import colorama\n except ImportError:\n pass\n else:\n colorama.init()\n\nlog = logging.getLogger(__name__)\n\n\ndef _showwarning(message, category, filename, lineno, file=None, line=None):\n try:\n # Last stack frames:\n # * ...\n # * Location of call to deprecated function <-- include this\n # * Location of call to warn() <-- include this\n # * (stdlib) Location of call to showwarning function\n # * (this function) Location of call to extract_stack()\n stack = [frame for frame in traceback.extract_stack() if frame.line][-4:-2]\n # Make sure the actual affected file's name is still present (the case of syntax warning):\n if not any(frame.filename == filename for frame in stack):\n stack = stack[-1:] + [traceback.FrameSummary(filename, lineno, '')]\n\n tb = ''.join(traceback.format_list(stack))\n except Exception:\n tb = f' File \"{filename}\", line {lineno}'\n\n log.info(f'{category.__name__}: {message}\\n{tb}')\n\n\ndef _enable_warnings():\n 
from mkdocs.commands import build\n\n build.log.addFilter(utils.DuplicateFilter())\n\n warnings.simplefilter('module', DeprecationWarning)\n warnings.showwarning = _showwarning\n\n\nclass ColorFormatter(logging.Formatter):\n colors = {\n 'CRITICAL': 'red',\n 'ERROR': 'red',\n 'WARNING': 'yellow',\n 'DEBUG': 'blue',\n }\n\n text_wrapper = textwrap.TextWrapper(\n width=shutil.get_terminal_size(fallback=(0, 0)).columns,\n replace_whitespace=False,\n break_long_words=False,\n break_on_hyphens=False,\n initial_indent=' ' * 12,\n subsequent_indent=' ' * 12,\n )\n\n def format(self, record):\n message = super().format(record)\n prefix = f'{record.levelname:<8} - '\n if record.levelname in self.colors:\n prefix = click.style(prefix, fg=self.colors[record.levelname])\n if self.text_wrapper.width:\n # Only wrap text if a terminal width was detected\n msg = '\\n'.join(self.text_wrapper.fill(line) for line in message.splitlines())\n # Prepend prefix after wrapping so that color codes don't affect length\n return prefix + msg[12:]\n return prefix + message\n\n\nclass State:\n \"\"\"Maintain logging level.\"\"\"\n\n def __init__(self, log_name='mkdocs', level=logging.INFO):\n self.logger = logging.getLogger(log_name)\n # Don't restrict level on logger; use handler\n self.logger.setLevel(1)\n self.logger.propagate = False\n\n self.stream = logging.StreamHandler()\n self.stream.setFormatter(ColorFormatter())\n self.stream.setLevel(level)\n self.stream.name = 'MkDocsStreamHandler'\n self.logger.addHandler(self.stream)\n\n def __del__(self):\n self.logger.removeHandler(self.stream)\n\n\npass_state = click.make_pass_decorator(State, ensure=True)\n\nclean_help = \"Remove old files from the site_dir before building (the default).\"\nconfig_help = (\n \"Provide a specific MkDocs config. This can be a file name, or '-' to read from stdin.\"\n)\ndev_addr_help = \"IP address and port to serve documentation locally (default: localhost:8000)\"\nstrict_help = \"Enable strict mode. This will cause MkDocs to abort the build on any warnings.\"\ntheme_help = \"The theme to use when building your documentation.\"\ntheme_choices = sorted(utils.get_theme_names())\nsite_dir_help = \"The directory to output the result of the documentation build.\"\nuse_directory_urls_help = \"Use directory URLs when building pages (the default).\"\nreload_help = \"Enable the live reloading in the development server (this is the default)\"\nno_reload_help = \"Disable the live reloading in the development server.\"\nserve_dirty_help = \"Only re-build files that have changed.\"\nserve_clean_help = (\n \"Build the site without any effects of `mkdocs serve` - pure `mkdocs build`, then serve.\"\n)\ncommit_message_help = (\n \"A commit message to use when committing to the \"\n \"GitHub Pages remote branch. Commit {sha} and MkDocs {version} are available as expansions\"\n)\nremote_branch_help = (\n \"The remote branch to commit to for GitHub Pages. This \"\n \"overrides the value specified in config\"\n)\nremote_name_help = (\n \"The remote name to commit to for GitHub Pages. This overrides the value specified in config\"\n)\nforce_help = \"Force the push to the repository.\"\nno_history_help = \"Replace the whole Git history with one new commit.\"\nignore_version_help = (\n \"Ignore check that build is not being deployed with an older version of MkDocs.\"\n)\nwatch_theme_help = (\n \"Include the theme in list of files to watch for live reloading. 
\"\n \"Ignored when live reload is not used.\"\n)\nshell_help = \"Use the shell when invoking Git.\"\nwatch_help = \"A directory or file to watch for live reloading. Can be supplied multiple times.\"\nprojects_file_help = (\n \"URL or local path of the registry file that declares all known MkDocs-related projects.\"\n)\n\n\ndef add_options(*opts):\n def inner(f):\n for i in reversed(opts):\n f = i(f)\n return f\n\n return inner\n\n\ndef verbose_option(f):\n def callback(ctx, param, value):\n state = ctx.ensure_object(State)\n if value:\n state.stream.setLevel(logging.DEBUG)\n\n return click.option(\n '-v',\n '--verbose',\n is_flag=True,\n expose_value=False,\n help='Enable verbose output',\n callback=callback,\n )(f)\n\n\ndef quiet_option(f):\n def callback(ctx, param, value):\n state = ctx.ensure_object(State)\n if value:\n state.stream.setLevel(logging.ERROR)\n\n return click.option(\n '-q',\n '--quiet',\n is_flag=True,\n expose_value=False,\n help='Silence warnings',\n callback=callback,\n )(f)\n\n\ncommon_options = add_options(quiet_option, verbose_option)\ncommon_config_options = add_options(\n click.option('-f', '--config-file', type=click.File('rb'), help=config_help),\n # Don't override config value if user did not specify --strict flag\n # Conveniently, load_config drops None values\n click.option('-s', '--strict/--no-strict', is_flag=True, default=None, help=strict_help),\n click.option('-t', '--theme', type=click.Choice(theme_choices), help=theme_help),\n # As with --strict, set the default to None so that this doesn't incorrectly\n # override the config file\n click.option(\n '--use-directory-urls/--no-directory-urls',\n is_flag=True,\n default=None,\n help=use_directory_urls_help,\n ),\n)\n\nPYTHON_VERSION = f\"{sys.version_info.major}.{sys.version_info.minor}\"\n\nPKG_DIR = os.path.dirname(os.path.abspath(__file__))\n\n\[email protected](context_settings=dict(help_option_names=['-h', '--help'], max_content_width=120))\[email protected]_option(\n __version__,\n '-V',\n '--version',\n message=f'%(prog)s, version %(version)s from { PKG_DIR } (Python { PYTHON_VERSION })',\n)\n@common_options\ndef cli():\n \"\"\"\n MkDocs - Project documentation with Markdown.\n \"\"\"\n\n\[email protected](name=\"serve\")\[email protected]('-a', '--dev-addr', help=dev_addr_help, metavar='<IP:PORT>')\[email protected]('--livereload', 'livereload', flag_value='livereload', default=True, hidden=True)\[email protected]('--no-livereload', 'livereload', flag_value='no-livereload', help=no_reload_help)\[email protected]('--dirtyreload', 'build_type', flag_value='dirty', hidden=True)\[email protected]('--dirty', 'build_type', flag_value='dirty', help=serve_dirty_help)\[email protected]('-c', '--clean', 'build_type', flag_value='clean', help=serve_clean_help)\[email protected]('--watch-theme', help=watch_theme_help, is_flag=True)\[email protected](\n '-w', '--watch', help=watch_help, type=click.Path(exists=True), multiple=True, default=[]\n)\n@common_config_options\n@common_options\ndef serve_command(**kwargs):\n \"\"\"Run the builtin development server\"\"\"\n from mkdocs.commands import serve\n\n _enable_warnings()\n serve.serve(**kwargs)\n\n\[email protected](name=\"build\")\[email protected]('-c', '--clean/--dirty', is_flag=True, default=True, help=clean_help)\n@common_config_options\[email protected]('-d', '--site-dir', type=click.Path(), help=site_dir_help)\n@common_options\ndef build_command(clean, **kwargs):\n \"\"\"Build the MkDocs documentation\"\"\"\n from mkdocs.commands import build\n\n 
_enable_warnings()\n cfg = config.load_config(**kwargs)\n cfg.plugins.on_startup(command='build', dirty=not clean)\n try:\n build.build(cfg, dirty=not clean)\n finally:\n cfg.plugins.on_shutdown()\n\n\[email protected](name=\"gh-deploy\")\[email protected]('-c', '--clean/--dirty', is_flag=True, default=True, help=clean_help)\[email protected]('-m', '--message', help=commit_message_help)\[email protected]('-b', '--remote-branch', help=remote_branch_help)\[email protected]('-r', '--remote-name', help=remote_name_help)\[email protected]('--force', is_flag=True, help=force_help)\[email protected]('--no-history', is_flag=True, help=no_history_help)\[email protected]('--ignore-version', is_flag=True, help=ignore_version_help)\[email protected]('--shell', is_flag=True, help=shell_help)\n@common_config_options\[email protected]('-d', '--site-dir', type=click.Path(), help=site_dir_help)\n@common_options\ndef gh_deploy_command(\n clean, message, remote_branch, remote_name, force, no_history, ignore_version, shell, **kwargs\n):\n \"\"\"Deploy your documentation to GitHub Pages\"\"\"\n from mkdocs.commands import build, gh_deploy\n\n _enable_warnings()\n cfg = config.load_config(remote_branch=remote_branch, remote_name=remote_name, **kwargs)\n cfg.plugins.on_startup(command='gh-deploy', dirty=not clean)\n try:\n build.build(cfg, dirty=not clean)\n finally:\n cfg.plugins.on_shutdown()\n gh_deploy.gh_deploy(\n cfg,\n message=message,\n force=force,\n no_history=no_history,\n ignore_version=ignore_version,\n shell=shell,\n )\n\n\[email protected](name=\"get-deps\")\n@verbose_option\[email protected]('-f', '--config-file', type=click.File('rb'), help=config_help)\[email protected](\n '-p',\n '--projects-file',\n default='https://raw.githubusercontent.com/mkdocs/catalog/main/projects.yaml',\n help=projects_file_help,\n show_default=True,\n)\ndef get_deps_command(config_file, projects_file):\n \"\"\"Show required PyPI packages inferred from plugins in mkdocs.yml\"\"\"\n from mkdocs.commands import get_deps\n\n warning_counter = utils.CountHandler()\n warning_counter.setLevel(logging.WARNING)\n logging.getLogger('mkdocs').addHandler(warning_counter)\n\n get_deps.get_deps(projects_file_url=projects_file, config_file_path=config_file)\n\n if warning_counter.get_counts():\n sys.exit(1)\n\n\[email protected](name=\"new\")\[email protected](\"project_directory\")\n@common_options\ndef new_command(project_directory):\n \"\"\"Create a new MkDocs project\"\"\"\n from mkdocs.commands import new\n\n new.new(project_directory)\n\n\nif __name__ == '__main__': # pragma: no cover\n cli()\n", "path": "mkdocs/__main__.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nfrom __future__ import annotations\n\nimport logging\nimport os\nimport shutil\nimport sys\nimport textwrap\nimport traceback\nimport warnings\n\nimport click\n\nfrom mkdocs import __version__, config, utils\n\nif sys.platform.startswith(\"win\"):\n try:\n import colorama\n except ImportError:\n pass\n else:\n colorama.init()\n\nlog = logging.getLogger(__name__)\n\n\ndef _showwarning(message, category, filename, lineno, file=None, line=None):\n try:\n # Last stack frames:\n # * ...\n # * Location of call to deprecated function <-- include this\n # * Location of call to warn() <-- include this\n # * (stdlib) Location of call to showwarning function\n # * (this function) Location of call to extract_stack()\n stack = [frame for frame in traceback.extract_stack() if frame.line][-4:-2]\n # Make sure the actual affected file's name is still present (the 
case of syntax warning):\n if not any(frame.filename == filename for frame in stack):\n stack = stack[-1:] + [traceback.FrameSummary(filename, lineno, '')]\n\n tb = ''.join(traceback.format_list(stack))\n except Exception:\n tb = f' File \"{filename}\", line {lineno}'\n\n log.info(f'{category.__name__}: {message}\\n{tb}')\n\n\ndef _enable_warnings():\n from mkdocs.commands import build\n\n build.log.addFilter(utils.DuplicateFilter())\n\n warnings.simplefilter('module', DeprecationWarning)\n warnings.showwarning = _showwarning\n\n\nclass ColorFormatter(logging.Formatter):\n colors = {\n 'CRITICAL': 'red',\n 'ERROR': 'red',\n 'WARNING': 'yellow',\n 'DEBUG': 'blue',\n }\n\n text_wrapper = textwrap.TextWrapper(\n width=shutil.get_terminal_size(fallback=(0, 0)).columns,\n replace_whitespace=False,\n break_long_words=False,\n break_on_hyphens=False,\n initial_indent=' ' * 11,\n subsequent_indent=' ' * 11,\n )\n\n def format(self, record):\n message = super().format(record)\n prefix = f'{record.levelname:<8}- '\n if record.levelname in self.colors:\n prefix = click.style(prefix, fg=self.colors[record.levelname])\n if self.text_wrapper.width:\n # Only wrap text if a terminal width was detected\n msg = '\\n'.join(self.text_wrapper.fill(line) for line in message.splitlines())\n # Prepend prefix after wrapping so that color codes don't affect length\n return prefix + msg[11:]\n return prefix + message\n\n\nclass State:\n \"\"\"Maintain logging level.\"\"\"\n\n def __init__(self, log_name='mkdocs', level=logging.INFO):\n self.logger = logging.getLogger(log_name)\n # Don't restrict level on logger; use handler\n self.logger.setLevel(1)\n self.logger.propagate = False\n\n self.stream = logging.StreamHandler()\n self.stream.setFormatter(ColorFormatter())\n self.stream.setLevel(level)\n self.stream.name = 'MkDocsStreamHandler'\n self.logger.addHandler(self.stream)\n\n def __del__(self):\n self.logger.removeHandler(self.stream)\n\n\npass_state = click.make_pass_decorator(State, ensure=True)\n\nclean_help = \"Remove old files from the site_dir before building (the default).\"\nconfig_help = (\n \"Provide a specific MkDocs config. This can be a file name, or '-' to read from stdin.\"\n)\ndev_addr_help = \"IP address and port to serve documentation locally (default: localhost:8000)\"\nstrict_help = \"Enable strict mode. This will cause MkDocs to abort the build on any warnings.\"\ntheme_help = \"The theme to use when building your documentation.\"\ntheme_choices = sorted(utils.get_theme_names())\nsite_dir_help = \"The directory to output the result of the documentation build.\"\nuse_directory_urls_help = \"Use directory URLs when building pages (the default).\"\nreload_help = \"Enable the live reloading in the development server (this is the default)\"\nno_reload_help = \"Disable the live reloading in the development server.\"\nserve_dirty_help = \"Only re-build files that have changed.\"\nserve_clean_help = (\n \"Build the site without any effects of `mkdocs serve` - pure `mkdocs build`, then serve.\"\n)\ncommit_message_help = (\n \"A commit message to use when committing to the \"\n \"GitHub Pages remote branch. Commit {sha} and MkDocs {version} are available as expansions\"\n)\nremote_branch_help = (\n \"The remote branch to commit to for GitHub Pages. This \"\n \"overrides the value specified in config\"\n)\nremote_name_help = (\n \"The remote name to commit to for GitHub Pages. 
This overrides the value specified in config\"\n)\nforce_help = \"Force the push to the repository.\"\nno_history_help = \"Replace the whole Git history with one new commit.\"\nignore_version_help = (\n \"Ignore check that build is not being deployed with an older version of MkDocs.\"\n)\nwatch_theme_help = (\n \"Include the theme in list of files to watch for live reloading. \"\n \"Ignored when live reload is not used.\"\n)\nshell_help = \"Use the shell when invoking Git.\"\nwatch_help = \"A directory or file to watch for live reloading. Can be supplied multiple times.\"\nprojects_file_help = (\n \"URL or local path of the registry file that declares all known MkDocs-related projects.\"\n)\n\n\ndef add_options(*opts):\n def inner(f):\n for i in reversed(opts):\n f = i(f)\n return f\n\n return inner\n\n\ndef verbose_option(f):\n def callback(ctx, param, value):\n state = ctx.ensure_object(State)\n if value:\n state.stream.setLevel(logging.DEBUG)\n\n return click.option(\n '-v',\n '--verbose',\n is_flag=True,\n expose_value=False,\n help='Enable verbose output',\n callback=callback,\n )(f)\n\n\ndef quiet_option(f):\n def callback(ctx, param, value):\n state = ctx.ensure_object(State)\n if value:\n state.stream.setLevel(logging.ERROR)\n\n return click.option(\n '-q',\n '--quiet',\n is_flag=True,\n expose_value=False,\n help='Silence warnings',\n callback=callback,\n )(f)\n\n\ndef color_option(f):\n def callback(ctx, param, value):\n state = ctx.ensure_object(State)\n if value is False or (\n value is None\n and (\n not sys.stdout.isatty()\n or os.environ.get('NO_COLOR')\n or os.environ.get('TERM') == 'dumb'\n )\n ):\n state.stream.setFormatter(logging.Formatter('%(levelname)-8s- %(message)s'))\n\n return click.option(\n '--color/--no-color',\n is_flag=True,\n default=None,\n expose_value=False,\n help=\"Force enable or disable color and wrapping for the output. 
Default is auto-detect.\",\n callback=callback,\n )(f)\n\n\ncommon_options = add_options(quiet_option, verbose_option)\ncommon_config_options = add_options(\n click.option('-f', '--config-file', type=click.File('rb'), help=config_help),\n # Don't override config value if user did not specify --strict flag\n # Conveniently, load_config drops None values\n click.option('-s', '--strict/--no-strict', is_flag=True, default=None, help=strict_help),\n click.option('-t', '--theme', type=click.Choice(theme_choices), help=theme_help),\n # As with --strict, set the default to None so that this doesn't incorrectly\n # override the config file\n click.option(\n '--use-directory-urls/--no-directory-urls',\n is_flag=True,\n default=None,\n help=use_directory_urls_help,\n ),\n)\n\nPYTHON_VERSION = f\"{sys.version_info.major}.{sys.version_info.minor}\"\n\nPKG_DIR = os.path.dirname(os.path.abspath(__file__))\n\n\[email protected](context_settings=dict(help_option_names=['-h', '--help'], max_content_width=120))\[email protected]_option(\n __version__,\n '-V',\n '--version',\n message=f'%(prog)s, version %(version)s from { PKG_DIR } (Python { PYTHON_VERSION })',\n)\n@common_options\n@color_option\ndef cli():\n \"\"\"\n MkDocs - Project documentation with Markdown.\n \"\"\"\n\n\[email protected](name=\"serve\")\[email protected]('-a', '--dev-addr', help=dev_addr_help, metavar='<IP:PORT>')\[email protected]('--livereload', 'livereload', flag_value='livereload', default=True, hidden=True)\[email protected]('--no-livereload', 'livereload', flag_value='no-livereload', help=no_reload_help)\[email protected]('--dirtyreload', 'build_type', flag_value='dirty', hidden=True)\[email protected]('--dirty', 'build_type', flag_value='dirty', help=serve_dirty_help)\[email protected]('-c', '--clean', 'build_type', flag_value='clean', help=serve_clean_help)\[email protected]('--watch-theme', help=watch_theme_help, is_flag=True)\[email protected](\n '-w', '--watch', help=watch_help, type=click.Path(exists=True), multiple=True, default=[]\n)\n@common_config_options\n@common_options\ndef serve_command(**kwargs):\n \"\"\"Run the builtin development server\"\"\"\n from mkdocs.commands import serve\n\n _enable_warnings()\n serve.serve(**kwargs)\n\n\[email protected](name=\"build\")\[email protected]('-c', '--clean/--dirty', is_flag=True, default=True, help=clean_help)\n@common_config_options\[email protected]('-d', '--site-dir', type=click.Path(), help=site_dir_help)\n@common_options\ndef build_command(clean, **kwargs):\n \"\"\"Build the MkDocs documentation\"\"\"\n from mkdocs.commands import build\n\n _enable_warnings()\n cfg = config.load_config(**kwargs)\n cfg.plugins.on_startup(command='build', dirty=not clean)\n try:\n build.build(cfg, dirty=not clean)\n finally:\n cfg.plugins.on_shutdown()\n\n\[email protected](name=\"gh-deploy\")\[email protected]('-c', '--clean/--dirty', is_flag=True, default=True, help=clean_help)\[email protected]('-m', '--message', help=commit_message_help)\[email protected]('-b', '--remote-branch', help=remote_branch_help)\[email protected]('-r', '--remote-name', help=remote_name_help)\[email protected]('--force', is_flag=True, help=force_help)\[email protected]('--no-history', is_flag=True, help=no_history_help)\[email protected]('--ignore-version', is_flag=True, help=ignore_version_help)\[email protected]('--shell', is_flag=True, help=shell_help)\n@common_config_options\[email protected]('-d', '--site-dir', type=click.Path(), help=site_dir_help)\n@common_options\ndef gh_deploy_command(\n clean, 
message, remote_branch, remote_name, force, no_history, ignore_version, shell, **kwargs\n):\n \"\"\"Deploy your documentation to GitHub Pages\"\"\"\n from mkdocs.commands import build, gh_deploy\n\n _enable_warnings()\n cfg = config.load_config(remote_branch=remote_branch, remote_name=remote_name, **kwargs)\n cfg.plugins.on_startup(command='gh-deploy', dirty=not clean)\n try:\n build.build(cfg, dirty=not clean)\n finally:\n cfg.plugins.on_shutdown()\n gh_deploy.gh_deploy(\n cfg,\n message=message,\n force=force,\n no_history=no_history,\n ignore_version=ignore_version,\n shell=shell,\n )\n\n\[email protected](name=\"get-deps\")\n@verbose_option\[email protected]('-f', '--config-file', type=click.File('rb'), help=config_help)\[email protected](\n '-p',\n '--projects-file',\n default='https://raw.githubusercontent.com/mkdocs/catalog/main/projects.yaml',\n help=projects_file_help,\n show_default=True,\n)\ndef get_deps_command(config_file, projects_file):\n \"\"\"Show required PyPI packages inferred from plugins in mkdocs.yml\"\"\"\n from mkdocs.commands import get_deps\n\n warning_counter = utils.CountHandler()\n warning_counter.setLevel(logging.WARNING)\n logging.getLogger('mkdocs').addHandler(warning_counter)\n\n get_deps.get_deps(projects_file_url=projects_file, config_file_path=config_file)\n\n if warning_counter.get_counts():\n sys.exit(1)\n\n\[email protected](name=\"new\")\[email protected](\"project_directory\")\n@common_options\ndef new_command(project_directory):\n \"\"\"Create a new MkDocs project\"\"\"\n from mkdocs.commands import new\n\n new.new(project_directory)\n\n\nif __name__ == '__main__': # pragma: no cover\n cli()\n", "path": "mkdocs/__main__.py"}]} | 3,849 | 560 |
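In short, the patch above gives the CLI an explicit `--color/--no-color` override and otherwise auto-detects whether to emit ANSI colors and wrapping. A standalone sketch of that auto-detect rule, using the same three conditions as the diff (`value` stands in for the parsed Click flag; this is an illustration, not the actual callback):

```python
import os
import sys

def use_color(value=None) -> bool:
    """value: True for --color, False for --no-color, None when the flag is absent."""
    if value is not None:
        return value
    return (
        sys.stdout.isatty()
        and not os.environ.get("NO_COLOR")
        and os.environ.get("TERM") != "dumb"
    )
```

With this behavior, `NO_COLOR=1 mkdocs build` and `mkdocs build --no-color` both fall back to the plain `%(levelname)-8s- %(message)s` formatter from the diff.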
gh_patches_debug_36217 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1090 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Botocore - S3 spans throwing a "validation error: span: context: destination: service: 'name' required"
**A Flask Python application doesn't record the span for an S3 call**:
**To Reproduce**
1. A Flask application performs an S3 call
2. In the console you will see the exception `Failed to submit message: 'HTTP 400: {"accepted":3,"errors":[{"message":"validation error: span: context: destination: service: \'name\' required",...`
**Environment (please complete the following information)**
- OS: Linux
- Python version: 3.6
- Framework and version: Flask 1.1.2
- APM Server version: v7.12.0
- Agent version: 6.1.0
**From APM Server version 7.12, the `name` field is required**
The problem is located in `elasticapm/instrumentation/packages/botocore.py`:
`context["destination"]["service"] = {"type": span_type}`
The `destination.service` object is created without a `destination.service.name` element.
IMHO, `destination.service.name` should be set here the same way it is in `elasticapm/instrumentation/packages/elasticsearch.py`.
--- END ISSUE ---
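A sketch of what the reporter is asking for — giving `destination.service` the `name` that the 7.12 schema requires. Variable names follow the `handle_s3` handler quoted further below, and the exact shape is an assumption modeled on the issue text rather than the merged patch:

```python
# Hypothetical adjustment inside handle_s3(): build a complete service object
# instead of the bare {"type": span_type} flagged in the issue.
context["destination"]["service"] = {
    "name": span_subtype,   # e.g. "s3" -- the field the APM Server rejects as missing
    "resource": bucket,     # assumption: mirrors how elasticsearch.py fills resource
    "type": span_type,
}
```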
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/instrumentation/packages/botocore.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from collections import namedtuple
32
33 from elasticapm.instrumentation.packages.base import AbstractInstrumentedModule
34 from elasticapm.traces import capture_span
35 from elasticapm.utils.compat import urlparse
36
37 HandlerInfo = namedtuple("HandlerInfo", ("signature", "span_type", "span_subtype", "span_action", "context"))
38
39 # Used for boto3 < 1.7
40 endpoint_to_service_id = {"SNS": "SNS", "S3": "S3", "DYNAMODB": "DynamoDB", "SQS": "SQS"}
41
42
43 class BotocoreInstrumentation(AbstractInstrumentedModule):
44 name = "botocore"
45
46 instrument_list = [("botocore.client", "BaseClient._make_api_call")]
47
48 def call(self, module, method, wrapped, instance, args, kwargs):
49 if "operation_name" in kwargs:
50 operation_name = kwargs["operation_name"]
51 else:
52 operation_name = args[0]
53
54 service_model = instance.meta.service_model
55 if hasattr(service_model, "service_id"): # added in boto3 1.7
56 service = service_model.service_id
57 else:
58 service = service_model.service_name.upper()
59 service = endpoint_to_service_id.get(service, service)
60
61 parsed_url = urlparse.urlparse(instance.meta.endpoint_url)
62 context = {
63 "destination": {
64 "address": parsed_url.hostname,
65 "port": parsed_url.port,
66 "cloud": {"region": instance.meta.region_name},
67 }
68 }
69
70 handler_info = None
71 handler = handlers.get(service, False)
72 if handler:
73 handler_info = handler(operation_name, service, instance, args, kwargs, context)
74 if not handler_info:
75 handler_info = handle_default(operation_name, service, instance, args, kwargs, context)
76
77 with capture_span(
78 handler_info.signature,
79 span_type=handler_info.span_type,
80 leaf=True,
81 span_subtype=handler_info.span_subtype,
82 span_action=handler_info.span_action,
83 extra=handler_info.context,
84 ):
85 return wrapped(*args, **kwargs)
86
87
88 def handle_s3(operation_name, service, instance, args, kwargs, context):
89 span_type = "storage"
90 span_subtype = "s3"
91 span_action = operation_name
92 if len(args) > 1 and "Bucket" in args[1]:
93 bucket = args[1]["Bucket"]
94 else:
95 # TODO handle Access Points
96 bucket = ""
97 signature = f"S3 {operation_name} {bucket}"
98
99 context["destination"]["name"] = span_subtype
100 context["destination"]["resource"] = bucket
101 context["destination"]["service"] = {"type": span_type}
102
103 return HandlerInfo(signature, span_type, span_subtype, span_action, context)
104
105
106 def handle_dynamodb(operation_name, service, instance, args, kwargs, context):
107 span_type = "db"
108 span_subtype = "dynamodb"
109 span_action = "query"
110 if len(args) > 1 and "TableName" in args[1]:
111 table = args[1]["TableName"]
112 else:
113 table = ""
114 signature = f"DynamoDB {operation_name} {table}".rstrip()
115
116 context["db"] = {"type": "dynamodb", "instance": instance.meta.region_name}
117 if operation_name == "Query" and len(args) > 1 and "KeyConditionExpression" in args[1]:
118 context["db"]["statement"] = args[1]["KeyConditionExpression"]
119
120 context["destination"]["name"] = span_subtype
121 context["destination"]["resource"] = table
122 context["destination"]["service"] = {"type": span_type}
123 return HandlerInfo(signature, span_type, span_subtype, span_action, context)
124
125
126 def handle_sns(operation_name, service, instance, args, kwargs, context):
127 if operation_name != "Publish":
128 # only "publish" is handled specifically, other endpoints get the default treatment
129 return False
130 span_type = "messaging"
131 span_subtype = "sns"
132 span_action = "send"
133 topic_name = ""
134 if len(args) > 1:
135 if "Name" in args[1]:
136 topic_name = args[1]["Name"]
137 if "TopicArn" in args[1]:
138 topic_name = args[1]["TopicArn"].rsplit(":", maxsplit=1)[-1]
139 signature = f"SNS {operation_name} {topic_name}".rstrip()
140 context["destination"]["name"] = span_subtype
141 context["destination"]["resource"] = f"{span_subtype}/{topic_name}" if topic_name else span_subtype
142 context["destination"]["type"] = span_type
143 return HandlerInfo(signature, span_type, span_subtype, span_action, context)
144
145
146 def handle_sqs(operation_name, service, instance, args, kwargs, destination):
147 pass
148
149
150 def handle_default(operation_name, service, instance, args, kwargs, destination):
151 span_type = "aws"
152 span_subtype = service.lower()
153 span_action = operation_name
154
155 signature = f"{service}:{operation_name}"
156 return HandlerInfo(signature, span_type, span_subtype, span_action, destination)
157
158
159 handlers = {
160 "S3": handle_s3,
161 "DynamoDB": handle_dynamodb,
162 "SNS": handle_sns,
163 "default": handle_default,
164 }
165
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticapm/instrumentation/packages/botocore.py b/elasticapm/instrumentation/packages/botocore.py
--- a/elasticapm/instrumentation/packages/botocore.py
+++ b/elasticapm/instrumentation/packages/botocore.py
@@ -96,9 +96,7 @@
bucket = ""
signature = f"S3 {operation_name} {bucket}"
- context["destination"]["name"] = span_subtype
- context["destination"]["resource"] = bucket
- context["destination"]["service"] = {"type": span_type}
+ context["destination"]["service"] = {"name": span_subtype, "resource": bucket, "type": span_type}
return HandlerInfo(signature, span_type, span_subtype, span_action, context)
@@ -117,9 +115,7 @@
if operation_name == "Query" and len(args) > 1 and "KeyConditionExpression" in args[1]:
context["db"]["statement"] = args[1]["KeyConditionExpression"]
- context["destination"]["name"] = span_subtype
- context["destination"]["resource"] = table
- context["destination"]["service"] = {"type": span_type}
+ context["destination"]["service"] = {"name": span_subtype, "resource": table, "type": span_type}
return HandlerInfo(signature, span_type, span_subtype, span_action, context)
@@ -137,9 +133,11 @@
if "TopicArn" in args[1]:
topic_name = args[1]["TopicArn"].rsplit(":", maxsplit=1)[-1]
signature = f"SNS {operation_name} {topic_name}".rstrip()
- context["destination"]["name"] = span_subtype
- context["destination"]["resource"] = f"{span_subtype}/{topic_name}" if topic_name else span_subtype
- context["destination"]["type"] = span_type
+ context["destination"]["service"] = {
+ "name": span_subtype,
+ "resource": f"{span_subtype}/{topic_name}" if topic_name else span_subtype,
+ "type": span_type,
+ }
return HandlerInfo(signature, span_type, span_subtype, span_action, context)
@@ -152,6 +150,8 @@
span_subtype = service.lower()
span_action = operation_name
+ destination["service"] = {"name": span_subtype, "resource": span_subtype, "type": span_type}
+
signature = f"{service}:{operation_name}"
return HandlerInfo(signature, span_type, span_subtype, span_action, destination)
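
A self-contained sketch of the destination context that the patched S3 handler ends up building; the endpoint hostname, port, region, and bucket below are made-up values for illustration, and only the `service` sub-dict mirrors the diff above:

```python
# Illustrative only: mirrors the shape produced by the patched handle_s3.
span_type, span_subtype, bucket = "storage", "s3", "example-bucket"

context = {
    "destination": {
        "address": "s3.eu-west-1.amazonaws.com",  # assumed endpoint hostname
        "port": 443,
        "cloud": {"region": "eu-west-1"},
        # The fix: name/resource/type are grouped under destination.service,
        # so the APM server 7.12+ check for destination.service.name passes.
        "service": {"name": span_subtype, "resource": bucket, "type": span_type},
    }
}

assert context["destination"]["service"]["name"] == "s3"
```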
| {"golden_diff": "diff --git a/elasticapm/instrumentation/packages/botocore.py b/elasticapm/instrumentation/packages/botocore.py\n--- a/elasticapm/instrumentation/packages/botocore.py\n+++ b/elasticapm/instrumentation/packages/botocore.py\n@@ -96,9 +96,7 @@\n bucket = \"\"\n signature = f\"S3 {operation_name} {bucket}\"\n \n- context[\"destination\"][\"name\"] = span_subtype\n- context[\"destination\"][\"resource\"] = bucket\n- context[\"destination\"][\"service\"] = {\"type\": span_type}\n+ context[\"destination\"][\"service\"] = {\"name\": span_subtype, \"resource\": bucket, \"type\": span_type}\n \n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n \n@@ -117,9 +115,7 @@\n if operation_name == \"Query\" and len(args) > 1 and \"KeyConditionExpression\" in args[1]:\n context[\"db\"][\"statement\"] = args[1][\"KeyConditionExpression\"]\n \n- context[\"destination\"][\"name\"] = span_subtype\n- context[\"destination\"][\"resource\"] = table\n- context[\"destination\"][\"service\"] = {\"type\": span_type}\n+ context[\"destination\"][\"service\"] = {\"name\": span_subtype, \"resource\": table, \"type\": span_type}\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n \n \n@@ -137,9 +133,11 @@\n if \"TopicArn\" in args[1]:\n topic_name = args[1][\"TopicArn\"].rsplit(\":\", maxsplit=1)[-1]\n signature = f\"SNS {operation_name} {topic_name}\".rstrip()\n- context[\"destination\"][\"name\"] = span_subtype\n- context[\"destination\"][\"resource\"] = f\"{span_subtype}/{topic_name}\" if topic_name else span_subtype\n- context[\"destination\"][\"type\"] = span_type\n+ context[\"destination\"][\"service\"] = {\n+ \"name\": span_subtype,\n+ \"resource\": f\"{span_subtype}/{topic_name}\" if topic_name else span_subtype,\n+ \"type\": span_type,\n+ }\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n \n \n@@ -152,6 +150,8 @@\n span_subtype = service.lower()\n span_action = operation_name\n \n+ destination[\"service\"] = {\"name\": span_subtype, \"resource\": span_subtype, \"type\": span_type}\n+\n signature = f\"{service}:{operation_name}\"\n return HandlerInfo(signature, span_type, span_subtype, span_action, destination)\n", "issue": "Botomongo - S3 spans throwing an \"validation error: span: context: destination: service: 'name' required\"\n**Flask Python application doesn't record the SPAN with S3 call**:\r\n\r\n**To Reproduce**\r\n\r\n1. Flask Application doing S3 call\r\n2. 
In console you will see the exception `Failed to submit message: 'HTTP 400: {\"accepted\":3,\"errors\":[{\"message\":\"validation error: span: context: destination: service: \\'name\\' required\",...`\r\n\r\n**Environment (please complete the following information)**\r\n- OS: Linux\r\n- Python version: 3.6\r\n- Framework and version: Flask 1.1.2\r\n- APM Server version: v7.12.0\r\n- Agent version: 6.1.0\r\n\r\n\r\n**From the APM version 7.12 name field is required**\r\n\r\nProblem is located here:\r\nelasticapm/instrumentation/packages/botocore.py\r\n`context[\"destination\"][\"service\"] = {\"type\": span_type}`\r\nfor destination.service there is no destination.service.name element\r\n\r\nIMHO: destination.service.name should be set as in the elasticapm/instrumentation/packages/elasticsearch.py\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom collections import namedtuple\n\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.traces import capture_span\nfrom elasticapm.utils.compat import urlparse\n\nHandlerInfo = namedtuple(\"HandlerInfo\", (\"signature\", \"span_type\", \"span_subtype\", \"span_action\", \"context\"))\n\n# Used for boto3 < 1.7\nendpoint_to_service_id = {\"SNS\": \"SNS\", \"S3\": \"S3\", \"DYNAMODB\": \"DynamoDB\", \"SQS\": \"SQS\"}\n\n\nclass BotocoreInstrumentation(AbstractInstrumentedModule):\n name = \"botocore\"\n\n instrument_list = [(\"botocore.client\", \"BaseClient._make_api_call\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if \"operation_name\" in kwargs:\n operation_name = kwargs[\"operation_name\"]\n else:\n operation_name = args[0]\n\n service_model = instance.meta.service_model\n if hasattr(service_model, \"service_id\"): # added in boto3 1.7\n service = service_model.service_id\n else:\n service = service_model.service_name.upper()\n service = endpoint_to_service_id.get(service, service)\n\n parsed_url = urlparse.urlparse(instance.meta.endpoint_url)\n context = {\n \"destination\": {\n \"address\": parsed_url.hostname,\n \"port\": parsed_url.port,\n \"cloud\": {\"region\": instance.meta.region_name},\n }\n }\n\n handler_info = None\n handler = handlers.get(service, False)\n if handler:\n handler_info = handler(operation_name, service, instance, args, kwargs, context)\n if not handler_info:\n handler_info = handle_default(operation_name, service, instance, args, kwargs, context)\n\n with capture_span(\n handler_info.signature,\n span_type=handler_info.span_type,\n leaf=True,\n span_subtype=handler_info.span_subtype,\n span_action=handler_info.span_action,\n extra=handler_info.context,\n ):\n return wrapped(*args, **kwargs)\n\n\ndef handle_s3(operation_name, service, instance, args, kwargs, context):\n span_type = \"storage\"\n span_subtype = \"s3\"\n span_action = operation_name\n if len(args) > 1 and \"Bucket\" in args[1]:\n bucket = args[1][\"Bucket\"]\n else:\n # TODO handle Access Points\n bucket = \"\"\n signature = f\"S3 {operation_name} {bucket}\"\n\n context[\"destination\"][\"name\"] = span_subtype\n context[\"destination\"][\"resource\"] = bucket\n context[\"destination\"][\"service\"] = {\"type\": span_type}\n\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n\n\ndef handle_dynamodb(operation_name, service, instance, args, kwargs, context):\n span_type = \"db\"\n span_subtype = \"dynamodb\"\n span_action = \"query\"\n if len(args) > 1 and \"TableName\" in args[1]:\n table = args[1][\"TableName\"]\n else:\n table = \"\"\n signature = f\"DynamoDB {operation_name} {table}\".rstrip()\n\n context[\"db\"] = {\"type\": \"dynamodb\", \"instance\": instance.meta.region_name}\n if operation_name == \"Query\" and len(args) > 1 and \"KeyConditionExpression\" in args[1]:\n context[\"db\"][\"statement\"] = args[1][\"KeyConditionExpression\"]\n\n 
context[\"destination\"][\"name\"] = span_subtype\n context[\"destination\"][\"resource\"] = table\n context[\"destination\"][\"service\"] = {\"type\": span_type}\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n\n\ndef handle_sns(operation_name, service, instance, args, kwargs, context):\n if operation_name != \"Publish\":\n # only \"publish\" is handled specifically, other endpoints get the default treatment\n return False\n span_type = \"messaging\"\n span_subtype = \"sns\"\n span_action = \"send\"\n topic_name = \"\"\n if len(args) > 1:\n if \"Name\" in args[1]:\n topic_name = args[1][\"Name\"]\n if \"TopicArn\" in args[1]:\n topic_name = args[1][\"TopicArn\"].rsplit(\":\", maxsplit=1)[-1]\n signature = f\"SNS {operation_name} {topic_name}\".rstrip()\n context[\"destination\"][\"name\"] = span_subtype\n context[\"destination\"][\"resource\"] = f\"{span_subtype}/{topic_name}\" if topic_name else span_subtype\n context[\"destination\"][\"type\"] = span_type\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n\n\ndef handle_sqs(operation_name, service, instance, args, kwargs, destination):\n pass\n\n\ndef handle_default(operation_name, service, instance, args, kwargs, destination):\n span_type = \"aws\"\n span_subtype = service.lower()\n span_action = operation_name\n\n signature = f\"{service}:{operation_name}\"\n return HandlerInfo(signature, span_type, span_subtype, span_action, destination)\n\n\nhandlers = {\n \"S3\": handle_s3,\n \"DynamoDB\": handle_dynamodb,\n \"SNS\": handle_sns,\n \"default\": handle_default,\n}\n", "path": "elasticapm/instrumentation/packages/botocore.py"}], "after_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom collections import namedtuple\n\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.traces import capture_span\nfrom elasticapm.utils.compat import urlparse\n\nHandlerInfo = namedtuple(\"HandlerInfo\", (\"signature\", \"span_type\", \"span_subtype\", \"span_action\", \"context\"))\n\n# Used for boto3 < 1.7\nendpoint_to_service_id = {\"SNS\": \"SNS\", \"S3\": \"S3\", \"DYNAMODB\": \"DynamoDB\", \"SQS\": \"SQS\"}\n\n\nclass BotocoreInstrumentation(AbstractInstrumentedModule):\n name = \"botocore\"\n\n instrument_list = [(\"botocore.client\", \"BaseClient._make_api_call\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if \"operation_name\" in kwargs:\n operation_name = kwargs[\"operation_name\"]\n else:\n operation_name = args[0]\n\n service_model = instance.meta.service_model\n if hasattr(service_model, \"service_id\"): # added in boto3 1.7\n service = service_model.service_id\n else:\n service = service_model.service_name.upper()\n service = endpoint_to_service_id.get(service, service)\n\n parsed_url = urlparse.urlparse(instance.meta.endpoint_url)\n context = {\n \"destination\": {\n \"address\": parsed_url.hostname,\n \"port\": parsed_url.port,\n \"cloud\": {\"region\": instance.meta.region_name},\n }\n }\n\n handler_info = None\n handler = handlers.get(service, False)\n if handler:\n handler_info = handler(operation_name, service, instance, args, kwargs, context)\n if not handler_info:\n handler_info = handle_default(operation_name, service, instance, args, kwargs, context)\n\n with capture_span(\n handler_info.signature,\n span_type=handler_info.span_type,\n leaf=True,\n span_subtype=handler_info.span_subtype,\n span_action=handler_info.span_action,\n extra=handler_info.context,\n ):\n return wrapped(*args, **kwargs)\n\n\ndef handle_s3(operation_name, service, instance, args, kwargs, context):\n span_type = \"storage\"\n span_subtype = \"s3\"\n span_action = operation_name\n if len(args) > 1 and \"Bucket\" in args[1]:\n bucket = args[1][\"Bucket\"]\n else:\n # TODO handle Access Points\n bucket = \"\"\n signature = f\"S3 {operation_name} {bucket}\"\n\n context[\"destination\"][\"service\"] = {\"name\": span_subtype, \"resource\": bucket, \"type\": span_type}\n\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n\n\ndef handle_dynamodb(operation_name, service, instance, args, kwargs, context):\n span_type = \"db\"\n span_subtype = \"dynamodb\"\n span_action = \"query\"\n if len(args) > 1 and \"TableName\" in args[1]:\n table = args[1][\"TableName\"]\n else:\n table = \"\"\n signature = f\"DynamoDB {operation_name} {table}\".rstrip()\n\n context[\"db\"] = {\"type\": \"dynamodb\", \"instance\": instance.meta.region_name}\n if operation_name == \"Query\" and len(args) > 1 and \"KeyConditionExpression\" in args[1]:\n context[\"db\"][\"statement\"] = args[1][\"KeyConditionExpression\"]\n\n context[\"destination\"][\"service\"] = {\"name\": span_subtype, 
\"resource\": table, \"type\": span_type}\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n\n\ndef handle_sns(operation_name, service, instance, args, kwargs, context):\n if operation_name != \"Publish\":\n # only \"publish\" is handled specifically, other endpoints get the default treatment\n return False\n span_type = \"messaging\"\n span_subtype = \"sns\"\n span_action = \"send\"\n topic_name = \"\"\n if len(args) > 1:\n if \"Name\" in args[1]:\n topic_name = args[1][\"Name\"]\n if \"TopicArn\" in args[1]:\n topic_name = args[1][\"TopicArn\"].rsplit(\":\", maxsplit=1)[-1]\n signature = f\"SNS {operation_name} {topic_name}\".rstrip()\n context[\"destination\"][\"service\"] = {\n \"name\": span_subtype,\n \"resource\": f\"{span_subtype}/{topic_name}\" if topic_name else span_subtype,\n \"type\": span_type,\n }\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n\n\ndef handle_sqs(operation_name, service, instance, args, kwargs, destination):\n pass\n\n\ndef handle_default(operation_name, service, instance, args, kwargs, destination):\n span_type = \"aws\"\n span_subtype = service.lower()\n span_action = operation_name\n\n destination[\"service\"] = {\"name\": span_subtype, \"resource\": span_subtype, \"type\": span_type}\n\n signature = f\"{service}:{operation_name}\"\n return HandlerInfo(signature, span_type, span_subtype, span_action, destination)\n\n\nhandlers = {\n \"S3\": handle_s3,\n \"DynamoDB\": handle_dynamodb,\n \"SNS\": handle_sns,\n \"default\": handle_default,\n}\n", "path": "elasticapm/instrumentation/packages/botocore.py"}]} | 2,391 | 576 |
gh_patches_debug_2122 | rasdani/github-patches | git_diff | docker__docker-py-3099 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to use docker to run containers started from Jupyter Notebook
Hello,
I'm currently following this tutorial: https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker-pipelines/tabular/local-mode/sagemaker-pipelines-local-mode.ipynb
I'm getting the following error from trying to run it (it uses docker in the background to run containers). Everything is executed when the following command is run:
``` python
execution = pipeline.start()
```
I get the following error:
```python
Creating q0r36pja78-algo-1-ywafn ...
Creating q0r36pja78-algo-1-ywafn ... done
Attaching to q0r36pja78-algo-1-ywafn
Traceback (most recent call last):
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\Scripts\docker-compose.exe\__main__.py", line 7, in <module>
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\lib\site-packages\compose\cli\main.py", line 81, in main
command_func()
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\lib\site-packages\compose\cli\main.py", line 203, in perform_command
handler(command, command_options)
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\lib\site-packages\compose\metrics\decorator.py", line 18, in wrapper
result = fn(*args, **kwargs)
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\lib\site-packages\compose\cli\main.py", line 1216, in up
cascade_starter = log_printer.run()
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\lib\site-packages\compose\cli\log_printer.py", line 88, in run
for line in consume_queue(queue, self.cascade_stop):
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\lib\site-packages\compose\cli\log_printer.py", line 250, in consume_queue
raise item.exc
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\lib\site-packages\compose\cli\log_printer.py", line 162, in tail_container_logs
for item in build_log_generator(container, log_args):
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\lib\site-packages\compose\utils.py", line 50, in split_buffer
for data in stream_as_text(stream):
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\lib\site-packages\compose\utils.py", line 26, in stream_as_text
for data in stream:
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\lib\site-packages\docker\types\daemon.py", line 32, in __next__
return next(self._stream)
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\lib\site-packages\docker\api\client.py", line 418, in <genexpr>
gen = (data for (_, data) in gen)
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\lib\site-packages\docker\utils\socket.py", line 95, in <genexpr>
return ((STDOUT, frame) for frame in frames_iter_tty(socket))
File "C:\Users\franc\anaconda3\envs\sm-pipelines-modelbuild\lib\site-packages\docker\utils\socket.py", line 128, in frames_iter_tty
if len(result) == 0:
TypeError: object of type 'int' has no len()
Pipeline step 'AbaloneProcess' FAILED. Failure message is: RuntimeError: Failed to run: ['docker-compose', '-f', 'C:\\Users\\franc\\AppData\\Local\\Temp\\tmpga4umz96\\docker-compose.yaml', 'up', '--build', '--abort-on-container-exit']
Pipeline execution c0a11456-aec5-48ec-adde-4ee45085efa8 FAILED because step 'AbaloneProcess' failed.
```
Version of the modules:
Python 3.8.16
docker 6.0.1
docker-compose 1.29.2
docker desktop 4.16.3
Thanks
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/utils/socket.py`
Content:
```
1 import errno
2 import os
3 import select
4 import socket as pysocket
5 import struct
6
7 try:
8 from ..transport import NpipeSocket
9 except ImportError:
10 NpipeSocket = type(None)
11
12
13 STDOUT = 1
14 STDERR = 2
15
16
17 class SocketError(Exception):
18 pass
19
20
21 # NpipeSockets have their own error types
22 # pywintypes.error: (109, 'ReadFile', 'The pipe has been ended.')
23 NPIPE_ENDED = 109
24
25
26 def read(socket, n=4096):
27 """
28 Reads at most n bytes from socket
29 """
30
31 recoverable_errors = (errno.EINTR, errno.EDEADLK, errno.EWOULDBLOCK)
32
33 if not isinstance(socket, NpipeSocket):
34 select.select([socket], [], [])
35
36 try:
37 if hasattr(socket, 'recv'):
38 return socket.recv(n)
39 if isinstance(socket, getattr(pysocket, 'SocketIO')):
40 return socket.read(n)
41 return os.read(socket.fileno(), n)
42 except OSError as e:
43 if e.errno not in recoverable_errors:
44 raise
45 except Exception as e:
46 is_pipe_ended = (isinstance(socket, NpipeSocket) and
47 len(e.args) > 0 and
48 e.args[0] == NPIPE_ENDED)
49 if is_pipe_ended:
50 # npipes don't support duplex sockets, so we interpret
51 # a PIPE_ENDED error as a close operation (0-length read).
52 return 0
53 raise
54
55
56 def read_exactly(socket, n):
57 """
58 Reads exactly n bytes from socket
59 Raises SocketError if there isn't enough data
60 """
61 data = bytes()
62 while len(data) < n:
63 next_data = read(socket, n - len(data))
64 if not next_data:
65 raise SocketError("Unexpected EOF")
66 data += next_data
67 return data
68
69
70 def next_frame_header(socket):
71 """
72 Returns the stream and size of the next frame of data waiting to be read
73 from socket, according to the protocol defined here:
74
75 https://docs.docker.com/engine/api/v1.24/#attach-to-a-container
76 """
77 try:
78 data = read_exactly(socket, 8)
79 except SocketError:
80 return (-1, -1)
81
82 stream, actual = struct.unpack('>BxxxL', data)
83 return (stream, actual)
84
85
86 def frames_iter(socket, tty):
87 """
88 Return a generator of frames read from socket. A frame is a tuple where
89 the first item is the stream number and the second item is a chunk of data.
90
91 If the tty setting is enabled, the streams are multiplexed into the stdout
92 stream.
93 """
94 if tty:
95 return ((STDOUT, frame) for frame in frames_iter_tty(socket))
96 else:
97 return frames_iter_no_tty(socket)
98
99
100 def frames_iter_no_tty(socket):
101 """
102 Returns a generator of data read from the socket when the tty setting is
103 not enabled.
104 """
105 while True:
106 (stream, n) = next_frame_header(socket)
107 if n < 0:
108 break
109 while n > 0:
110 result = read(socket, n)
111 if result is None:
112 continue
113 data_length = len(result)
114 if data_length == 0:
115 # We have reached EOF
116 return
117 n -= data_length
118 yield (stream, result)
119
120
121 def frames_iter_tty(socket):
122 """
123 Return a generator of data read from the socket when the tty setting is
124 enabled.
125 """
126 while True:
127 result = read(socket)
128 if len(result) == 0:
129 # We have reached EOF
130 return
131 yield result
132
133
134 def consume_socket_output(frames, demux=False):
135 """
136 Iterate through frames read from the socket and return the result.
137
138 Args:
139
140 demux (bool):
141 If False, stdout and stderr are multiplexed, and the result is the
142 concatenation of all the frames. If True, the streams are
143 demultiplexed, and the result is a 2-tuple where each item is the
144 concatenation of frames belonging to the same stream.
145 """
146 if demux is False:
147 # If the streams are multiplexed, the generator returns strings, that
148 # we just need to concatenate.
149 return bytes().join(frames)
150
151 # If the streams are demultiplexed, the generator yields tuples
152 # (stdout, stderr)
153 out = [None, None]
154 for frame in frames:
155 # It is guaranteed that for each frame, one and only one stream
156 # is not None.
157 assert frame != (None, None)
158 if frame[0] is not None:
159 if out[0] is None:
160 out[0] = frame[0]
161 else:
162 out[0] += frame[0]
163 else:
164 if out[1] is None:
165 out[1] = frame[1]
166 else:
167 out[1] += frame[1]
168 return tuple(out)
169
170
171 def demux_adaptor(stream_id, data):
172 """
173 Utility to demultiplex stdout and stderr when reading frames from the
174 socket.
175 """
176 if stream_id == STDOUT:
177 return (data, None)
178 elif stream_id == STDERR:
179 return (None, data)
180 else:
181 raise ValueError(f'{stream_id} is not a valid stream')
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/utils/socket.py b/docker/utils/socket.py
--- a/docker/utils/socket.py
+++ b/docker/utils/socket.py
@@ -49,7 +49,7 @@
if is_pipe_ended:
# npipes don't support duplex sockets, so we interpret
# a PIPE_ENDED error as a close operation (0-length read).
- return 0
+ return ''
raise
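
A minimal, dependency-free sketch (stub functions stand in for real npipe sockets) of why changing the sentinel from `0` to `''` matters to the tty frame reader, which calls `len()` on whatever `read()` returns:

```python
def read_closed_pipe_old():
    return 0    # pre-patch sentinel for a PIPE_ENDED error

def read_closed_pipe_new():
    return ''   # patched sentinel: behaves like a 0-length read

def frames(read_fn):
    # Same pattern as frames_iter_tty: stop on a 0-length read.
    while True:
        result = read_fn()
        if len(result) == 0:  # raises TypeError if result is the int 0
            return
        yield result

print(list(frames(read_closed_pipe_new)))   # [] -- terminates cleanly

try:
    list(frames(read_closed_pipe_old))
except TypeError as exc:
    print(exc)  # object of type 'int' has no len()
```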
| {"golden_diff": "diff --git a/docker/utils/socket.py b/docker/utils/socket.py\n--- a/docker/utils/socket.py\n+++ b/docker/utils/socket.py\n@@ -49,7 +49,7 @@\n if is_pipe_ended:\n # npipes don't support duplex sockets, so we interpret\n # a PIPE_ENDED error as a close operation (0-length read).\n- return 0\n+ return ''\n raise\n", "issue": "Unable to use docker to run containers started from Jupyter Notebook\nHello,\r\n\r\nI'm currently following this tutorial: https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker-pipelines/tabular/local-mode/sagemaker-pipelines-local-mode.ipynb\r\n\r\nI'm getting the following error from trying to run it (it uses docker in the background to run contaiiners). It executes everything when the following command is run: \r\n\r\n``` python\r\nexecution = pipeline.start()\r\n\r\n```\r\n\r\nI get the following error:\r\n\r\n```python\r\nCreating q0r36pja78-algo-1-ywafn ... \r\nCreating q0r36pja78-algo-1-ywafn ... done\r\nAttaching to q0r36pja78-algo-1-ywafn\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\lib\\runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\Scripts\\docker-compose.exe\\__main__.py\", line 7, in <module>\r\n File \"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\lib\\site-packages\\compose\\cli\\main.py\", line 81, in main\r\n command_func()\r\n File \"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\lib\\site-packages\\compose\\cli\\main.py\", line 203, in perform_command\r\n handler(command, command_options)\r\n File \"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\lib\\site-packages\\compose\\metrics\\decorator.py\", line 18, in wrapper\r\n result = fn(*args, **kwargs)\r\n File \"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\lib\\site-packages\\compose\\cli\\main.py\", line 1216, in up\r\n cascade_starter = log_printer.run()\r\n File \"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\lib\\site-packages\\compose\\cli\\log_printer.py\", line 88, in run\r\n for line in consume_queue(queue, self.cascade_stop):\r\n File \"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\lib\\site-packages\\compose\\cli\\log_printer.py\", line 250, in consume_queue\r\n raise item.exc\r\n File \"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\lib\\site-packages\\compose\\cli\\log_printer.py\", line 162, in tail_container_logs\r\n for item in build_log_generator(container, log_args):\r\n File \"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\lib\\site-packages\\compose\\utils.py\", line 50, in split_buffer\r\n for data in stream_as_text(stream):\r\n File \"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\lib\\site-packages\\compose\\utils.py\", line 26, in stream_as_text\r\n for data in stream:\r\n File \"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\lib\\site-packages\\docker\\types\\daemon.py\", line 32, in __next__\r\n return next(self._stream)\r\n File \"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\lib\\site-packages\\docker\\api\\client.py\", line 418, in <genexpr>\r\n gen = (data for (_, data) in gen)\r\n File 
\"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\lib\\site-packages\\docker\\utils\\socket.py\", line 95, in <genexpr>\r\n return ((STDOUT, frame) for frame in frames_iter_tty(socket))\r\n File \"C:\\Users\\franc\\anaconda3\\envs\\sm-pipelines-modelbuild\\lib\\site-packages\\docker\\utils\\socket.py\", line 128, in frames_iter_tty\r\n if len(result) == 0:\r\nTypeError: object of type 'int' has no len()\r\nPipeline step 'AbaloneProcess' FAILED. Failure message is: RuntimeError: Failed to run: ['docker-compose', '-f', 'C:\\\\Users\\\\franc\\\\AppData\\\\Local\\\\Temp\\\\tmpga4umz96\\\\docker-compose.yaml', 'up', '--build', '--abort-on-container-exit']\r\nPipeline execution c0a11456-aec5-48ec-adde-4ee45085efa8 FAILED because step 'AbaloneProcess' failed.\r\n\r\n```\r\n\r\nVersion of the modules:\r\nPython 3.8.16\r\ndocker 6.0.1\r\ndocker-compose 1.29.2\r\ndocker desktop 4.16.3\r\n\r\nThanks\r\n\n", "before_files": [{"content": "import errno\nimport os\nimport select\nimport socket as pysocket\nimport struct\n\ntry:\n from ..transport import NpipeSocket\nexcept ImportError:\n NpipeSocket = type(None)\n\n\nSTDOUT = 1\nSTDERR = 2\n\n\nclass SocketError(Exception):\n pass\n\n\n# NpipeSockets have their own error types\n# pywintypes.error: (109, 'ReadFile', 'The pipe has been ended.')\nNPIPE_ENDED = 109\n\n\ndef read(socket, n=4096):\n \"\"\"\n Reads at most n bytes from socket\n \"\"\"\n\n recoverable_errors = (errno.EINTR, errno.EDEADLK, errno.EWOULDBLOCK)\n\n if not isinstance(socket, NpipeSocket):\n select.select([socket], [], [])\n\n try:\n if hasattr(socket, 'recv'):\n return socket.recv(n)\n if isinstance(socket, getattr(pysocket, 'SocketIO')):\n return socket.read(n)\n return os.read(socket.fileno(), n)\n except OSError as e:\n if e.errno not in recoverable_errors:\n raise\n except Exception as e:\n is_pipe_ended = (isinstance(socket, NpipeSocket) and\n len(e.args) > 0 and\n e.args[0] == NPIPE_ENDED)\n if is_pipe_ended:\n # npipes don't support duplex sockets, so we interpret\n # a PIPE_ENDED error as a close operation (0-length read).\n return 0\n raise\n\n\ndef read_exactly(socket, n):\n \"\"\"\n Reads exactly n bytes from socket\n Raises SocketError if there isn't enough data\n \"\"\"\n data = bytes()\n while len(data) < n:\n next_data = read(socket, n - len(data))\n if not next_data:\n raise SocketError(\"Unexpected EOF\")\n data += next_data\n return data\n\n\ndef next_frame_header(socket):\n \"\"\"\n Returns the stream and size of the next frame of data waiting to be read\n from socket, according to the protocol defined here:\n\n https://docs.docker.com/engine/api/v1.24/#attach-to-a-container\n \"\"\"\n try:\n data = read_exactly(socket, 8)\n except SocketError:\n return (-1, -1)\n\n stream, actual = struct.unpack('>BxxxL', data)\n return (stream, actual)\n\n\ndef frames_iter(socket, tty):\n \"\"\"\n Return a generator of frames read from socket. 
A frame is a tuple where\n the first item is the stream number and the second item is a chunk of data.\n\n If the tty setting is enabled, the streams are multiplexed into the stdout\n stream.\n \"\"\"\n if tty:\n return ((STDOUT, frame) for frame in frames_iter_tty(socket))\n else:\n return frames_iter_no_tty(socket)\n\n\ndef frames_iter_no_tty(socket):\n \"\"\"\n Returns a generator of data read from the socket when the tty setting is\n not enabled.\n \"\"\"\n while True:\n (stream, n) = next_frame_header(socket)\n if n < 0:\n break\n while n > 0:\n result = read(socket, n)\n if result is None:\n continue\n data_length = len(result)\n if data_length == 0:\n # We have reached EOF\n return\n n -= data_length\n yield (stream, result)\n\n\ndef frames_iter_tty(socket):\n \"\"\"\n Return a generator of data read from the socket when the tty setting is\n enabled.\n \"\"\"\n while True:\n result = read(socket)\n if len(result) == 0:\n # We have reached EOF\n return\n yield result\n\n\ndef consume_socket_output(frames, demux=False):\n \"\"\"\n Iterate through frames read from the socket and return the result.\n\n Args:\n\n demux (bool):\n If False, stdout and stderr are multiplexed, and the result is the\n concatenation of all the frames. If True, the streams are\n demultiplexed, and the result is a 2-tuple where each item is the\n concatenation of frames belonging to the same stream.\n \"\"\"\n if demux is False:\n # If the streams are multiplexed, the generator returns strings, that\n # we just need to concatenate.\n return bytes().join(frames)\n\n # If the streams are demultiplexed, the generator yields tuples\n # (stdout, stderr)\n out = [None, None]\n for frame in frames:\n # It is guaranteed that for each frame, one and only one stream\n # is not None.\n assert frame != (None, None)\n if frame[0] is not None:\n if out[0] is None:\n out[0] = frame[0]\n else:\n out[0] += frame[0]\n else:\n if out[1] is None:\n out[1] = frame[1]\n else:\n out[1] += frame[1]\n return tuple(out)\n\n\ndef demux_adaptor(stream_id, data):\n \"\"\"\n Utility to demultiplex stdout and stderr when reading frames from the\n socket.\n \"\"\"\n if stream_id == STDOUT:\n return (data, None)\n elif stream_id == STDERR:\n return (None, data)\n else:\n raise ValueError(f'{stream_id} is not a valid stream')\n", "path": "docker/utils/socket.py"}], "after_files": [{"content": "import errno\nimport os\nimport select\nimport socket as pysocket\nimport struct\n\ntry:\n from ..transport import NpipeSocket\nexcept ImportError:\n NpipeSocket = type(None)\n\n\nSTDOUT = 1\nSTDERR = 2\n\n\nclass SocketError(Exception):\n pass\n\n\n# NpipeSockets have their own error types\n# pywintypes.error: (109, 'ReadFile', 'The pipe has been ended.')\nNPIPE_ENDED = 109\n\n\ndef read(socket, n=4096):\n \"\"\"\n Reads at most n bytes from socket\n \"\"\"\n\n recoverable_errors = (errno.EINTR, errno.EDEADLK, errno.EWOULDBLOCK)\n\n if not isinstance(socket, NpipeSocket):\n select.select([socket], [], [])\n\n try:\n if hasattr(socket, 'recv'):\n return socket.recv(n)\n if isinstance(socket, getattr(pysocket, 'SocketIO')):\n return socket.read(n)\n return os.read(socket.fileno(), n)\n except OSError as e:\n if e.errno not in recoverable_errors:\n raise\n except Exception as e:\n is_pipe_ended = (isinstance(socket, NpipeSocket) and\n len(e.args) > 0 and\n e.args[0] == NPIPE_ENDED)\n if is_pipe_ended:\n # npipes don't support duplex sockets, so we interpret\n # a PIPE_ENDED error as a close operation (0-length read).\n return ''\n raise\n\n\ndef 
read_exactly(socket, n):\n \"\"\"\n Reads exactly n bytes from socket\n Raises SocketError if there isn't enough data\n \"\"\"\n data = bytes()\n while len(data) < n:\n next_data = read(socket, n - len(data))\n if not next_data:\n raise SocketError(\"Unexpected EOF\")\n data += next_data\n return data\n\n\ndef next_frame_header(socket):\n \"\"\"\n Returns the stream and size of the next frame of data waiting to be read\n from socket, according to the protocol defined here:\n\n https://docs.docker.com/engine/api/v1.24/#attach-to-a-container\n \"\"\"\n try:\n data = read_exactly(socket, 8)\n except SocketError:\n return (-1, -1)\n\n stream, actual = struct.unpack('>BxxxL', data)\n return (stream, actual)\n\n\ndef frames_iter(socket, tty):\n \"\"\"\n Return a generator of frames read from socket. A frame is a tuple where\n the first item is the stream number and the second item is a chunk of data.\n\n If the tty setting is enabled, the streams are multiplexed into the stdout\n stream.\n \"\"\"\n if tty:\n return ((STDOUT, frame) for frame in frames_iter_tty(socket))\n else:\n return frames_iter_no_tty(socket)\n\n\ndef frames_iter_no_tty(socket):\n \"\"\"\n Returns a generator of data read from the socket when the tty setting is\n not enabled.\n \"\"\"\n while True:\n (stream, n) = next_frame_header(socket)\n if n < 0:\n break\n while n > 0:\n result = read(socket, n)\n if result is None:\n continue\n data_length = len(result)\n if data_length == 0:\n # We have reached EOF\n return\n n -= data_length\n yield (stream, result)\n\n\ndef frames_iter_tty(socket):\n \"\"\"\n Return a generator of data read from the socket when the tty setting is\n enabled.\n \"\"\"\n while True:\n result = read(socket)\n if len(result) == 0:\n # We have reached EOF\n return\n yield result\n\n\ndef consume_socket_output(frames, demux=False):\n \"\"\"\n Iterate through frames read from the socket and return the result.\n\n Args:\n\n demux (bool):\n If False, stdout and stderr are multiplexed, and the result is the\n concatenation of all the frames. If True, the streams are\n demultiplexed, and the result is a 2-tuple where each item is the\n concatenation of frames belonging to the same stream.\n \"\"\"\n if demux is False:\n # If the streams are multiplexed, the generator returns strings, that\n # we just need to concatenate.\n return bytes().join(frames)\n\n # If the streams are demultiplexed, the generator yields tuples\n # (stdout, stderr)\n out = [None, None]\n for frame in frames:\n # It is guaranteed that for each frame, one and only one stream\n # is not None.\n assert frame != (None, None)\n if frame[0] is not None:\n if out[0] is None:\n out[0] = frame[0]\n else:\n out[0] += frame[0]\n else:\n if out[1] is None:\n out[1] = frame[1]\n else:\n out[1] += frame[1]\n return tuple(out)\n\n\ndef demux_adaptor(stream_id, data):\n \"\"\"\n Utility to demultiplex stdout and stderr when reading frames from the\n socket.\n \"\"\"\n if stream_id == STDOUT:\n return (data, None)\n elif stream_id == STDERR:\n return (None, data)\n else:\n raise ValueError(f'{stream_id} is not a valid stream')\n", "path": "docker/utils/socket.py"}]} | 3,071 | 90 |
gh_patches_debug_10458 | rasdani/github-patches | git_diff | interlegis__sapl-2070 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error when printing labels (description field overflowing)
City name: | Birigui
State: | São Paulo
Legislative house: | Câmara Municipal
Good morning,
Today we started using SAPL version 3.1, which was migrated yesterday. However, we have a problem printing the administrative protocol labels: the last line of the label prints the Assunto (subject) instead of the Type/Number of the linked document, producing a huge label.
Awaiting your reply,
Evandro.
--- END ISSUE ---
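
A minimal sketch of the kind of guard that keeps the label's last line from overflowing; the 60-character cap is the value used by the accepted patch further down in this record, applied here to a stand-alone helper for illustration:

```python
def truncate_identifier(ident_processo: str, limit: int = 60) -> str:
    """Cap the linked-document text so it fits the 62mm x 29mm label."""
    if len(ident_processo) <= limit:
        return ident_processo
    return ident_processo[:limit] + '...'

# A long subject line gets cut down to a single printable label line.
print(truncate_identifier("Requerimento sobre iluminacao publica no bairro central - " * 3))
```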
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sapl/relatorios/templates/pdf_etiqueta_protocolo_gerar.py`
Content:
```
1 # parameters=sessao,imagem,data,lst_protocolos,dic_cabecalho,lst_rodape,dic_filtro
2
3 """relatorio_protocolo.py
4 External method para gerar o arquivo rml da etiqueta de protocolo
5 Autor: Luciano De Fazio
6 Empresa: OpenLegis Consultoria
7 versão: 1.0
8 """
9 import time
10
11 from trml2pdf import parseString
12
13
14 def cabecalho(dic_cabecalho, imagem):
15 """Gera o codigo rml do cabecalho"""
16 tmp_data = ''
17 tmp_data += '\t\t\t\t<image x="2.1cm" y="25.7cm" width="59" height="62" file="' + \
18 imagem + '"/>\n'
19 tmp_data += '\t\t\t\t<lines>2cm 25.4cm 19cm 25.4cm</lines>\n'
20 tmp_data += '\t\t\t\t<setFont name="Helvetica-Bold" size="15"/>\n'
21 tmp_data += '\t\t\t\t<drawString x="5cm" y="27.2cm">' + \
22 dic_cabecalho['nom_casa'] + '</drawString>\n'
23 tmp_data += '\t\t\t\t<setFont name="Helvetica" size="12"/>\n'
24 tmp_data += '\t\t\t\t<drawString x="5cm" y="26.6cm">Sistema de Apoio ao Processo Legislativo</drawString>\n'
25 tmp_data += '\t\t\t\t<setFont name="Helvetica-Bold" size="13"/>\n'
26 tmp_data += '\t\t\t\t<drawString x="2.2cm" y="24.6cm">Relatório de Controle do Protocolo</drawString>\n'
27
28 return tmp_data
29
30
31 def rodape(lst_rodape):
32 """Gera o codigo rml do rodape"""
33
34 tmp_data = ''
35 tmp_data += '\t\t\t\t<lines>2cm 3.2cm 19cm 3.2cm</lines>\n'
36 tmp_data += '\t\t\t\t<setFont name="Helvetica" size="8"/>\n'
37 tmp_data += '\t\t\t\t<drawString x="2cm" y="3.3cm">' + \
38 lst_rodape[2] + '</drawString>\n'
39 tmp_data += '\t\t\t\t<drawString x="17.9cm" y="3.3cm">Página <pageNumber/></drawString>\n'
40 tmp_data += '\t\t\t\t<drawCentredString x="10.5cm" y="2.7cm">' + \
41 lst_rodape[0] + '</drawCentredString>\n'
42 tmp_data += '\t\t\t\t<drawCentredString x="10.5cm" y="2.3cm">' + \
43 lst_rodape[1] + '</drawCentredString>\n'
44
45 return tmp_data
46
47
48 def paraStyle():
49 """Gera o codigo rml que define o estilo dos paragrafos"""
50
51 tmp_data = ''
52 tmp_data += '\t<stylesheet>\n'
53 tmp_data += '\t\t<blockTableStyle id="Standard_Outline">\n'
54 tmp_data += '\t\t\t<blockAlignment value="CENTER"/>\n'
55 tmp_data += '\t\t\t<blockValign value="TOP"/>\n'
56 tmp_data += '\t\t</blockTableStyle>\n'
57 tmp_data += '\t\t<initialize>\n'
58 tmp_data += '\t\t\t<paraStyle name="all" alignment="justify"/>\n'
59 tmp_data += '\t\t</initialize>\n'
60 tmp_data += '\t\t<paraStyle name="P1" fontName="Helvetica-Bold" fontSize="5.0" leading="6" alignment="CENTER"/>\n'
61 tmp_data += '\t\t<paraStyle name="P2" fontName="Helvetica" fontSize="8.0" leading="9" alignment="CENTER"/>\n'
62 tmp_data += '\t</stylesheet>\n'
63
64 return tmp_data
65
66
67 def protocolos(lst_protocolos, dic_cabecalho):
68 """Gera o codigo rml do conteudo da pesquisa de protocolos"""
69
70 tmp_data = ''
71
72 # inicio do bloco que contem os flowables
73 tmp_data += '\t<story>\n'
74
75 for dic in lst_protocolos:
76 # condicao para a quebra de pagina
77 tmp_data += '\t\t<condPageBreak height="8mm"/>\n'
78
79 # protocolos
80 if dic['titulo'] != None:
81 tmp_data += '\t\t<para style="P1">\n'
82 tmp_data += '\t\t\t<font color="white"> </font>\n'
83 tmp_data += '\t\t</para>\n'
84 tmp_data += '\t\t<para style="P2"><b>' + \
85 dic_cabecalho['nom_casa'] + '</b></para>\n'
86 tmp_data += '\t\t<para style="P2">\n'
87 tmp_data += '\t\t\t<font color="white"> </font>\n'
88 tmp_data += '\t\t</para>\n'
89 tmp_data += '<blockTable style="Standard_Outline"><tr><td>'
90 tmp_data += '<barCode code="Code128" x="0.15cm" barHeight="0.34in" barWidth="0.018in">' + \
91 dic['titulo'] + '</barCode>\n'
92 tmp_data += '</td></tr></blockTable>'
93 tmp_data += '\t\t<para style="P2"><b>PROTOCOLO GERAL ' + \
94 dic['titulo'] + '</b></para>\n'
95 if dic['data'] != None:
96 tmp_data += '\t\t<para style="P2"><b>' + \
97 dic['data'] + '</b></para>\n'
98 tmp_data += '\t\t<para style="P2"><b>' + \
99 dic['natureza']
100 if dic['ident_processo']:
101 tmp_data += ' - ' + dic['ident_processo'] + '</b></para>\n'
102 else:
103 tmp_data += '</b></para>\n'
104
105 tmp_data += '\t</story>\n'
106 return tmp_data
107
108
109 def principal(imagem, lst_protocolos, dic_cabecalho, lst_rodape):
110 """Funcao pricipal que gera a estrutura global do arquivo rml"""
111
112 arquivoPdf = str(int(time.time() * 100)) + ".pdf"
113
114 tmp_data = ''
115 tmp_data += '<?xml version="1.0" encoding="utf-8" standalone="no" ?>\n'
116 tmp_data += '<!DOCTYPE document SYSTEM "rml_1_0.dtd">\n'
117 tmp_data += '<document filename="etiquetas.pdf">\n'
118 tmp_data += '\t<template pageSize="(62mm, 29mm)" title="Etiquetas de Protocolo" author="Luciano De Fazio" allowSplitting="20">\n'
119 tmp_data += '\t\t<pageTemplate id="first">\n'
120 tmp_data += '\t\t\t<pageGraphics>\n'
121 tmp_data += '\t\t\t<frame id="first" x1="0.03cm" y1="0.1cm" width="61mm" height="29mm"/>\n'
122 tmp_data += '\t\t\t</pageGraphics>\n'
123 tmp_data += '\t\t</pageTemplate>\n'
124 tmp_data += '\t</template>\n'
125 tmp_data += paraStyle()
126 tmp_data += protocolos(lst_protocolos, dic_cabecalho)
127 tmp_data += '</document>\n'
128 tmp_pdf = parseString(tmp_data)
129
130 return tmp_pdf
131 # if hasattr(context.temp_folder,arquivoPdf):
132 # context.temp_folder.manage_delObjects(ids=arquivoPdf)
133 # context.temp_folder.manage_addFile(arquivoPdf)
134 # arq=context.temp_folder[arquivoPdf]
135 # arq.manage_edit(title='Arquivo PDF temporário.',filedata=tmp_pdf,content_type='application/pdf')
136
137 # return "/temp_folder/"+arquivoPdf
138
139 # return
140 # principal(sessao,imagem,data,lst_protocolos,dic_cabecalho,lst_rodape,dic_filtro)
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sapl/relatorios/templates/pdf_etiqueta_protocolo_gerar.py b/sapl/relatorios/templates/pdf_etiqueta_protocolo_gerar.py
--- a/sapl/relatorios/templates/pdf_etiqueta_protocolo_gerar.py
+++ b/sapl/relatorios/templates/pdf_etiqueta_protocolo_gerar.py
@@ -98,7 +98,11 @@
tmp_data += '\t\t<para style="P2"><b>' + \
dic['natureza']
if dic['ident_processo']:
- tmp_data += ' - ' + dic['ident_processo'] + '</b></para>\n'
+ # Limita o tamanho do texto para não "explodir" as etiquetas
+ descricao = dic['ident_processo'][:60]
+ if len(dic['ident_processo']) > 60:
+ descricao += '...'
+ tmp_data += ' - ' + descricao + '</b></para>\n'
else:
tmp_data += '</b></para>\n'
| {"golden_diff": "diff --git a/sapl/relatorios/templates/pdf_etiqueta_protocolo_gerar.py b/sapl/relatorios/templates/pdf_etiqueta_protocolo_gerar.py\n--- a/sapl/relatorios/templates/pdf_etiqueta_protocolo_gerar.py\n+++ b/sapl/relatorios/templates/pdf_etiqueta_protocolo_gerar.py\n@@ -98,7 +98,11 @@\n tmp_data += '\\t\\t<para style=\"P2\"><b>' + \\\n dic['natureza']\n if dic['ident_processo']:\n- tmp_data += ' - ' + dic['ident_processo'] + '</b></para>\\n'\n+ # Limita o tamanho do texto para n\u00e3o \"explodir\" as etiquetas\n+ descricao = dic['ident_processo'][:60]\n+ if len(dic['ident_processo']) > 60:\n+ descricao += '...'\n+ tmp_data += ' - ' + descricao + '</b></para>\\n'\n else:\n tmp_data += '</b></para>\\n'\n", "issue": "Erro ao imprimir etiquetas (campo descri\u00e7\u00e3o estourando)\nNome da Cidade: | Birigui\r\nEstado: | S\u00e3o Paulo\r\nCasa Legislativa: | C\u00e2mara Municipal\r\n\r\nBom dia,\r\nCome\u00e7amos hoje a utilizar a vers\u00e3o 3.1 do SAPL que foi migrado ontem. Mas estamos com um problema na impress\u00e3o das etiquetas do protocolo administrativo. O problema est\u00e1 na \u00faltima linha da etiqueta, ela imprimi o Assunto ao inv\u00e9s do Tipo/N\u00famero do documento vinculado, gerando uma etiqueta enorme. \r\nNo aguardo,\r\nEvandro.\n", "before_files": [{"content": "# parameters=sessao,imagem,data,lst_protocolos,dic_cabecalho,lst_rodape,dic_filtro\n\n\"\"\"relatorio_protocolo.py\n External method para gerar o arquivo rml da etiqueta de protocolo\n Autor: Luciano De Fazio\n Empresa: OpenLegis Consultoria\n vers\u00c3\u00a3o: 1.0\n\"\"\"\nimport time\n\nfrom trml2pdf import parseString\n\n\ndef cabecalho(dic_cabecalho, imagem):\n \"\"\"Gera o codigo rml do cabecalho\"\"\"\n tmp_data = ''\n tmp_data += '\\t\\t\\t\\t<image x=\"2.1cm\" y=\"25.7cm\" width=\"59\" height=\"62\" file=\"' + \\\n imagem + '\"/>\\n'\n tmp_data += '\\t\\t\\t\\t<lines>2cm 25.4cm 19cm 25.4cm</lines>\\n'\n tmp_data += '\\t\\t\\t\\t<setFont name=\"Helvetica-Bold\" size=\"15\"/>\\n'\n tmp_data += '\\t\\t\\t\\t<drawString x=\"5cm\" y=\"27.2cm\">' + \\\n dic_cabecalho['nom_casa'] + '</drawString>\\n'\n tmp_data += '\\t\\t\\t\\t<setFont name=\"Helvetica\" size=\"12\"/>\\n'\n tmp_data += '\\t\\t\\t\\t<drawString x=\"5cm\" y=\"26.6cm\">Sistema de Apoio ao Processo Legislativo</drawString>\\n'\n tmp_data += '\\t\\t\\t\\t<setFont name=\"Helvetica-Bold\" size=\"13\"/>\\n'\n tmp_data += '\\t\\t\\t\\t<drawString x=\"2.2cm\" y=\"24.6cm\">Relat\u00c3\u00b3rio de Controle do Protocolo</drawString>\\n'\n\n return tmp_data\n\n\ndef rodape(lst_rodape):\n \"\"\"Gera o codigo rml do rodape\"\"\"\n\n tmp_data = ''\n tmp_data += '\\t\\t\\t\\t<lines>2cm 3.2cm 19cm 3.2cm</lines>\\n'\n tmp_data += '\\t\\t\\t\\t<setFont name=\"Helvetica\" size=\"8\"/>\\n'\n tmp_data += '\\t\\t\\t\\t<drawString x=\"2cm\" y=\"3.3cm\">' + \\\n lst_rodape[2] + '</drawString>\\n'\n tmp_data += '\\t\\t\\t\\t<drawString x=\"17.9cm\" y=\"3.3cm\">P\u00c3\u00a1gina <pageNumber/></drawString>\\n'\n tmp_data += '\\t\\t\\t\\t<drawCentredString x=\"10.5cm\" y=\"2.7cm\">' + \\\n lst_rodape[0] + '</drawCentredString>\\n'\n tmp_data += '\\t\\t\\t\\t<drawCentredString x=\"10.5cm\" y=\"2.3cm\">' + \\\n lst_rodape[1] + '</drawCentredString>\\n'\n\n return tmp_data\n\n\ndef paraStyle():\n \"\"\"Gera o codigo rml que define o estilo dos paragrafos\"\"\"\n\n tmp_data = ''\n tmp_data += '\\t<stylesheet>\\n'\n tmp_data += '\\t\\t<blockTableStyle id=\"Standard_Outline\">\\n'\n tmp_data += '\\t\\t\\t<blockAlignment value=\"CENTER\"/>\\n'\n tmp_data += 
'\\t\\t\\t<blockValign value=\"TOP\"/>\\n'\n tmp_data += '\\t\\t</blockTableStyle>\\n'\n tmp_data += '\\t\\t<initialize>\\n'\n tmp_data += '\\t\\t\\t<paraStyle name=\"all\" alignment=\"justify\"/>\\n'\n tmp_data += '\\t\\t</initialize>\\n'\n tmp_data += '\\t\\t<paraStyle name=\"P1\" fontName=\"Helvetica-Bold\" fontSize=\"5.0\" leading=\"6\" alignment=\"CENTER\"/>\\n'\n tmp_data += '\\t\\t<paraStyle name=\"P2\" fontName=\"Helvetica\" fontSize=\"8.0\" leading=\"9\" alignment=\"CENTER\"/>\\n'\n tmp_data += '\\t</stylesheet>\\n'\n\n return tmp_data\n\n\ndef protocolos(lst_protocolos, dic_cabecalho):\n \"\"\"Gera o codigo rml do conteudo da pesquisa de protocolos\"\"\"\n\n tmp_data = ''\n\n # inicio do bloco que contem os flowables\n tmp_data += '\\t<story>\\n'\n\n for dic in lst_protocolos:\n # condicao para a quebra de pagina\n tmp_data += '\\t\\t<condPageBreak height=\"8mm\"/>\\n'\n\n # protocolos\n if dic['titulo'] != None:\n tmp_data += '\\t\\t<para style=\"P1\">\\n'\n tmp_data += '\\t\\t\\t<font color=\"white\"> </font>\\n'\n tmp_data += '\\t\\t</para>\\n'\n tmp_data += '\\t\\t<para style=\"P2\"><b>' + \\\n dic_cabecalho['nom_casa'] + '</b></para>\\n'\n tmp_data += '\\t\\t<para style=\"P2\">\\n'\n tmp_data += '\\t\\t\\t<font color=\"white\"> </font>\\n'\n tmp_data += '\\t\\t</para>\\n'\n tmp_data += '<blockTable style=\"Standard_Outline\"><tr><td>'\n tmp_data += '<barCode code=\"Code128\" x=\"0.15cm\" barHeight=\"0.34in\" barWidth=\"0.018in\">' + \\\n dic['titulo'] + '</barCode>\\n'\n tmp_data += '</td></tr></blockTable>'\n tmp_data += '\\t\\t<para style=\"P2\"><b>PROTOCOLO GERAL ' + \\\n dic['titulo'] + '</b></para>\\n'\n if dic['data'] != None:\n tmp_data += '\\t\\t<para style=\"P2\"><b>' + \\\n dic['data'] + '</b></para>\\n'\n tmp_data += '\\t\\t<para style=\"P2\"><b>' + \\\n dic['natureza']\n if dic['ident_processo']:\n tmp_data += ' - ' + dic['ident_processo'] + '</b></para>\\n'\n else:\n tmp_data += '</b></para>\\n'\n\n tmp_data += '\\t</story>\\n'\n return tmp_data\n\n\ndef principal(imagem, lst_protocolos, dic_cabecalho, lst_rodape):\n \"\"\"Funcao pricipal que gera a estrutura global do arquivo rml\"\"\"\n\n arquivoPdf = str(int(time.time() * 100)) + \".pdf\"\n\n tmp_data = ''\n tmp_data += '<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"no\" ?>\\n'\n tmp_data += '<!DOCTYPE document SYSTEM \"rml_1_0.dtd\">\\n'\n tmp_data += '<document filename=\"etiquetas.pdf\">\\n'\n tmp_data += '\\t<template pageSize=\"(62mm, 29mm)\" title=\"Etiquetas de Protocolo\" author=\"Luciano De Fazio\" allowSplitting=\"20\">\\n'\n tmp_data += '\\t\\t<pageTemplate id=\"first\">\\n'\n tmp_data += '\\t\\t\\t<pageGraphics>\\n'\n tmp_data += '\\t\\t\\t<frame id=\"first\" x1=\"0.03cm\" y1=\"0.1cm\" width=\"61mm\" height=\"29mm\"/>\\n'\n tmp_data += '\\t\\t\\t</pageGraphics>\\n'\n tmp_data += '\\t\\t</pageTemplate>\\n'\n tmp_data += '\\t</template>\\n'\n tmp_data += paraStyle()\n tmp_data += protocolos(lst_protocolos, dic_cabecalho)\n tmp_data += '</document>\\n'\n tmp_pdf = parseString(tmp_data)\n\n return tmp_pdf\n# if hasattr(context.temp_folder,arquivoPdf):\n# context.temp_folder.manage_delObjects(ids=arquivoPdf)\n# context.temp_folder.manage_addFile(arquivoPdf)\n# arq=context.temp_folder[arquivoPdf]\n# arq.manage_edit(title='Arquivo PDF tempor\u00c3\u00a1rio.',filedata=tmp_pdf,content_type='application/pdf')\n\n# return \"/temp_folder/\"+arquivoPdf\n\n# return\n# principal(sessao,imagem,data,lst_protocolos,dic_cabecalho,lst_rodape,dic_filtro)\n", "path": 
"sapl/relatorios/templates/pdf_etiqueta_protocolo_gerar.py"}], "after_files": [{"content": "# parameters=sessao,imagem,data,lst_protocolos,dic_cabecalho,lst_rodape,dic_filtro\n\n\"\"\"relatorio_protocolo.py\n External method para gerar o arquivo rml da etiqueta de protocolo\n Autor: Luciano De Fazio\n Empresa: OpenLegis Consultoria\n vers\u00c3\u00a3o: 1.0\n\"\"\"\nimport time\n\nfrom trml2pdf import parseString\n\n\ndef cabecalho(dic_cabecalho, imagem):\n \"\"\"Gera o codigo rml do cabecalho\"\"\"\n tmp_data = ''\n tmp_data += '\\t\\t\\t\\t<image x=\"2.1cm\" y=\"25.7cm\" width=\"59\" height=\"62\" file=\"' + \\\n imagem + '\"/>\\n'\n tmp_data += '\\t\\t\\t\\t<lines>2cm 25.4cm 19cm 25.4cm</lines>\\n'\n tmp_data += '\\t\\t\\t\\t<setFont name=\"Helvetica-Bold\" size=\"15\"/>\\n'\n tmp_data += '\\t\\t\\t\\t<drawString x=\"5cm\" y=\"27.2cm\">' + \\\n dic_cabecalho['nom_casa'] + '</drawString>\\n'\n tmp_data += '\\t\\t\\t\\t<setFont name=\"Helvetica\" size=\"12\"/>\\n'\n tmp_data += '\\t\\t\\t\\t<drawString x=\"5cm\" y=\"26.6cm\">Sistema de Apoio ao Processo Legislativo</drawString>\\n'\n tmp_data += '\\t\\t\\t\\t<setFont name=\"Helvetica-Bold\" size=\"13\"/>\\n'\n tmp_data += '\\t\\t\\t\\t<drawString x=\"2.2cm\" y=\"24.6cm\">Relat\u00c3\u00b3rio de Controle do Protocolo</drawString>\\n'\n\n return tmp_data\n\n\ndef rodape(lst_rodape):\n \"\"\"Gera o codigo rml do rodape\"\"\"\n\n tmp_data = ''\n tmp_data += '\\t\\t\\t\\t<lines>2cm 3.2cm 19cm 3.2cm</lines>\\n'\n tmp_data += '\\t\\t\\t\\t<setFont name=\"Helvetica\" size=\"8\"/>\\n'\n tmp_data += '\\t\\t\\t\\t<drawString x=\"2cm\" y=\"3.3cm\">' + \\\n lst_rodape[2] + '</drawString>\\n'\n tmp_data += '\\t\\t\\t\\t<drawString x=\"17.9cm\" y=\"3.3cm\">P\u00c3\u00a1gina <pageNumber/></drawString>\\n'\n tmp_data += '\\t\\t\\t\\t<drawCentredString x=\"10.5cm\" y=\"2.7cm\">' + \\\n lst_rodape[0] + '</drawCentredString>\\n'\n tmp_data += '\\t\\t\\t\\t<drawCentredString x=\"10.5cm\" y=\"2.3cm\">' + \\\n lst_rodape[1] + '</drawCentredString>\\n'\n\n return tmp_data\n\n\ndef paraStyle():\n \"\"\"Gera o codigo rml que define o estilo dos paragrafos\"\"\"\n\n tmp_data = ''\n tmp_data += '\\t<stylesheet>\\n'\n tmp_data += '\\t\\t<blockTableStyle id=\"Standard_Outline\">\\n'\n tmp_data += '\\t\\t\\t<blockAlignment value=\"CENTER\"/>\\n'\n tmp_data += '\\t\\t\\t<blockValign value=\"TOP\"/>\\n'\n tmp_data += '\\t\\t</blockTableStyle>\\n'\n tmp_data += '\\t\\t<initialize>\\n'\n tmp_data += '\\t\\t\\t<paraStyle name=\"all\" alignment=\"justify\"/>\\n'\n tmp_data += '\\t\\t</initialize>\\n'\n tmp_data += '\\t\\t<paraStyle name=\"P1\" fontName=\"Helvetica-Bold\" fontSize=\"5.0\" leading=\"6\" alignment=\"CENTER\"/>\\n'\n tmp_data += '\\t\\t<paraStyle name=\"P2\" fontName=\"Helvetica\" fontSize=\"8.0\" leading=\"9\" alignment=\"CENTER\"/>\\n'\n tmp_data += '\\t</stylesheet>\\n'\n\n return tmp_data\n\n\ndef protocolos(lst_protocolos, dic_cabecalho):\n \"\"\"Gera o codigo rml do conteudo da pesquisa de protocolos\"\"\"\n\n tmp_data = ''\n\n # inicio do bloco que contem os flowables\n tmp_data += '\\t<story>\\n'\n\n for dic in lst_protocolos:\n # condicao para a quebra de pagina\n tmp_data += '\\t\\t<condPageBreak height=\"8mm\"/>\\n'\n\n # protocolos\n if dic['titulo'] != None:\n tmp_data += '\\t\\t<para style=\"P1\">\\n'\n tmp_data += '\\t\\t\\t<font color=\"white\"> </font>\\n'\n tmp_data += '\\t\\t</para>\\n'\n tmp_data += '\\t\\t<para style=\"P2\"><b>' + \\\n dic_cabecalho['nom_casa'] + '</b></para>\\n'\n tmp_data += '\\t\\t<para style=\"P2\">\\n'\n tmp_data += 
'\\t\\t\\t<font color=\"white\"> </font>\\n'\n tmp_data += '\\t\\t</para>\\n'\n tmp_data += '<blockTable style=\"Standard_Outline\"><tr><td>'\n tmp_data += '<barCode code=\"Code128\" x=\"0.15cm\" barHeight=\"0.34in\" barWidth=\"0.018in\">' + \\\n dic['titulo'] + '</barCode>\\n'\n tmp_data += '</td></tr></blockTable>'\n tmp_data += '\\t\\t<para style=\"P2\"><b>PROTOCOLO GERAL ' + \\\n dic['titulo'] + '</b></para>\\n'\n if dic['data'] != None:\n tmp_data += '\\t\\t<para style=\"P2\"><b>' + \\\n dic['data'] + '</b></para>\\n'\n tmp_data += '\\t\\t<para style=\"P2\"><b>' + \\\n dic['natureza']\n if dic['ident_processo']:\n # Limita o tamanho do texto para n\u00e3o \"explodir\" as etiquetas\n descricao = dic['ident_processo'][:60]\n if len(dic['ident_processo']) > 60:\n descricao += '...'\n tmp_data += ' - ' + descricao + '</b></para>\\n'\n else:\n tmp_data += '</b></para>\\n'\n\n tmp_data += '\\t</story>\\n'\n return tmp_data\n\n\ndef principal(imagem, lst_protocolos, dic_cabecalho, lst_rodape):\n \"\"\"Funcao pricipal que gera a estrutura global do arquivo rml\"\"\"\n\n arquivoPdf = str(int(time.time() * 100)) + \".pdf\"\n\n tmp_data = ''\n tmp_data += '<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"no\" ?>\\n'\n tmp_data += '<!DOCTYPE document SYSTEM \"rml_1_0.dtd\">\\n'\n tmp_data += '<document filename=\"etiquetas.pdf\">\\n'\n tmp_data += '\\t<template pageSize=\"(62mm, 29mm)\" title=\"Etiquetas de Protocolo\" author=\"Luciano De Fazio\" allowSplitting=\"20\">\\n'\n tmp_data += '\\t\\t<pageTemplate id=\"first\">\\n'\n tmp_data += '\\t\\t\\t<pageGraphics>\\n'\n tmp_data += '\\t\\t\\t<frame id=\"first\" x1=\"0.03cm\" y1=\"0.1cm\" width=\"61mm\" height=\"29mm\"/>\\n'\n tmp_data += '\\t\\t\\t</pageGraphics>\\n'\n tmp_data += '\\t\\t</pageTemplate>\\n'\n tmp_data += '\\t</template>\\n'\n tmp_data += paraStyle()\n tmp_data += protocolos(lst_protocolos, dic_cabecalho)\n tmp_data += '</document>\\n'\n tmp_pdf = parseString(tmp_data)\n\n return tmp_pdf\n# if hasattr(context.temp_folder,arquivoPdf):\n# context.temp_folder.manage_delObjects(ids=arquivoPdf)\n# context.temp_folder.manage_addFile(arquivoPdf)\n# arq=context.temp_folder[arquivoPdf]\n# arq.manage_edit(title='Arquivo PDF tempor\u00c3\u00a1rio.',filedata=tmp_pdf,content_type='application/pdf')\n\n# return \"/temp_folder/\"+arquivoPdf\n\n# return\n# principal(sessao,imagem,data,lst_protocolos,dic_cabecalho,lst_rodape,dic_filtro)\n", "path": "sapl/relatorios/templates/pdf_etiqueta_protocolo_gerar.py"}]} | 2,596 | 232 |
gh_patches_debug_61039 | rasdani/github-patches | git_diff | google-research__text-to-text-transfer-transformer-327 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Issue Running T5 in colab TPU
Hi Team,
I was trying to do a pre-training of T5 from scratch on Colab. I found that if I install t5 using (pip install t5[gcp]) and then try to connect and execute `tf.tpu.experimental.initialize_tpu_system(tpu)`, I get the error below.
`InvalidArgumentError: NodeDef expected inputs 'string' do not match 0 inputs specified; Op<name=_Send; signature=tensor:T -> ; attr=T:type; attr=tensor_name:string; attr=send_device:string; attr=send_device_incarnation:int; attr=recv_device:string; attr=client_terminated:bool,default=false; is_stateful=true>; NodeDef: {{node _Send}}`
If I install/upgrade tensorflow, this gets resolved; however, the import of t5 then does not work, as shown below.
`
import t5`
`NotFoundError: /usr/local/lib/python3.6/dist-packages/tensorflow_text/python/metrics/_text_similarity_metric_ops.so: undefined symbol: _ZN10tensorflow14kernel_factory17OpKernelRegistrar12InitInternalEPKNS_9KernelDefEN4absl11string_viewESt10unique_ptrINS0_15OpKernelFactoryESt14default_deleteIS8_EE`
Please let me know if there is a way to resolve this.
Thanks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2020 The T5 Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Install T5."""
16
17 import os
18 import sys
19 import setuptools
20
21 # To enable importing version.py directly, we add its path to sys.path.
22 version_path = os.path.join(os.path.dirname(__file__), 't5')
23 sys.path.append(version_path)
24 from version import __version__ # pylint: disable=g-import-not-at-top
25
26 # Get the long description from the README file.
27 with open('README.md') as fp:
28 _LONG_DESCRIPTION = fp.read()
29
30 setuptools.setup(
31 name='t5',
32 version=__version__,
33 description='Text-to-text transfer transformer',
34 long_description=_LONG_DESCRIPTION,
35 long_description_content_type='text/markdown',
36 author='Google Inc.',
37 author_email='[email protected]',
38 url='http://github.com/google-research/text-to-text-transfer-transformer',
39 license='Apache 2.0',
40 packages=setuptools.find_packages(),
41 package_data={
42 '': ['*.gin'],
43 },
44 scripts=[],
45 install_requires=[
46 'absl-py',
47 'babel',
48 'gin-config',
49 'mesh-tensorflow[transformer]>=0.1.13',
50 'nltk',
51 'numpy',
52 'pandas',
53 'rouge-score',
54 'sacrebleu',
55 'scikit-learn',
56 'scipy',
57 'sentencepiece',
58 'six>=1.14', # TODO(adarob): Remove once rouge-score is updated.
59 'tensorflow-text<2.3', # TODO(adarob): Unpin once #320 is resolved.
60 'tfds-nightly',
61 'torch',
62 'transformers>=2.7.0',
63 ],
64 extras_require={
65 'gcp': ['gevent', 'google-api-python-client', 'google-compute-engine',
66 'google-cloud-storage', 'oauth2client'],
67 'cache-tasks': ['apache-beam'],
68 'test': ['pytest'],
69 },
70 entry_points={
71 'console_scripts': [
72 't5_mesh_transformer = t5.models.mesh_transformer_main:console_entry_point',
73 't5_cache_tasks = t5.data.cache_tasks_main:console_entry_point'
74 ],
75 },
76 classifiers=[
77 'Development Status :: 4 - Beta',
78 'Intended Audience :: Developers',
79 'Intended Audience :: Science/Research',
80 'License :: OSI Approved :: Apache Software License',
81 'Topic :: Scientific/Engineering :: Artificial Intelligence',
82 ],
83 keywords='text nlp machinelearning',
84 )
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -56,7 +56,7 @@
'scipy',
'sentencepiece',
'six>=1.14', # TODO(adarob): Remove once rouge-score is updated.
- 'tensorflow-text<2.3', # TODO(adarob): Unpin once #320 is resolved.
+ 'tensorflow-text',
'tfds-nightly',
'torch',
'transformers>=2.7.0',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -56,7 +56,7 @@\n 'scipy',\n 'sentencepiece',\n 'six>=1.14', # TODO(adarob): Remove once rouge-score is updated.\n- 'tensorflow-text<2.3', # TODO(adarob): Unpin once #320 is resolved.\n+ 'tensorflow-text',\n 'tfds-nightly',\n 'torch',\n 'transformers>=2.7.0',\n", "issue": "Issue Running T5 in colab TPU\nHi Team,\r\n\r\nI was trying to do a pre training of T5 from scratch on colab. I could see if i install t5 using (pip install t5[gcp]), and tried to connect to execute ` tf.tpu.experimental.initialize_tpu_system(tpu)`, getting below error.\r\n\r\n`InvalidArgumentError: NodeDef expected inputs 'string' do not match 0 inputs specified; Op<name=_Send; signature=tensor:T -> ; attr=T:type; attr=tensor_name:string; attr=send_device:string; attr=send_device_incarnation:int; attr=recv_device:string; attr=client_terminated:bool,default=false; is_stateful=true>; NodeDef: {{node _Send}}`\r\n\r\nIf install/ upgrade tensorflow, it gets resolved, however import of t5 does not work as below.\r\n`\r\nimport t5`\r\n\r\n`NotFoundError: /usr/local/lib/python3.6/dist-packages/tensorflow_text/python/metrics/_text_similarity_metric_ops.so: undefined symbol: _ZN10tensorflow14kernel_factory17OpKernelRegistrar12InitInternalEPKNS_9KernelDefEN4absl11string_viewESt10unique_ptrINS0_15OpKernelFactoryESt14default_deleteIS8_EE`\r\n\r\nPlease let me know how if there is a way to resolve this.\r\nThanks.\r\n\n", "before_files": [{"content": "# Copyright 2020 The T5 Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Install T5.\"\"\"\n\nimport os\nimport sys\nimport setuptools\n\n# To enable importing version.py directly, we add its path to sys.path.\nversion_path = os.path.join(os.path.dirname(__file__), 't5')\nsys.path.append(version_path)\nfrom version import __version__ # pylint: disable=g-import-not-at-top\n\n# Get the long description from the README file.\nwith open('README.md') as fp:\n _LONG_DESCRIPTION = fp.read()\n\nsetuptools.setup(\n name='t5',\n version=__version__,\n description='Text-to-text transfer transformer',\n long_description=_LONG_DESCRIPTION,\n long_description_content_type='text/markdown',\n author='Google Inc.',\n author_email='[email protected]',\n url='http://github.com/google-research/text-to-text-transfer-transformer',\n license='Apache 2.0',\n packages=setuptools.find_packages(),\n package_data={\n '': ['*.gin'],\n },\n scripts=[],\n install_requires=[\n 'absl-py',\n 'babel',\n 'gin-config',\n 'mesh-tensorflow[transformer]>=0.1.13',\n 'nltk',\n 'numpy',\n 'pandas',\n 'rouge-score',\n 'sacrebleu',\n 'scikit-learn',\n 'scipy',\n 'sentencepiece',\n 'six>=1.14', # TODO(adarob): Remove once rouge-score is updated.\n 'tensorflow-text<2.3', # TODO(adarob): Unpin once #320 is resolved.\n 'tfds-nightly',\n 'torch',\n 'transformers>=2.7.0',\n ],\n extras_require={\n 'gcp': ['gevent', 'google-api-python-client', 'google-compute-engine',\n 'google-cloud-storage', 'oauth2client'],\n 'cache-tasks': 
['apache-beam'],\n 'test': ['pytest'],\n },\n entry_points={\n 'console_scripts': [\n 't5_mesh_transformer = t5.models.mesh_transformer_main:console_entry_point',\n 't5_cache_tasks = t5.data.cache_tasks_main:console_entry_point'\n ],\n },\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n ],\n keywords='text nlp machinelearning',\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2020 The T5 Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Install T5.\"\"\"\n\nimport os\nimport sys\nimport setuptools\n\n# To enable importing version.py directly, we add its path to sys.path.\nversion_path = os.path.join(os.path.dirname(__file__), 't5')\nsys.path.append(version_path)\nfrom version import __version__ # pylint: disable=g-import-not-at-top\n\n# Get the long description from the README file.\nwith open('README.md') as fp:\n _LONG_DESCRIPTION = fp.read()\n\nsetuptools.setup(\n name='t5',\n version=__version__,\n description='Text-to-text transfer transformer',\n long_description=_LONG_DESCRIPTION,\n long_description_content_type='text/markdown',\n author='Google Inc.',\n author_email='[email protected]',\n url='http://github.com/google-research/text-to-text-transfer-transformer',\n license='Apache 2.0',\n packages=setuptools.find_packages(),\n package_data={\n '': ['*.gin'],\n },\n scripts=[],\n install_requires=[\n 'absl-py',\n 'babel',\n 'gin-config',\n 'mesh-tensorflow[transformer]>=0.1.13',\n 'nltk',\n 'numpy',\n 'pandas',\n 'rouge-score',\n 'sacrebleu',\n 'scikit-learn',\n 'scipy',\n 'sentencepiece',\n 'six>=1.14', # TODO(adarob): Remove once rouge-score is updated.\n 'tensorflow-text',\n 'tfds-nightly',\n 'torch',\n 'transformers>=2.7.0',\n ],\n extras_require={\n 'gcp': ['gevent', 'google-api-python-client', 'google-compute-engine',\n 'google-cloud-storage', 'oauth2client'],\n 'cache-tasks': ['apache-beam'],\n 'test': ['pytest'],\n },\n entry_points={\n 'console_scripts': [\n 't5_mesh_transformer = t5.models.mesh_transformer_main:console_entry_point',\n 't5_cache_tasks = t5.data.cache_tasks_main:console_entry_point'\n ],\n },\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n ],\n keywords='text nlp machinelearning',\n)\n", "path": "setup.py"}]} | 1,376 | 120 |
gh_patches_debug_12711 | rasdani/github-patches | git_diff | conda__conda-6221 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add flag to build environment.yml without build strings
https://gitter.im/conda/conda?at=59ef54ebe44c43700a70e9a4
https://twitter.com/drvinceknight/status/922837449092542464?ref_src=twsrc%5Etfw
> Due to hashes of packages being introduced in `envinronment.yml` I'm getting all sorts of issues with building envs from file. (Very new problem)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_env/env.py`
Content:
```
1 from __future__ import absolute_import, print_function
2
3 import os
4 from collections import OrderedDict
5 from conda.base.context import context
6 from conda.cli import common # TODO: this should never have to import form conda.cli
7 from conda.core.linked_data import linked
8 from copy import copy
9 from itertools import chain
10
11 from . import compat, exceptions, yaml
12 from .pip_util import add_pip_installed
13
14 def load_from_directory(directory):
15 """Load and return an ``Environment`` from a given ``directory``"""
16 files = ['environment.yml', 'environment.yaml']
17 while True:
18 for f in files:
19 try:
20 return from_file(os.path.join(directory, f))
21 except exceptions.EnvironmentFileNotFound:
22 pass
23 old_directory = directory
24 directory = os.path.dirname(directory)
25 if directory == old_directory:
26 break
27 raise exceptions.EnvironmentFileNotFound(files[0])
28
29
30 # TODO This should lean more on conda instead of divining it from the outside
31 # TODO tests!!!
32 def from_environment(name, prefix, no_builds=False, ignore_channels=False):
33 """
34 Get environment object from prefix
35 Args:
36 name: The name of environment
37 prefix: The path of prefix
38 no_builds: Whether has build requirement
39 ignore_channels: whether ignore_channels
40
41 Returns: Environment object
42 """
43 installed = linked(prefix, ignore_channels=ignore_channels)
44 conda_pkgs = copy(installed)
45 # json=True hides the output, data is added to installed
46 add_pip_installed(prefix, installed, json=True)
47
48 pip_pkgs = sorted(installed - conda_pkgs)
49
50 if no_builds:
51 dependencies = ['='.join(a.quad[0:3]) for a in sorted(conda_pkgs)]
52 else:
53 dependencies = ['='.join(a.quad[0:3]) for a in sorted(conda_pkgs)]
54 if len(pip_pkgs) > 0:
55 dependencies.append({'pip': ['=='.join(a.rsplit('-', 2)[:2]) for a in pip_pkgs]})
56 # conda uses ruamel_yaml which returns a ruamel_yaml.comments.CommentedSeq
57 # this doesn't dump correctly using pyyaml
58 channels = list(context.channels)
59 if not ignore_channels:
60 for dist in conda_pkgs:
61 if dist.channel not in channels:
62 channels.insert(0, dist.channel)
63 return Environment(name=name, dependencies=dependencies, channels=channels, prefix=prefix)
64
65
66 def from_yaml(yamlstr, **kwargs):
67 """Load and return a ``Environment`` from a given ``yaml string``"""
68 data = yaml.load(yamlstr)
69 if kwargs is not None:
70 for key, value in kwargs.items():
71 data[key] = value
72 return Environment(**data)
73
74
75 def from_file(filename):
76 if not os.path.exists(filename):
77 raise exceptions.EnvironmentFileNotFound(filename)
78 with open(filename, 'r') as fp:
79 yamlstr = fp.read()
80 return from_yaml(yamlstr, filename=filename)
81
82
83 # TODO test explicitly
84 class Dependencies(OrderedDict):
85 def __init__(self, raw, *args, **kwargs):
86 super(Dependencies, self).__init__(*args, **kwargs)
87 self.raw = raw
88 self.parse()
89
90 def parse(self):
91 if not self.raw:
92 return
93
94 self.update({'conda': []})
95
96 for line in self.raw:
97 if isinstance(line, dict):
98 self.update(line)
99 else:
100 self['conda'].append(common.arg2spec(line))
101
102 # TODO only append when it's not already present
103 def add(self, package_name):
104 self.raw.append(package_name)
105 self.parse()
106
107
108 def unique(seq, key=None):
109 """ Return only unique elements of a sequence
110 >>> tuple(unique((1, 2, 3)))
111 (1, 2, 3)
112 >>> tuple(unique((1, 2, 1, 3)))
113 (1, 2, 3)
114 Uniqueness can be defined by key keyword
115 >>> tuple(unique(['cat', 'mouse', 'dog', 'hen'], key=len))
116 ('cat', 'mouse')
117 """
118 seen = set()
119 seen_add = seen.add
120 if key is None:
121 for item in seq:
122 if item not in seen:
123 seen_add(item)
124 yield item
125 else: # calculate key
126 for item in seq:
127 val = key(item)
128 if val not in seen:
129 seen_add(val)
130 yield item
131
132
133 class Environment(object):
134 def __init__(self, name=None, filename=None, channels=None,
135 dependencies=None, prefix=None):
136 self.name = name
137 self.filename = filename
138 self.prefix = prefix
139 self.dependencies = Dependencies(dependencies)
140
141 if channels is None:
142 channels = []
143 self.channels = channels
144
145 def add_channels(self, channels):
146 self.channels = list(unique(chain.from_iterable((channels, self.channels))))
147
148 def remove_channels(self):
149 self.channels = []
150
151 def to_dict(self):
152 d = yaml.dict([('name', self.name)])
153 if self.channels:
154 d['channels'] = self.channels
155 if self.dependencies:
156 d['dependencies'] = self.dependencies.raw
157 if self.prefix:
158 d['prefix'] = self.prefix
159 return d
160
161 def to_yaml(self, stream=None):
162 d = self.to_dict()
163 out = compat.u(yaml.dump(d, default_flow_style=False))
164 if stream is None:
165 return out
166 stream.write(compat.b(out, encoding="utf-8"))
167
168 def save(self):
169 with open(self.filename, "wb") as fp:
170 self.to_yaml(stream=fp)
171
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda_env/env.py b/conda_env/env.py
--- a/conda_env/env.py
+++ b/conda_env/env.py
@@ -48,9 +48,9 @@
pip_pkgs = sorted(installed - conda_pkgs)
if no_builds:
- dependencies = ['='.join(a.quad[0:3]) for a in sorted(conda_pkgs)]
+ dependencies = ['='.join((a.name, a.version)) for a in sorted(conda_pkgs)]
else:
- dependencies = ['='.join(a.quad[0:3]) for a in sorted(conda_pkgs)]
+ dependencies = ['='.join((a.name, a.version, a.build)) for a in sorted(conda_pkgs)]
if len(pip_pkgs) > 0:
dependencies.append({'pip': ['=='.join(a.rsplit('-', 2)[:2]) for a in pip_pkgs]})
# conda uses ruamel_yaml which returns a ruamel_yaml.comments.CommentedSeq
| {"golden_diff": "diff --git a/conda_env/env.py b/conda_env/env.py\n--- a/conda_env/env.py\n+++ b/conda_env/env.py\n@@ -48,9 +48,9 @@\n pip_pkgs = sorted(installed - conda_pkgs)\n \n if no_builds:\n- dependencies = ['='.join(a.quad[0:3]) for a in sorted(conda_pkgs)]\n+ dependencies = ['='.join((a.name, a.version)) for a in sorted(conda_pkgs)]\n else:\n- dependencies = ['='.join(a.quad[0:3]) for a in sorted(conda_pkgs)]\n+ dependencies = ['='.join((a.name, a.version, a.build)) for a in sorted(conda_pkgs)]\n if len(pip_pkgs) > 0:\n dependencies.append({'pip': ['=='.join(a.rsplit('-', 2)[:2]) for a in pip_pkgs]})\n # conda uses ruamel_yaml which returns a ruamel_yaml.comments.CommentedSeq\n", "issue": "Add flag to build environment.yml without build strings\nhttps://gitter.im/conda/conda?at=59ef54ebe44c43700a70e9a4\r\nhttps://twitter.com/drvinceknight/status/922837449092542464?ref_src=twsrc%5Etfw\r\n\r\n> Due to hashes of packages being introduced in `envinronment.yml` I'm getting all sorts of issues with building envs from file. (Very new problem)\n", "before_files": [{"content": "from __future__ import absolute_import, print_function\n\nimport os\nfrom collections import OrderedDict\nfrom conda.base.context import context\nfrom conda.cli import common # TODO: this should never have to import form conda.cli\nfrom conda.core.linked_data import linked\nfrom copy import copy\nfrom itertools import chain\n\nfrom . import compat, exceptions, yaml\nfrom .pip_util import add_pip_installed\n\ndef load_from_directory(directory):\n \"\"\"Load and return an ``Environment`` from a given ``directory``\"\"\"\n files = ['environment.yml', 'environment.yaml']\n while True:\n for f in files:\n try:\n return from_file(os.path.join(directory, f))\n except exceptions.EnvironmentFileNotFound:\n pass\n old_directory = directory\n directory = os.path.dirname(directory)\n if directory == old_directory:\n break\n raise exceptions.EnvironmentFileNotFound(files[0])\n\n\n# TODO This should lean more on conda instead of divining it from the outside\n# TODO tests!!!\ndef from_environment(name, prefix, no_builds=False, ignore_channels=False):\n \"\"\"\n Get environment object from prefix\n Args:\n name: The name of environment\n prefix: The path of prefix\n no_builds: Whether has build requirement\n ignore_channels: whether ignore_channels\n\n Returns: Environment object\n \"\"\"\n installed = linked(prefix, ignore_channels=ignore_channels)\n conda_pkgs = copy(installed)\n # json=True hides the output, data is added to installed\n add_pip_installed(prefix, installed, json=True)\n\n pip_pkgs = sorted(installed - conda_pkgs)\n\n if no_builds:\n dependencies = ['='.join(a.quad[0:3]) for a in sorted(conda_pkgs)]\n else:\n dependencies = ['='.join(a.quad[0:3]) for a in sorted(conda_pkgs)]\n if len(pip_pkgs) > 0:\n dependencies.append({'pip': ['=='.join(a.rsplit('-', 2)[:2]) for a in pip_pkgs]})\n # conda uses ruamel_yaml which returns a ruamel_yaml.comments.CommentedSeq\n # this doesn't dump correctly using pyyaml\n channels = list(context.channels)\n if not ignore_channels:\n for dist in conda_pkgs:\n if dist.channel not in channels:\n channels.insert(0, dist.channel)\n return Environment(name=name, dependencies=dependencies, channels=channels, prefix=prefix)\n\n\ndef from_yaml(yamlstr, **kwargs):\n \"\"\"Load and return a ``Environment`` from a given ``yaml string``\"\"\"\n data = yaml.load(yamlstr)\n if kwargs is not None:\n for key, value in kwargs.items():\n data[key] = value\n return Environment(**data)\n\n\ndef 
from_file(filename):\n if not os.path.exists(filename):\n raise exceptions.EnvironmentFileNotFound(filename)\n with open(filename, 'r') as fp:\n yamlstr = fp.read()\n return from_yaml(yamlstr, filename=filename)\n\n\n# TODO test explicitly\nclass Dependencies(OrderedDict):\n def __init__(self, raw, *args, **kwargs):\n super(Dependencies, self).__init__(*args, **kwargs)\n self.raw = raw\n self.parse()\n\n def parse(self):\n if not self.raw:\n return\n\n self.update({'conda': []})\n\n for line in self.raw:\n if isinstance(line, dict):\n self.update(line)\n else:\n self['conda'].append(common.arg2spec(line))\n\n # TODO only append when it's not already present\n def add(self, package_name):\n self.raw.append(package_name)\n self.parse()\n\n\ndef unique(seq, key=None):\n \"\"\" Return only unique elements of a sequence\n >>> tuple(unique((1, 2, 3)))\n (1, 2, 3)\n >>> tuple(unique((1, 2, 1, 3)))\n (1, 2, 3)\n Uniqueness can be defined by key keyword\n >>> tuple(unique(['cat', 'mouse', 'dog', 'hen'], key=len))\n ('cat', 'mouse')\n \"\"\"\n seen = set()\n seen_add = seen.add\n if key is None:\n for item in seq:\n if item not in seen:\n seen_add(item)\n yield item\n else: # calculate key\n for item in seq:\n val = key(item)\n if val not in seen:\n seen_add(val)\n yield item\n\n\nclass Environment(object):\n def __init__(self, name=None, filename=None, channels=None,\n dependencies=None, prefix=None):\n self.name = name\n self.filename = filename\n self.prefix = prefix\n self.dependencies = Dependencies(dependencies)\n\n if channels is None:\n channels = []\n self.channels = channels\n\n def add_channels(self, channels):\n self.channels = list(unique(chain.from_iterable((channels, self.channels))))\n\n def remove_channels(self):\n self.channels = []\n\n def to_dict(self):\n d = yaml.dict([('name', self.name)])\n if self.channels:\n d['channels'] = self.channels\n if self.dependencies:\n d['dependencies'] = self.dependencies.raw\n if self.prefix:\n d['prefix'] = self.prefix\n return d\n\n def to_yaml(self, stream=None):\n d = self.to_dict()\n out = compat.u(yaml.dump(d, default_flow_style=False))\n if stream is None:\n return out\n stream.write(compat.b(out, encoding=\"utf-8\"))\n\n def save(self):\n with open(self.filename, \"wb\") as fp:\n self.to_yaml(stream=fp)\n", "path": "conda_env/env.py"}], "after_files": [{"content": "from __future__ import absolute_import, print_function\n\nimport os\nfrom collections import OrderedDict\nfrom conda.base.context import context\nfrom conda.cli import common # TODO: this should never have to import form conda.cli\nfrom conda.core.linked_data import linked\nfrom copy import copy\nfrom itertools import chain\n\nfrom . 
import compat, exceptions, yaml\nfrom .pip_util import add_pip_installed\n\ndef load_from_directory(directory):\n \"\"\"Load and return an ``Environment`` from a given ``directory``\"\"\"\n files = ['environment.yml', 'environment.yaml']\n while True:\n for f in files:\n try:\n return from_file(os.path.join(directory, f))\n except exceptions.EnvironmentFileNotFound:\n pass\n old_directory = directory\n directory = os.path.dirname(directory)\n if directory == old_directory:\n break\n raise exceptions.EnvironmentFileNotFound(files[0])\n\n\n# TODO This should lean more on conda instead of divining it from the outside\n# TODO tests!!!\ndef from_environment(name, prefix, no_builds=False, ignore_channels=False):\n \"\"\"\n Get environment object from prefix\n Args:\n name: The name of environment\n prefix: The path of prefix\n no_builds: Whether has build requirement\n ignore_channels: whether ignore_channels\n\n Returns: Environment object\n \"\"\"\n installed = linked(prefix, ignore_channels=ignore_channels)\n conda_pkgs = copy(installed)\n # json=True hides the output, data is added to installed\n add_pip_installed(prefix, installed, json=True)\n\n pip_pkgs = sorted(installed - conda_pkgs)\n\n if no_builds:\n dependencies = ['='.join((a.name, a.version)) for a in sorted(conda_pkgs)]\n else:\n dependencies = ['='.join((a.name, a.version, a.build)) for a in sorted(conda_pkgs)]\n if len(pip_pkgs) > 0:\n dependencies.append({'pip': ['=='.join(a.rsplit('-', 2)[:2]) for a in pip_pkgs]})\n # conda uses ruamel_yaml which returns a ruamel_yaml.comments.CommentedSeq\n # this doesn't dump correctly using pyyaml\n channels = list(context.channels)\n if not ignore_channels:\n for dist in conda_pkgs:\n if dist.channel not in channels:\n channels.insert(0, dist.channel)\n return Environment(name=name, dependencies=dependencies, channels=channels, prefix=prefix)\n\n\ndef from_yaml(yamlstr, **kwargs):\n \"\"\"Load and return a ``Environment`` from a given ``yaml string``\"\"\"\n data = yaml.load(yamlstr)\n if kwargs is not None:\n for key, value in kwargs.items():\n data[key] = value\n return Environment(**data)\n\n\ndef from_file(filename):\n if not os.path.exists(filename):\n raise exceptions.EnvironmentFileNotFound(filename)\n with open(filename, 'r') as fp:\n yamlstr = fp.read()\n return from_yaml(yamlstr, filename=filename)\n\n\n# TODO test explicitly\nclass Dependencies(OrderedDict):\n def __init__(self, raw, *args, **kwargs):\n super(Dependencies, self).__init__(*args, **kwargs)\n self.raw = raw\n self.parse()\n\n def parse(self):\n if not self.raw:\n return\n\n self.update({'conda': []})\n\n for line in self.raw:\n if isinstance(line, dict):\n self.update(line)\n else:\n self['conda'].append(common.arg2spec(line))\n\n # TODO only append when it's not already present\n def add(self, package_name):\n self.raw.append(package_name)\n self.parse()\n\n\ndef unique(seq, key=None):\n \"\"\" Return only unique elements of a sequence\n >>> tuple(unique((1, 2, 3)))\n (1, 2, 3)\n >>> tuple(unique((1, 2, 1, 3)))\n (1, 2, 3)\n Uniqueness can be defined by key keyword\n >>> tuple(unique(['cat', 'mouse', 'dog', 'hen'], key=len))\n ('cat', 'mouse')\n \"\"\"\n seen = set()\n seen_add = seen.add\n if key is None:\n for item in seq:\n if item not in seen:\n seen_add(item)\n yield item\n else: # calculate key\n for item in seq:\n val = key(item)\n if val not in seen:\n seen_add(val)\n yield item\n\n\nclass Environment(object):\n def __init__(self, name=None, filename=None, channels=None,\n dependencies=None, 
prefix=None):\n self.name = name\n self.filename = filename\n self.prefix = prefix\n self.dependencies = Dependencies(dependencies)\n\n if channels is None:\n channels = []\n self.channels = channels\n\n def add_channels(self, channels):\n self.channels = list(unique(chain.from_iterable((channels, self.channels))))\n\n def remove_channels(self):\n self.channels = []\n\n def to_dict(self):\n d = yaml.dict([('name', self.name)])\n if self.channels:\n d['channels'] = self.channels\n if self.dependencies:\n d['dependencies'] = self.dependencies.raw\n if self.prefix:\n d['prefix'] = self.prefix\n return d\n\n def to_yaml(self, stream=None):\n d = self.to_dict()\n out = compat.u(yaml.dump(d, default_flow_style=False))\n if stream is None:\n return out\n stream.write(compat.b(out, encoding=\"utf-8\"))\n\n def save(self):\n with open(self.filename, \"wb\") as fp:\n self.to_yaml(stream=fp)\n", "path": "conda_env/env.py"}]} | 2,018 | 222 |
gh_patches_debug_4221 | rasdani/github-patches | git_diff | pystiche__pystiche-534 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Setup CI workflow to build and upload galleries
Although we have logic to download pre-built galleries in the documentation
https://github.com/pystiche/pystiche/blob/65f4d787e44b1ffbf7e5b6e48298ed8c7460e5a9/docs/source/conf.py#L160-L166
the builder hasn't been running for quite some time, because I no longer have access to the infrastructure.
Preferably, we should have a CI workflow that does this. The problem is that without a GPU our gallery takes forever to build. Since CI machines that have a GPU are not free (at least I couldn't find any), we probably need to spend some money to achieve this.
So far I came up with two possible solutions:
1. Build our own custom build server with a GPU and run a self-hosted GitHub Actions workflow on it. This is problematic for a couple of reasons:
1. GitHub actually warns not to do that in public repositories due to security concerns.
 2. Buying the machine and especially the GPU is quite expensive, and as long as no one sponsors this, I'm currently not willing to do this out of my own pocket.
 3. We need to maintain the server, which will take time away from doing something else on `pystiche`.
2. Spin up a cloud instance with GPU to do the building for us. This solves all the issues of 1. with the disadvantage of being harder to set up. But this disadvantage also vanishes if we use something like [`cirun.io`](https://cirun.io) (cc @aktech). It looks like, other than specifying the type of cloud instance we want to use, we can simply use the default GitHub Actions workflow syntax, which is amazing.
The only roadblock I'm currently seeing is that we probably don't need a run every day. If you look at the past commit history, there are quite a few days where no commit is merged, so we would be wasting money by running the build every day. My current idea is to run a default CPU workflow that checks if the current commit was already built and only spins up the cloud instance if that is not the case. From some quick research into this, we can probably use https://github.com/peter-evans/repository-dispatch as a dispatcher.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/source/conf.py`
Content:
```
1 import contextlib
2 import os
3 import re
4 import shutil
5 import warnings
6 from datetime import datetime
7 from distutils.util import strtobool
8 from importlib_metadata import metadata as extract_metadata
9 from os import path
10 from unittest import mock
11 from urllib.parse import urljoin
12
13 from sphinx_gallery.sorting import ExampleTitleSortKey, ExplicitOrder
14 from tqdm import tqdm
15
16 import torch
17
18 from pystiche.misc import download_file
19
20 HERE = path.dirname(__file__)
21 PROJECT_ROOT = path.abspath(path.join(HERE, "..", ".."))
22
23
24 def get_bool_env_var(name, default=False):
25 try:
26 return bool(strtobool(os.environ[name]))
27 except KeyError:
28 return default
29
30
31 GITHUB_ACTIONS = get_bool_env_var("GITHUB_ACTIONS")
32 RTD = get_bool_env_var("READTHEDOCS")
33 CI = GITHUB_ACTIONS or RTD or get_bool_env_var("CI")
34
35
36 def project():
37 extension = None
38
39 metadata = extract_metadata("pystiche")
40 project = metadata["name"]
41 author = metadata["author"]
42 copyright = f"{datetime.now().year}, {author}"
43 release = metadata["version"]
44 version = release.split(".dev")[0]
45 config = dict(
46 project=project,
47 author=author,
48 copyright=copyright,
49 release=release,
50 version=version,
51 )
52
53 return extension, config
54
55
56 def autodoc():
57 extensions = [
58 "sphinx.ext.autodoc",
59 "sphinx.ext.napoleon",
60 "sphinx_autodoc_typehints",
61 ]
62
63 config = None
64
65 return extensions, config
66
67
68 def intersphinx():
69 extension = "sphinx.ext.intersphinx"
70 config = dict(
71 intersphinx_mapping={
72 "python": ("https://docs.python.org/3.6", None),
73 "torch": ("https://pytorch.org/docs/stable/", None),
74 "torchvision": ("https://pytorch.org/docs/stable/", None),
75 "PIL": ("https://pillow.readthedocs.io/en/stable/", None),
76 "numpy": ("https://numpy.org/doc/1.18/", None),
77 "requests": ("https://requests.readthedocs.io/en/stable/", None),
78 "matplotlib": ("https://matplotlib.org", None),
79 }
80 )
81 return extension, config
82
83
84 def html():
85 extension = None
86
87 config = dict(html_theme="sphinx_rtd_theme")
88
89 return extension, config
90
91
92 def latex():
93 extension = None
94
95 with open(path.join(HERE, "custom_cmds.tex"), "r") as fh:
96 custom_cmds = fh.read()
97 config = dict(
98 latex_elements={"preamble": custom_cmds},
99 mathjax_inline=[r"\(" + custom_cmds, r"\)"],
100 mathjax_display=[r"\[" + custom_cmds, r"\]"],
101 )
102
103 return extension, config
104
105
106 def bibtex():
107 extension = "sphinxcontrib.bibtex"
108
109 config = dict(bibtex_bibfiles=["references.bib"])
110
111 return extension, config
112
113
114 def doctest():
115 extension = "sphinx.ext.doctest"
116
117 doctest_global_setup = """
118 import torch
119 from torch import nn
120
121 import pystiche
122
123 import warnings
124 warnings.filterwarnings("ignore", category=FutureWarning)
125
126 from unittest import mock
127
128 patcher = mock.patch(
129 "pystiche.enc.models.utils.ModelMultiLayerEncoder.load_state_dict_from_url"
130 )
131 patcher.start()
132 """
133
134 doctest_global_cleanup = """
135 mock.patch.stopall()
136 """
137 config = dict(
138 doctest_global_setup=doctest_global_setup,
139 doctest_global_cleanup=doctest_global_cleanup,
140 )
141
142 return extension, config
143
144
145 def sphinx_gallery():
146 extension = "sphinx_gallery.gen_gallery"
147
148 plot_gallery = get_bool_env_var("PYSTICHE_PLOT_GALLERY", default=not CI)
149 download_gallery = get_bool_env_var("PYSTICHE_DOWNLOAD_GALLERY", default=CI)
150
151 def download():
152 nonlocal extension
153 nonlocal plot_gallery
154
155 # version and release are available as soon as the project config is loaded
156 version = globals()["version"]
157 release = globals()["release"]
158
159 base = "https://download.pystiche.org/galleries/"
160 is_dev = version != release
161 file = "master.zip" if is_dev else f"v{version}.zip"
162
163 url = urljoin(base, file)
164 print(f"Downloading pre-built galleries from {url}")
165 download_file(url, file)
166
167 with contextlib.suppress(FileNotFoundError):
168 shutil.rmtree(path.join(HERE, "galleries"))
169 shutil.unpack_archive(file, extract_dir=".")
170 os.remove(file)
171
172 extension = "sphinx_gallery.load_style"
173 plot_gallery = False
174
175 def show_cuda_memory(func):
176 torch.cuda.reset_peak_memory_stats()
177 out = func()
178
179 stats = torch.cuda.memory_stats()
180 peak_bytes_usage = stats["allocated_bytes.all.peak"]
181 memory = peak_bytes_usage / 1024 ** 2
182
183 return memory, out
184
185 def patch_tqdm():
186 patchers = [mock.patch("tqdm.std._supports_unicode", return_value=True)]
187
188 display = tqdm.display
189 close = tqdm.close
190 displayed = set()
191
192 def display_only_last(self, msg=None, pos=None):
193 if self.n != self.total or self in displayed:
194 return
195
196 display(self, msg=msg, pos=pos)
197 displayed.add(self)
198
199 patchers.append(mock.patch("tqdm.std.tqdm.display", new=display_only_last))
200
201 def close_(self):
202 close(self)
203 with contextlib.suppress(KeyError):
204 displayed.remove(self)
205
206 patchers.append(mock.patch("tqdm.std.tqdm.close", new=close_))
207
208 for patcher in patchers:
209 patcher.start()
210
211 class PysticheExampleTitleSortKey(ExampleTitleSortKey):
212 def __call__(self, filename):
213 # The beginner example *without* pystiche is placed before the example
214 # *with* to clarify the narrative.
215 if filename == "example_nst_without_pystiche.py":
216 return "1"
217 elif filename == "example_nst_with_pystiche.py":
218 return "2"
219 else:
220 return super().__call__(filename)
221
222 def filter_warnings():
223 # See #https://github.com/pytorch/pytorch/issues/60053
224 warnings.filterwarnings(
225 "ignore",
226 category=UserWarning,
227 message=(
228 re.escape(
229 "Named tensors and all their associated APIs are an experimental "
230 "feature and subject to change. Please do not use them for "
231 "anything important until they are released as stable. (Triggered "
232 "internally at /pytorch/c10/core/TensorImpl.h:1156.)"
233 )
234 ),
235 )
236
237 if download_gallery:
238 download()
239
240 if plot_gallery and not torch.cuda.is_available():
241 msg = (
242 "The galleries will be built, but CUDA is not available. "
243 "This will take a long time."
244 )
245 print(msg)
246
247 sphinx_gallery_conf = {
248 "examples_dirs": path.join(PROJECT_ROOT, "examples"),
249 "gallery_dirs": path.join("galleries", "examples"),
250 "filename_pattern": re.escape(os.sep) + r"example_\w+[.]py$",
251 "ignore_pattern": re.escape(os.sep) + r"_\w+[.]py$",
252 "line_numbers": True,
253 "remove_config_comments": True,
254 "plot_gallery": plot_gallery,
255 "subsection_order": ExplicitOrder(
256 [
257 path.join("..", "..", "examples", sub_gallery)
258 for sub_gallery in ("beginner", "advanced")
259 ]
260 ),
261 "within_subsection_order": PysticheExampleTitleSortKey,
262 "show_memory": show_cuda_memory if torch.cuda.is_available() else True,
263 }
264
265 config = dict(sphinx_gallery_conf=sphinx_gallery_conf)
266 filter_warnings()
267
268 patch_tqdm()
269 filter_warnings()
270
271 return extension, config
272
273
274 def logo():
275 extension = None
276
277 config = dict(html_logo="../../logo.svg")
278
279 return extension, config
280
281
282 extensions = []
283 for loader in (
284 project,
285 autodoc,
286 intersphinx,
287 html,
288 latex,
289 bibtex,
290 doctest,
291 sphinx_gallery,
292 logo,
293 ):
294 extension, config = loader()
295
296 if extension:
297 if isinstance(extension, str):
298 extension = (extension,)
299 extensions.extend(extension)
300
301 if config:
302 globals().update(config)
303
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -158,7 +158,7 @@
base = "https://download.pystiche.org/galleries/"
is_dev = version != release
- file = "master.zip" if is_dev else f"v{version}.zip"
+ file = "main.zip" if is_dev else f"v{version}.zip"
url = urljoin(base, file)
print(f"Downloading pre-built galleries from {url}")
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -158,7 +158,7 @@\n \n base = \"https://download.pystiche.org/galleries/\"\n is_dev = version != release\n- file = \"master.zip\" if is_dev else f\"v{version}.zip\"\n+ file = \"main.zip\" if is_dev else f\"v{version}.zip\"\n \n url = urljoin(base, file)\n print(f\"Downloading pre-built galleries from {url}\")\n", "issue": "Setup CI workflow to build and upload galleries\nAlthough we have logic to download pre-built galleries in the documentation\r\n\r\nhttps://github.com/pystiche/pystiche/blob/65f4d787e44b1ffbf7e5b6e48298ed8c7460e5a9/docs/source/conf.py#L160-L166\r\n\r\nthe builder isn't running for quite some time, because I have no longer have access to the infrastructure.\r\n\r\nPreferably, we should have a CI workflow that does this. The problem is, that without a GPU our gallery takes forever to build. Since CI machines that have a GPU are not free (at least I couldn't find any), we probably need to spend some money to achieve this. \r\n\r\nSo far I came up with two possible solutions:\r\n\r\n1. Build our own custom build server with a GPU and run a self-hosted GitHub Actions workflow on it. This is problematic for a couple of reasons:\r\n 1. GitHub actually warns not to do that in public repositories due to security concerns.\r\n 2. Buying the machine and especially the GPU is quite expensive and as long no one sponsors this, I'm currently not willing to do this out of my own pocket.\r\n 3. We need to maintain the server which will take some time from doing something else on `pystiche`.\r\n2. Spin up a cloud instance with GPU to do the building for us. This solves all the issues of 1. with the disadvantage of being harder to setup. But this also vanishes if we use something like [`cirun.io`](https://cirun.io) (cc @aktech). It looks like other than specifying the type of cloud instance we want to use we can simply use the default GitHub Actions workflow syntax, which is amazing.\r\n\r\nThe only roadblock I'm currently seeing is that we probably don't need a run every day. If you look at the past commit history, there are quite a few days were no commit is merged so we would be wasting money by running the build every day. My current idea is to run a default CPU workflow that checks if the current commit was already built and only if that is not the case spins up the could instance. 
From a quick research into this, we probably can use https://github.com/peter-evans/repository-dispatch is a dispatcher.\r\n\n", "before_files": [{"content": "import contextlib\nimport os\nimport re\nimport shutil\nimport warnings\nfrom datetime import datetime\nfrom distutils.util import strtobool\nfrom importlib_metadata import metadata as extract_metadata\nfrom os import path\nfrom unittest import mock\nfrom urllib.parse import urljoin\n\nfrom sphinx_gallery.sorting import ExampleTitleSortKey, ExplicitOrder\nfrom tqdm import tqdm\n\nimport torch\n\nfrom pystiche.misc import download_file\n\nHERE = path.dirname(__file__)\nPROJECT_ROOT = path.abspath(path.join(HERE, \"..\", \"..\"))\n\n\ndef get_bool_env_var(name, default=False):\n try:\n return bool(strtobool(os.environ[name]))\n except KeyError:\n return default\n\n\nGITHUB_ACTIONS = get_bool_env_var(\"GITHUB_ACTIONS\")\nRTD = get_bool_env_var(\"READTHEDOCS\")\nCI = GITHUB_ACTIONS or RTD or get_bool_env_var(\"CI\")\n\n\ndef project():\n extension = None\n\n metadata = extract_metadata(\"pystiche\")\n project = metadata[\"name\"]\n author = metadata[\"author\"]\n copyright = f\"{datetime.now().year}, {author}\"\n release = metadata[\"version\"]\n version = release.split(\".dev\")[0]\n config = dict(\n project=project,\n author=author,\n copyright=copyright,\n release=release,\n version=version,\n )\n\n return extension, config\n\n\ndef autodoc():\n extensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.napoleon\",\n \"sphinx_autodoc_typehints\",\n ]\n\n config = None\n\n return extensions, config\n\n\ndef intersphinx():\n extension = \"sphinx.ext.intersphinx\"\n config = dict(\n intersphinx_mapping={\n \"python\": (\"https://docs.python.org/3.6\", None),\n \"torch\": (\"https://pytorch.org/docs/stable/\", None),\n \"torchvision\": (\"https://pytorch.org/docs/stable/\", None),\n \"PIL\": (\"https://pillow.readthedocs.io/en/stable/\", None),\n \"numpy\": (\"https://numpy.org/doc/1.18/\", None),\n \"requests\": (\"https://requests.readthedocs.io/en/stable/\", None),\n \"matplotlib\": (\"https://matplotlib.org\", None),\n }\n )\n return extension, config\n\n\ndef html():\n extension = None\n\n config = dict(html_theme=\"sphinx_rtd_theme\")\n\n return extension, config\n\n\ndef latex():\n extension = None\n\n with open(path.join(HERE, \"custom_cmds.tex\"), \"r\") as fh:\n custom_cmds = fh.read()\n config = dict(\n latex_elements={\"preamble\": custom_cmds},\n mathjax_inline=[r\"\\(\" + custom_cmds, r\"\\)\"],\n mathjax_display=[r\"\\[\" + custom_cmds, r\"\\]\"],\n )\n\n return extension, config\n\n\ndef bibtex():\n extension = \"sphinxcontrib.bibtex\"\n\n config = dict(bibtex_bibfiles=[\"references.bib\"])\n\n return extension, config\n\n\ndef doctest():\n extension = \"sphinx.ext.doctest\"\n\n doctest_global_setup = \"\"\"\nimport torch\nfrom torch import nn\n\nimport pystiche\n\nimport warnings\nwarnings.filterwarnings(\"ignore\", category=FutureWarning)\n\nfrom unittest import mock\n\npatcher = mock.patch(\n \"pystiche.enc.models.utils.ModelMultiLayerEncoder.load_state_dict_from_url\"\n)\npatcher.start()\n\"\"\"\n\n doctest_global_cleanup = \"\"\"\nmock.patch.stopall()\n\"\"\"\n config = dict(\n doctest_global_setup=doctest_global_setup,\n doctest_global_cleanup=doctest_global_cleanup,\n )\n\n return extension, config\n\n\ndef sphinx_gallery():\n extension = \"sphinx_gallery.gen_gallery\"\n\n plot_gallery = get_bool_env_var(\"PYSTICHE_PLOT_GALLERY\", default=not CI)\n download_gallery = get_bool_env_var(\"PYSTICHE_DOWNLOAD_GALLERY\", 
default=CI)\n\n def download():\n nonlocal extension\n nonlocal plot_gallery\n\n # version and release are available as soon as the project config is loaded\n version = globals()[\"version\"]\n release = globals()[\"release\"]\n\n base = \"https://download.pystiche.org/galleries/\"\n is_dev = version != release\n file = \"master.zip\" if is_dev else f\"v{version}.zip\"\n\n url = urljoin(base, file)\n print(f\"Downloading pre-built galleries from {url}\")\n download_file(url, file)\n\n with contextlib.suppress(FileNotFoundError):\n shutil.rmtree(path.join(HERE, \"galleries\"))\n shutil.unpack_archive(file, extract_dir=\".\")\n os.remove(file)\n\n extension = \"sphinx_gallery.load_style\"\n plot_gallery = False\n\n def show_cuda_memory(func):\n torch.cuda.reset_peak_memory_stats()\n out = func()\n\n stats = torch.cuda.memory_stats()\n peak_bytes_usage = stats[\"allocated_bytes.all.peak\"]\n memory = peak_bytes_usage / 1024 ** 2\n\n return memory, out\n\n def patch_tqdm():\n patchers = [mock.patch(\"tqdm.std._supports_unicode\", return_value=True)]\n\n display = tqdm.display\n close = tqdm.close\n displayed = set()\n\n def display_only_last(self, msg=None, pos=None):\n if self.n != self.total or self in displayed:\n return\n\n display(self, msg=msg, pos=pos)\n displayed.add(self)\n\n patchers.append(mock.patch(\"tqdm.std.tqdm.display\", new=display_only_last))\n\n def close_(self):\n close(self)\n with contextlib.suppress(KeyError):\n displayed.remove(self)\n\n patchers.append(mock.patch(\"tqdm.std.tqdm.close\", new=close_))\n\n for patcher in patchers:\n patcher.start()\n\n class PysticheExampleTitleSortKey(ExampleTitleSortKey):\n def __call__(self, filename):\n # The beginner example *without* pystiche is placed before the example\n # *with* to clarify the narrative.\n if filename == \"example_nst_without_pystiche.py\":\n return \"1\"\n elif filename == \"example_nst_with_pystiche.py\":\n return \"2\"\n else:\n return super().__call__(filename)\n\n def filter_warnings():\n # See #https://github.com/pytorch/pytorch/issues/60053\n warnings.filterwarnings(\n \"ignore\",\n category=UserWarning,\n message=(\n re.escape(\n \"Named tensors and all their associated APIs are an experimental \"\n \"feature and subject to change. Please do not use them for \"\n \"anything important until they are released as stable. (Triggered \"\n \"internally at /pytorch/c10/core/TensorImpl.h:1156.)\"\n )\n ),\n )\n\n if download_gallery:\n download()\n\n if plot_gallery and not torch.cuda.is_available():\n msg = (\n \"The galleries will be built, but CUDA is not available. 
\"\n \"This will take a long time.\"\n )\n print(msg)\n\n sphinx_gallery_conf = {\n \"examples_dirs\": path.join(PROJECT_ROOT, \"examples\"),\n \"gallery_dirs\": path.join(\"galleries\", \"examples\"),\n \"filename_pattern\": re.escape(os.sep) + r\"example_\\w+[.]py$\",\n \"ignore_pattern\": re.escape(os.sep) + r\"_\\w+[.]py$\",\n \"line_numbers\": True,\n \"remove_config_comments\": True,\n \"plot_gallery\": plot_gallery,\n \"subsection_order\": ExplicitOrder(\n [\n path.join(\"..\", \"..\", \"examples\", sub_gallery)\n for sub_gallery in (\"beginner\", \"advanced\")\n ]\n ),\n \"within_subsection_order\": PysticheExampleTitleSortKey,\n \"show_memory\": show_cuda_memory if torch.cuda.is_available() else True,\n }\n\n config = dict(sphinx_gallery_conf=sphinx_gallery_conf)\n filter_warnings()\n\n patch_tqdm()\n filter_warnings()\n\n return extension, config\n\n\ndef logo():\n extension = None\n\n config = dict(html_logo=\"../../logo.svg\")\n\n return extension, config\n\n\nextensions = []\nfor loader in (\n project,\n autodoc,\n intersphinx,\n html,\n latex,\n bibtex,\n doctest,\n sphinx_gallery,\n logo,\n):\n extension, config = loader()\n\n if extension:\n if isinstance(extension, str):\n extension = (extension,)\n extensions.extend(extension)\n\n if config:\n globals().update(config)\n", "path": "docs/source/conf.py"}], "after_files": [{"content": "import contextlib\nimport os\nimport re\nimport shutil\nimport warnings\nfrom datetime import datetime\nfrom distutils.util import strtobool\nfrom importlib_metadata import metadata as extract_metadata\nfrom os import path\nfrom unittest import mock\nfrom urllib.parse import urljoin\n\nfrom sphinx_gallery.sorting import ExampleTitleSortKey, ExplicitOrder\nfrom tqdm import tqdm\n\nimport torch\n\nfrom pystiche.misc import download_file\n\nHERE = path.dirname(__file__)\nPROJECT_ROOT = path.abspath(path.join(HERE, \"..\", \"..\"))\n\n\ndef get_bool_env_var(name, default=False):\n try:\n return bool(strtobool(os.environ[name]))\n except KeyError:\n return default\n\n\nGITHUB_ACTIONS = get_bool_env_var(\"GITHUB_ACTIONS\")\nRTD = get_bool_env_var(\"READTHEDOCS\")\nCI = GITHUB_ACTIONS or RTD or get_bool_env_var(\"CI\")\n\n\ndef project():\n extension = None\n\n metadata = extract_metadata(\"pystiche\")\n project = metadata[\"name\"]\n author = metadata[\"author\"]\n copyright = f\"{datetime.now().year}, {author}\"\n release = metadata[\"version\"]\n version = release.split(\".dev\")[0]\n config = dict(\n project=project,\n author=author,\n copyright=copyright,\n release=release,\n version=version,\n )\n\n return extension, config\n\n\ndef autodoc():\n extensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.napoleon\",\n \"sphinx_autodoc_typehints\",\n ]\n\n config = None\n\n return extensions, config\n\n\ndef intersphinx():\n extension = \"sphinx.ext.intersphinx\"\n config = dict(\n intersphinx_mapping={\n \"python\": (\"https://docs.python.org/3.6\", None),\n \"torch\": (\"https://pytorch.org/docs/stable/\", None),\n \"torchvision\": (\"https://pytorch.org/docs/stable/\", None),\n \"PIL\": (\"https://pillow.readthedocs.io/en/stable/\", None),\n \"numpy\": (\"https://numpy.org/doc/1.18/\", None),\n \"requests\": (\"https://requests.readthedocs.io/en/stable/\", None),\n \"matplotlib\": (\"https://matplotlib.org\", None),\n }\n )\n return extension, config\n\n\ndef html():\n extension = None\n\n config = dict(html_theme=\"sphinx_rtd_theme\")\n\n return extension, config\n\n\ndef latex():\n extension = None\n\n with open(path.join(HERE, 
\"custom_cmds.tex\"), \"r\") as fh:\n custom_cmds = fh.read()\n config = dict(\n latex_elements={\"preamble\": custom_cmds},\n mathjax_inline=[r\"\\(\" + custom_cmds, r\"\\)\"],\n mathjax_display=[r\"\\[\" + custom_cmds, r\"\\]\"],\n )\n\n return extension, config\n\n\ndef bibtex():\n extension = \"sphinxcontrib.bibtex\"\n\n config = dict(bibtex_bibfiles=[\"references.bib\"])\n\n return extension, config\n\n\ndef doctest():\n extension = \"sphinx.ext.doctest\"\n\n doctest_global_setup = \"\"\"\nimport torch\nfrom torch import nn\n\nimport pystiche\n\nimport warnings\nwarnings.filterwarnings(\"ignore\", category=FutureWarning)\n\nfrom unittest import mock\n\npatcher = mock.patch(\n \"pystiche.enc.models.utils.ModelMultiLayerEncoder.load_state_dict_from_url\"\n)\npatcher.start()\n\"\"\"\n\n doctest_global_cleanup = \"\"\"\nmock.patch.stopall()\n\"\"\"\n config = dict(\n doctest_global_setup=doctest_global_setup,\n doctest_global_cleanup=doctest_global_cleanup,\n )\n\n return extension, config\n\n\ndef sphinx_gallery():\n extension = \"sphinx_gallery.gen_gallery\"\n\n plot_gallery = get_bool_env_var(\"PYSTICHE_PLOT_GALLERY\", default=not CI)\n download_gallery = get_bool_env_var(\"PYSTICHE_DOWNLOAD_GALLERY\", default=CI)\n\n def download():\n nonlocal extension\n nonlocal plot_gallery\n\n # version and release are available as soon as the project config is loaded\n version = globals()[\"version\"]\n release = globals()[\"release\"]\n\n base = \"https://download.pystiche.org/galleries/\"\n is_dev = version != release\n file = \"main.zip\" if is_dev else f\"v{version}.zip\"\n\n url = urljoin(base, file)\n print(f\"Downloading pre-built galleries from {url}\")\n download_file(url, file)\n\n with contextlib.suppress(FileNotFoundError):\n shutil.rmtree(path.join(HERE, \"galleries\"))\n shutil.unpack_archive(file, extract_dir=\".\")\n os.remove(file)\n\n extension = \"sphinx_gallery.load_style\"\n plot_gallery = False\n\n def show_cuda_memory(func):\n torch.cuda.reset_peak_memory_stats()\n out = func()\n\n stats = torch.cuda.memory_stats()\n peak_bytes_usage = stats[\"allocated_bytes.all.peak\"]\n memory = peak_bytes_usage / 1024 ** 2\n\n return memory, out\n\n def patch_tqdm():\n patchers = [mock.patch(\"tqdm.std._supports_unicode\", return_value=True)]\n\n display = tqdm.display\n close = tqdm.close\n displayed = set()\n\n def display_only_last(self, msg=None, pos=None):\n if self.n != self.total or self in displayed:\n return\n\n display(self, msg=msg, pos=pos)\n displayed.add(self)\n\n patchers.append(mock.patch(\"tqdm.std.tqdm.display\", new=display_only_last))\n\n def close_(self):\n close(self)\n with contextlib.suppress(KeyError):\n displayed.remove(self)\n\n patchers.append(mock.patch(\"tqdm.std.tqdm.close\", new=close_))\n\n for patcher in patchers:\n patcher.start()\n\n class PysticheExampleTitleSortKey(ExampleTitleSortKey):\n def __call__(self, filename):\n # The beginner example *without* pystiche is placed before the example\n # *with* to clarify the narrative.\n if filename == \"example_nst_without_pystiche.py\":\n return \"1\"\n elif filename == \"example_nst_with_pystiche.py\":\n return \"2\"\n else:\n return super().__call__(filename)\n\n def filter_warnings():\n # See #https://github.com/pytorch/pytorch/issues/60053\n warnings.filterwarnings(\n \"ignore\",\n category=UserWarning,\n message=(\n re.escape(\n \"Named tensors and all their associated APIs are an experimental \"\n \"feature and subject to change. 
Please do not use them for \"\n \"anything important until they are released as stable. (Triggered \"\n \"internally at /pytorch/c10/core/TensorImpl.h:1156.)\"\n )\n ),\n )\n\n if download_gallery:\n download()\n\n if plot_gallery and not torch.cuda.is_available():\n msg = (\n \"The galleries will be built, but CUDA is not available. \"\n \"This will take a long time.\"\n )\n print(msg)\n\n sphinx_gallery_conf = {\n \"examples_dirs\": path.join(PROJECT_ROOT, \"examples\"),\n \"gallery_dirs\": path.join(\"galleries\", \"examples\"),\n \"filename_pattern\": re.escape(os.sep) + r\"example_\\w+[.]py$\",\n \"ignore_pattern\": re.escape(os.sep) + r\"_\\w+[.]py$\",\n \"line_numbers\": True,\n \"remove_config_comments\": True,\n \"plot_gallery\": plot_gallery,\n \"subsection_order\": ExplicitOrder(\n [\n path.join(\"..\", \"..\", \"examples\", sub_gallery)\n for sub_gallery in (\"beginner\", \"advanced\")\n ]\n ),\n \"within_subsection_order\": PysticheExampleTitleSortKey,\n \"show_memory\": show_cuda_memory if torch.cuda.is_available() else True,\n }\n\n config = dict(sphinx_gallery_conf=sphinx_gallery_conf)\n filter_warnings()\n\n patch_tqdm()\n filter_warnings()\n\n return extension, config\n\n\ndef logo():\n extension = None\n\n config = dict(html_logo=\"../../logo.svg\")\n\n return extension, config\n\n\nextensions = []\nfor loader in (\n project,\n autodoc,\n intersphinx,\n html,\n latex,\n bibtex,\n doctest,\n sphinx_gallery,\n logo,\n):\n extension, config = loader()\n\n if extension:\n if isinstance(extension, str):\n extension = (extension,)\n extensions.extend(extension)\n\n if config:\n globals().update(config)\n", "path": "docs/source/conf.py"}]} | 3,438 | 127 |
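The issue comment embedded in the row above proposes wiring repositories together with https://github.com/peter-evans/repository-dispatch. As a minimal, hedged sketch of that mechanism (not part of the recorded patch itself), the snippet below sends a `repository_dispatch` event through the GitHub REST API, which is the same call that action wraps under the hood. The repository name, event type, and token environment variable are placeholder assumptions, not values taken from the dataset.

```python
# Hypothetical sketch: trigger a downstream workflow via a repository_dispatch event.
# OWNER_REPO, EVENT_TYPE, and GITHUB_TOKEN are placeholders for illustration only.
import os

import requests

OWNER_REPO = "example-org/example-repo"  # assumption: the repository to notify
EVENT_TYPE = "docs-rebuild"              # assumption: event name the receiving workflow listens for

response = requests.post(
    f"https://api.github.com/repos/{OWNER_REPO}/dispatches",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"token {os.environ['GITHUB_TOKEN']}",  # PAT with repo scope
    },
    json={
        "event_type": EVENT_TYPE,
        # Arbitrary data the receiving workflow can inspect via github.event.client_payload.
        "client_payload": {"ref": "main"},
    },
    timeout=10,
)
response.raise_for_status()  # GitHub answers 204 No Content on success
```

On the receiving side, a workflow would opt in with `on: repository_dispatch: types: [docs-rebuild]` and could read the extra data from `github.event.client_payload`.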